One-Shot Concept Learning by Simulating Evolutionary Instinct Development

Abrar Ahmed, South Forsyth High School, Cumming, Georgia 30041
Anish Bikmal, South Forsyth High School, Cumming, Georgia 30041

December 30, 2023

Object recognition has recently become a crucial part of machine learning and computer vision. The current approach to object recognition involves deep learning and uses Convolutional Neural Networks (CNNs) to learn the pixel patterns of objects implicitly through backpropagation. However, CNNs require thousands of examples in order to generalize successfully and often require heavy computing resources for training. This is rather sluggish when compared to humans' ability to generalize and learn new categories given just a single example. Additionally, CNNs make it difficult to explicitly, programmatically modify or intuitively interpret their learned representations. We propose a computational model that can successfully learn an object category from as few as one example and allows its learning style to be tailored explicitly to a scenario. Our model decomposes each image into two attributes: shape and color distribution. We then use a Bayesian criterion to probabilistically determine the likelihood of each category. The model takes each factor into account based on importance and calculates the conditional probability of the object belonging to each learned category. Our model is not only applicable to visual scenarios, but can also be implemented in a broader and more practical scope of situations, such as Natural Language Processing, as well as other settings where it is possible to retrieve and construct individual attributes. Because the only condition our model presents is the ability to retrieve and construct individual attributes such as shape and color, it can be applied to essentially any class of visual objects, no matter how large or small.

§ INTRODUCTION

Recognition is one of the most remarkable features of biological cognition. This important function is present not only in humans but also in animals with less developed cognitive abilities, such as birds. The ability to recognize surroundings, prey, and potential mates is critical to the survival of any creature regardless of environment or function. The recognition capabilities that these animals possess are unique in that they require very few training examples in order to learn a new category or concept. For example, after a chick sees its guardian capture a fish and eat it, it will learn that a fish means food. Furthermore, using this learned concept, the bird may associate similar organisms, like tadpoles, with food as well. People and animals can use concepts in richer ways than conventional computational models or algorithms - for action, imagination, and explanation. Humans and animals require few examples to learn when compared to even the latest machine models involving deep learning. Even though deep learning models are based on the biological brain, they require many more examples because they lack an important feature that animals are born with: instinct. Animals come pre-programmed with evolutionary instinct that allows them to learn concepts much faster, while deep learning models often initialize their weights randomly.
This gives animals a huge advantage when it comes to learning concepts and representations quickly [10]. Natural selection has gifted them with basic concepts that their ancestors learned through trial and error the hard way. We present a computational model that can successfully learn an object category from as few as one example and allows its learning style to be tailored explicitly to a scenario. Our model compensates for the evolutionary instincts that animals are born with. Just as different animals have different instincts, the instincts of our model are adaptable to different scenarios and tasks. Our model "is born with" an instinctive ability to differentiate objects based on color and shape. The model takes each factor into account based on importance and calculates the conditional probability of the object belonging to each learned category through a Bayesian logistic regression.

§ ONE-SHOT LEARNING

One-Shot Learning can be defined as the ability to use concepts in richer ways than conventional computational models or algorithms - for action, imagination, and explanation. It is a relatively new topic in the field of machine learning and computer vision. The goal of our model is to outperform traditional deep learning models by implementing this concept of one-shot learning in the field of object recognition. Our model successfully recognized a set of fruits and vegetables with "one shot", and it can use just one image to recognize not only fruits, but any image with the parameters of color and shape - a huge class of objects.

§ TECHNIQUES

§.§ Dataset

30 images of one category and 3 images of all other categories were obtained for training and testing the classifier.

§.§ Pre-processing

Each image was scaled to the same size. Canny Edge Detection was performed on each image to obtain an edge outline of each image [2]. Edge outlines were then fed into a binary thresholding function to ensure pixel values were either 0 or 255. The locations of pixels with a value of 255 were stored in an n-by-2 pixel coordinate array. The coordinates are then centered by subtracting the respective axis median from each column of the array. The height of the outline of each image is then scaled up to the height of the image with the greatest height, by multiplying the corresponding pixel coordinate array by a scale factor, to prevent errors due to differences in image magnification. The image outlines are used as masks on the original image to locate a color sample on the surface of each object. The color sample is processed to remove the alpha transparency values. The RGB color sample is then projected into the YUV color space, because YUV better represents human-perceived color similarity, as demonstrated by Podpora et al. [8]. The three-dimensional color sample matrix is then averaged to produce a three-dimensional average color vector for each image.

(Figure 1) Using Canny edge detection [2] to obtain the outline of an image.

[ 0.299     0.587     0.114   ]   [ R ]   [ Y' ]
[ -0.14713  -0.28886  0.436   ] × [ G ] = [ U  ]
[ 0.615     -0.51499  -0.10001]   [ B ]   [ V  ]

⟶ √((Y'_2-Y'_1)^2 + (U_2-U_1)^2 + (V_2-V_1)^2) = ϵ_i

Converting RGB values to YUV using a coefficient matrix, then calculating the Euclidean distance.

§.§ Procedure

The Modified Hausdorff Distance (MHD) developed by Dubuisson and Jain [1] is then used as a morphological similarity metric to gauge the difference between two image outlines.
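For concreteness, the following is a minimal Python sketch of the pre-processing pipeline and the two similarity metrics described above. The use of OpenCV and SciPy, the Canny thresholds, and all function names are our own assumptions; the paper does not specify an implementation.

import cv2
import numpy as np
from scipy.spatial.distance import cdist

# RGB -> YUV coefficient matrix from the equation above.
RGB2YUV = np.array([[ 0.299,    0.587,    0.114  ],
                    [-0.14713, -0.28886,  0.436  ],
                    [ 0.615,   -0.51499, -0.10001]])

def outline_coords(image_bgr, reference_height):
    # Canny edges -> binary mask -> centered, height-scaled n-by-2 coordinates.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)                       # assumed thresholds
    _, mask = cv2.threshold(edges, 127, 255, cv2.THRESH_BINARY)
    pts = np.column_stack(np.nonzero(mask)).astype(float)   # (row, col) pairs
    pts -= np.median(pts, axis=0)                           # centre on the axis medians
    height = pts[:, 0].max() - pts[:, 0].min()
    return pts * (reference_height / height)                # scale to the common height

def delta_e(avg_rgb_1, avg_rgb_2):
    # Euclidean distance between average colors in YUV space (epsilon_i above).
    yuv1 = RGB2YUV @ np.asarray(avg_rgb_1, dtype=float)
    yuv2 = RGB2YUV @ np.asarray(avg_rgb_2, dtype=float)
    return float(np.linalg.norm(yuv1 - yuv2))

def modified_hausdorff(A, B):
    # Modified Hausdorff distance of Dubuisson and Jain [1] (delta_i above):
    # the larger of the two directed mean-of-minima point-set distances.
    D = cdist(A, B)
    return max(D.min(axis=1).mean(), D.min(axis=0).mean())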
To determine chromatic similarity, the Euclidean distance is calculated between the YUV average color vectors, giving Delta E, the color difference. Feature scaling is then used on the MHD and Delta E values based on their maximum ranges to yield similar ranges for both. A Bayesian Logistic Regression model is then trained on the resulting MHD and Delta E values of the 30 bananas and the remaining non-banana objects in order to determine the coefficient vector θ that minimizes the error. The resulting logistic function is then used to output the conditional probability of an object being from a certain category given the MHD and Delta E values of that object and that category. The model then predicts the most probable category of the object. The objects were divided into two sets: objects with distinct color and shape attributes, and objects with nearly identical color and shape features. Because we had 3 images of each object, we were able to make 3P2 = 6 comparisons for each image-to-image pair. For the first set, 6×9 = 54 comparisons were made. For the second set, 6×8 = 48 comparisons were made. Three different Deep Neural Networks were trained on the same set using Amazon AWS cloud services and their accuracies were analyzed in comparison to our one-shot approach. Additionally, a group of 6 participants were each shown a portion of the images and asked to classify them given the possible categories. The image sets were constructed so that there were duplicates of some objects and absence of others, so that participants could not use process of elimination.

(Figure 2) Scaling the images to the same width, then using the modified Hausdorff distance to find the δ_1 value, or the "error in shape."

P(Φ_i|δ_i, ϵ_i) = h_θ(x^(i)) = 1 / (1 + e^(-θ^T [δ_i, ϵ_i]))   (Equation 1)

θ = argmin_θ J(θ)   (Equation 2)

J(θ) = -(1/m) ∑_{i=1}^{m} [ y^(i) log h_θ(x^(i)) + (1-y^(i)) log(1-h_θ(x^(i))) ]   (Equation 3)

Equation 1 uses the intermediate step of the logistic regression to give the exact probability of a scanned object belonging to a category, given the shape error (MHD, δ_i) and the color error (Euclidean distance, ϵ_i). Equation 2 selects the θ that minimizes the cost function. Equation 3 is the cost function.

§ RESULTS

Our model proved to be successful in generalizing concepts using one example, outperformed conventional deep learning neural networks and, in some cases, even outperformed humans. We found that the accuracy varied significantly based on the objects that we were using. In fact, we found that the results were strikingly similar to a human's performance. For example, the model mixed up zucchinis and cucumbers, and bananas and plantains, relatively more than it mixed up other objects, yet still performed better than humans at differentiating these nearly identical objects. However, when the "distinct objects" (the objects that are not easily mistaken for others) were tested alone, the accuracy was almost perfect - 98.15 percent. When the "similar objects" (the objects easily mixed up) were tested separately, the accuracy of our model was about 72.92 percent. At first glance this statistic may seem low, but it was more accurate than a human at detecting a "similar fruit." An average human's accuracy rate turned out to be 100 percent on the distinct objects, outperforming our model by 1.85 percent. However, on the nearly identical objects, the human prediction success rate of 62.51 percent is significantly lower than our model's accuracy. Many subjects frequently mixed up zucchinis and cucumbers, and bananas and plantains, just like our model did.
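To tie the pieces together, Equations 1-3 from the Techniques section above can be written out as a short NumPy sketch. The gradient-descent optimizer and the variable names are our assumptions; the paper specifies only the hypothesis and the cost function.

import numpy as np

def h_theta(theta, X):
    # Equation 1: P(Phi_i | delta_i, epsilon_i) = 1 / (1 + exp(-theta^T [delta_i, epsilon_i])).
    # No bias term is used, matching theta^T [delta_i, epsilon_i] above.
    return 1.0 / (1.0 + np.exp(-X @ theta))

def cost(theta, X, y):
    # Equation 3: J(theta) = -(1/m) sum_i [ y_i log h + (1 - y_i) log(1 - h) ].
    h = h_theta(theta, X)
    return -(y @ np.log(h) + (1 - y) @ np.log(1 - h)) / len(y)

def fit(X, y, lr=0.1, iters=5000):
    # Equation 2: theta = argmin_theta J(theta), here via plain gradient
    # descent (our choice of optimizer). X holds the scaled (MHD, Delta E)
    # feature pairs; y holds the 0/1 category labels.
    theta = np.zeros(X.shape[1])
    for _ in range(iters):
        theta -= lr * X.T @ (h_theta(theta, X) - y) / len(y)
    return theta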
(Figure 3) Comparison of accuracy rates between humans, one-shot (our model), and the three most commonly used deep learning models.

Figure 4 is a histogram of the sum of the color and shape errors (after being appropriately weighted) of 30 bananas and 30 non-bananas. Our initial thought was to use some type of standard distribution to fit the data. However, standard distributions like chi-square distributions do not have a distinct gap in the middle of the data. Although this gap prevented us from using a chi-square distribution, it showed that there is a clear distinction between the correct and incorrect objects. We finally decided to use a Bayesian logistic regression fit, which is appropriate for our task of classification. It proved to be very accurate, outperforming conventional deep learning models and even outperforming humans on certain tasks. The three deep learning models - Deep Siamese convnet, deep convnet, and Hierarchical Deep - had accuracy rates of approximately 32.37 percent, 34.76 percent, and 40.07 percent respectively when given the same data as our model (as can be seen in Figure 3). Our model demonstrates that object categories can be generalized successfully from few examples using a Bayesian classifier, coming close to the abilities of humans by compensating for the evolutionary instinct that animals are born with.

(Figure 4) Histogram of the sum of δ_i and ϵ_i, to see if a standard distribution will fit the data.

(Figure 5) Bayesian Logistic Regression showing probabilities based on δ_i and ϵ_i.

(Figures 6 and 7) Specific classifier outputs split into two groups (distinct and similar fruits) to analyze the data better.

§ DISCUSSION

The only condition our model presents in order to work effectively is the ability to retrieve and construct individual attributes such as shape and color. Therefore it can be applied to a vast variety of classes of visual objects, no matter how large or small. Using the individual attributes of shape and color, our model can be applied to help in the recognition of cells, pathogens, DNA, etc. Astrocytes, which are star-shaped glial cells that act as "the immune system" of the nervous system, perform many functions, including biochemical support of the endothelial cells that form the blood-brain barrier, provision of nutrients to the nervous tissue, maintenance of extracellular ion balance, and a role in the repair and scarring process of the brain and spinal cord following traumatic injuries [11]. When stained with antibodies to GFAP and vimentin, they appear yellow. From a picture of a tissue sample from a microscope, our model should be able to successfully identify these astrocytes based on their shape and color. Moreover, a number of neurodegenerative diseases like Alexander's disease are characterized by intra-astrocytic protein aggregates, consisting of mutant GFAP, heat shock protein 27, and αB-crystallin, which can result in a conformational change in the shape of the astrocyte and a change in the GFAP concentration that will result in a color change [12].
Our model will not only recognize which cells are astrocytes, but based on the shape and color changes, it can also recognize which of these astrocytes are diseased. An important aspect of our model is that the input parameters, which in our application were color and shape, are fully interchangeable and can be chosen differently depending on the scenario. This gives the model a special degree of flexibility and allows it to be expanded to use other individual attributes, such as parts, subparts, and spatial relations, in classes that do not have color as a variable. This will present a method of recognition in classes like characters (alphabets and numerals) and other colorless classes. Our model, which takes a one-shot approach, can revolutionize object recognition and can be applied to essentially any class of visual objects.

§ ACKNOWLEDGMENT

We would like to thank Carol Sikes - high school AP Calculus and Statistics teacher - for assisting us.

§ REFERENCES

[1] Dubuisson, M. P., & Jain, A. K. (1994). A modified Hausdorff distance for object matching. In Proceedings of the International Conference on Pattern Recognition (ICPR '94), 566-568.
[2] Canny, J. (1986). A computational approach to edge detection. IEEE Trans. Pattern Anal. Machine Intell., PAMI-8, 679-698.
[3] Lake, B. M., Salakhutdinov, R., & Tenenbaum, J. B. (2015). Human-level concept learning through probabilistic program induction. Science, 350, 1332-1338.
[4] Fei-Fei, L., Fergus, R., & Perona, P. (2006). One-shot learning of object categories. IEEE Trans. Pattern Analysis and Machine Intelligence, 28(4), 594-611.
[5] Bever, T. G., & Poeppel, D. (2010). Analysis by synthesis: a (re-)emerging program of research for language and vision. Biolinguistics, 4, 174-200.
[6] Smith, L., & Yu, C. (2008, March). Infants rapidly learn word-referent mappings via cross-situational statistics. Cognition, 106(3), 1558-1568.
[7] Kemp, C., Perfors, A., & Tenenbaum, J. B. (2007). Learning overhypotheses with hierarchical Bayesian models. Developmental Science, 10, 307-321.
[8] Podpora, M., Korbaś, G. P., & Kawala-Janik, A. (2014). YUV vs RGB - Choosing a color space for human-machine interaction. Position Papers of the 2014 Federated Conference on Computer Science and Information Systems. doi:10.15439/2014f206
[9] Speech Signal Processing Toolkit (SPTK). (2013). Retrieved from http://sp-tk.sourceforge.net/; Tenenbaum, J. B., & Griffiths, T. L. (2001). Generalization, similarity, and Bayesian inference. Behavioral and Brain Sciences, 24, 629-640.
[10] Gopnik, A., Glymour, C., Sobel, D., Schulz, L., Kushnir, T., & Danks, D. (2004). A theory of causal learning in children: Causal maps and Bayes nets. Psychological Review, 111, 1-31.
[11] Astrocytes | Network Glia. (n.d.). Retrieved January 27, 2017, from http://www.networkglia.eu/en/astrocytes
[12] Alexander disease - Genetics Home Reference. (n.d.). Retrieved January 27, 2017, from https://ghr.nlm.nih.gov/condition/alexander-disease
http://arxiv.org/abs/1708.08141v1
{ "authors": [ "Abrar Ahmed", "Anish Bikmal" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20170827211523", "title": "One-Shot Concept Learning by Simulating Evolutionary Instinct Development" }
Post-main-sequence evolution of icy minor planets. III. Water retention in dwarf planets and exo-moons and implications for white dwarf pollution

Department of Physics, Technion, Israel
[email protected], [email protected]

Studies suggest that the pollution of white dwarf (WD) atmospheres arises from the accretion of minor planets, but the exact properties of the polluting material, and in particular the evidence for water in some cases, are not yet understood. Previous works studied the water retention in minor planets around main-sequence and evolving host stars, in order to evaluate the possibility that water survives inside minor planets around WDs. However, all of these studies focused on small, comet-sized to moonlet-sized minor planets, when the inferred mass inside the convection zones of He-dominated WDs could actually also be compatible with much more massive minor planets. In this study we therefore explore, for the first time, the water retention inside exo-planetary dwarf planets, or moderate-sized moons, with radii of the order of hundreds of kilometres. We now cover nearly the entire potential mass range of minor planets. The rest of the parameter space considered in this study is identical to that of our previous study, and also includes multiple WD progenitor star masses. We find that water retention in more massive minor planets is still affected by the mass of the WD progenitor, however not as much as when small minor planets were considered. We also find that water retention is now almost always greater than zero. On average, the detected water fraction in He-dominated WD atmospheres should be at least 5%, irrespective of the assumed initial water composition, if it came from a single accretion event of an icy dwarf planet or moon. This finding also strengthens the possibility of WD habitability. To finalize our previous and current findings, we provide a code which may be freely used as a service to the community. The code calculates ice and water retention by interpolation, spanning the full mass range of both minor planets and their host stars.

§ INTRODUCTION

Despite the typically short sinking time scale of elements heavier than helium in the atmospheres of WDs <cit.>, between 25% and 50% of all WDs <cit.> are found to be polluted with heavy elements. This is suggestive of their ongoing accretion of planetary material <cit.>, originating from minor planets that survive the main-sequence, red giant branch (RGB) and asymptotic giant branch (AGB) stellar evolution phases, and remain bound to the WD. When some of these minor planets are perturbed into orbits that pass close to the WD, they are thought to be tidally disrupted and form a circumstellar disk <cit.>, which eventually accretes onto the WD <cit.>.

The spectroscopic analysis of WD atmospheres <cit.>, as well as infra-red spectroscopy of the debris disks themselves <cit.>, is typically consistent with 'dry' compositions, characteristic of inner solar system objects. Studies that analysed the atmospheric composition of large samples of nearby He-dominated WDs inferred their collective water mass fraction to be of order ∼1%, and no more than a few % even for particular subsets <cit.>. On the other hand, an increasing number of individual WDs <cit.> have now been inferred to contain a large water mass fraction, ranging between 26%-38%. Polluted WDs are therefore currently the only means to understand exo-planetary composition. However, linking the inferred quantity of water in WD atmospheres to the origin of the polluting planetary material is not trivial, and requires additional assumptions and calculations.
Several studies have previously focused on water retention inside minor planets as they evolve through and off the main sequence of their host stars. Understanding how water survives inside minor planets as a function of their intrinsic characteristics is a necessary first step in beginning to understand the origin and nature of polluting planetary material. The seminal work of <cit.> introduced a simple sublimation model that considered the initial orbital distance and size of minor planets as they evolve around a post-main-sequence 1 M_⊙ star. <cit.> considered a more advanced model, taking into account conductive heat transport to the interior, and also the minor planets' orbital expansion during the post-main-sequence stellar evolution of 1 M_⊙ and 3 M_⊙ host stars. <cit.> (hereafter MP16) utilized a modern and more sophisticated code that consistently accounts for the thermal, as well as physical (rock/water differentiation), chemical (rock hydration and dehydration) and orbital evolution of icy minor planets. Contrary to previous models, it starts at the beginning of the main sequence (of a 1 M_⊙ host star), and considers new parameters: the formation time of the minor planet (which affects its early evolution through radiogenic heating and thus its internal differentiated state) and also variation in the initial minor planet composition. <cit.> (hereafter MP17) finally investigated multiple host star masses, ranging from 1 M_⊙ to 6.4 M_⊙ (corresponding to ∼0.5-1 M_⊙ WDs), as well as various host star metallicities.

Until now, all of these studies shared a common feature - they were all limited to relatively small minor planets, ranging from comet-sized to minor planets with radii of about 100 km or slightly more. This has been justified by the fact that small minor planets are far more numerous and are likely to trigger most of the pollution in WDs. Nevertheless, intermediate and large minor planets are far more massive. The occasional accretion of even a single minor planet of this size may be sufficient to account for all the mass in the convection zone of He-dominated WD atmospheres, whereas for small minor planets multiple accretion events are typically needed in order to accumulate and reach the required mass (e.g., see Figure 6 in <cit.>). Indeed, it should be kept in mind that we currently do not know exactly what path or paths typically lead to accretion. Perturbed minor planets could have various sizes and locations, and could arise from various system configurations and perturbation mechanisms <cit.>. Most of the aforementioned mechanisms do not place specific limits on their size, and some mechanisms even discuss particular objects, such as liberated exo-moons <cit.>, which have a higher chance of being massive. While the actual hunt for exo-moons is still in its infancy <cit.>, knowledge from our own solar system shows this to be the case. The goal of this paper is therefore to complement all previous studies by calculating, for the first time, the fate of water inside moon-sized or dwarf-planet-sized objects. Unlike small minor planets, which tend to have a low bulk density and thus very large porosity (we refer to the discussion in Section 2.1 (d) and Table 1 in MP16), the objects discussed in this paper require a different treatment of porosity, since minor planets of this size evolve mechanically due to the combination of large internal self-gravity pressures and high temperatures, which tend to reduce or eliminate porosity completely <cit.>.
In what follows, we briefly outline in Section <ref> the size range investigated and the model used in this study. In Section <ref> we initially present one evolutionary path for a dwarf-planet-sized minor planet and emphasize the differences from previous studies. We then present the water retention results for the entire parameter space. We discuss the results in Section <ref> and finalize our work in this and previous papers by providing the community with a code that calculates water retention, spanning the entire minor planet mass spectrum.

§ MODEL

In the framework of this paper, the terms "intermediate", "medium" or "moderate" refer to objects which are large enough to be in hydrostatic equilibrium, so any object with a radius smaller than about 200 km can be excluded from this definition <cit.>, but are not so large as to permit the occurrence of high-pressure phases of water ice, since our model currently only treats the low-pressure 'Ice I' phase. This limits the object radius to be of the order of several hundreds of km, although the precise size determination depends on various parameters, like the assumed initial composition, structure, temperature, etc. For example, if one assumes an initially differentiated structure, the self-gravity pressure in the icy mantle will be compatible with the Ice I phase even for a dwarf planet the size of Pluto (radius ∼1200 km). However, if one initially assumes a homogeneous ice-rock structure prior to differentiation, as we do in our model (in order to calculate differentiation as a function of variable formation times), the self-gravity pressure in Pluto's core is initially above that of Ice I. Given this constraint of initial homogeneity, objects that are much larger than about 600-700 km in radius cannot be considered in the framework of this paper, and their analysis remains the goal of future studies. As we shall see, their exclusion from this paper is of very little significance, at least from an observational point of view, as follows. At our current level of understanding, the distribution of mass observed in WD atmospheres (see Figure 6 in <cit.>) follows a bell shape with a peak at around ∼10^22 g. Previous studies therefore investigated about half this mass range, from well below the minimum to approximately the peak of the mass distribution. This study will roughly cover the other half, since beyond a mass of ∼10^24 g, just under the limit of this study, the tail becomes relatively unimportant.

Our evolution model couples the thermal, physical, chemical, mechanical and orbital evolution of icy minor planets of various sizes. It considers the energy contribution primarily from radiogenic heating, latent heat released/absorbed by geochemical reactions, and surface insolation. It treats heat transport by conduction and advection, and follows the transitions among three phases of water (crystalline ice, liquid and vapor) and two phases of silicates (hydrous rock and anhydrous rock). In our predecessor studies on water retention and WD pollution (MP16, MP17) we considered only small minor planets, and therefore we deactivated a specific feature in the model that deals with internal mechanical changes. In this work we add an additional level of complexity and maintain hydrostatic equilibrium. This involves continuously solving the hydrostatic equation, using an equation of state that provides the density-to-pressure relation of a porous rock/ice mix.
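As a rough illustration of this hydrostatic step (not the authors' code; the toy equation of state, parameter values and simple Euler integration are stand-in assumptions, with the actual equations given in the cited work), one outward pass would look like:

import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def rho_eos(P, rho0=1.8e3, K=5e9):
    # Toy equation of state: density rises mildly with pressure,
    # mimicking the closing of porosity in a rock/ice mix.
    return rho0 * (1.0 + P / K)

def hydrostatic_profile(R=6.0e5, n=2000, central_P=5.0e7):
    # Single outward Euler pass of dm/dr = 4*pi*r^2*rho and
    # dP/dr = -G*m(r)*rho/r^2; a real solver would iterate on the
    # central pressure until the surface pressure vanishes.
    r = np.linspace(1.0, R, n)
    dr = r[1] - r[0]
    P = np.empty(n); m = np.empty(n)
    P[0], m[0] = central_P, 0.0
    for i in range(n - 1):
        rho = rho_eos(P[i])
        m[i + 1] = m[i] + 4.0 * np.pi * r[i]**2 * rho * dr
        P[i + 1] = max(P[i] - G * m[i + 1] * rho / r[i + 1]**2 * dr, 0.0)
    return r, P, m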
For details, see equations 1 and 16 in <cit.> and references therein. All other model details, including equations, parameters and numerical scheme, are identical to our previous work and can be reviewed in MP16. Our code requires as input the stellar evolution of WD progenitor stars of different masses (that is, the change in star luminosity and mass as a function of time). These inputs are also compatible with our previous work; see MP17 for details on how they were obtained using the MESA stellar evolution code. We consider progenitor masses of 1, 2, 3, 3.6, 5 and 6.4 M_⊙, with a metallicity of 0.0143 (or [Fe/H]=0, the typically used iron abundance relative to solar), corresponding to final WD masses of 0.54, 0.59, 0.65, 0.76, 0.89 and 1 M_⊙ respectively.

The final outcome of the main-sequence, RGB and AGB stellar evolution phases in terms of minor planet water retention depends on five different parameters. The first parameter is the progenitor's mass. More massive progenitors correlate with a more luminous stellar evolution, albeit a shorter lifetime and also a higher initial (progenitor) to final (WD) mass ratio. The former affects water retention negatively, while the latter two have a positive effect. We also investigate four characteristic variables related to the minor planets: their size (mass/radius), initial orbital distance, formation time and initial rock/ice mass ratio. To comply with our previous work, we consider a similar parameter space; however, in this study larger minor planet masses and thus radii are considered, as well as additional changes in the orbital distances and formation times, as outlined below.

(1) Object mass/radius - as previously mentioned, past studies investigated the minor planet mass range up to 10^22 g. This paper aims to investigate the 10^23-10^24 g mass range. For the purpose of familiarity, we consider the masses of two well-known Solar system objects, Enceladus and Charon, as our reference masses. Enceladus has a mass of ∼10^23 g, whereas Charon has a mass of ∼1.5·10^24 g. These two objects therefore differ in mass by roughly one order of magnitude, and span the desired mass range almost perfectly. In terms of size, the radius is not constant as in previous studies. Our previous studies did not take into account mechanical changes, and therefore porosity never decreased. Whenever water migrated toward the surface or sublimated off the surface, the initial porosity simply increased, and thus the radius was fixed, while the mass was not necessarily so. Here the case is different, since massive minor planets undergo huge mechanical, structural and compositional changes that considerably alter their size. For example, given a fixed mass, the initial radius may depend on the value chosen for the minor planet's bulk density, and also on the assumed initial rock/ice mass ratio. More rocky objects will have a smaller initial radius, whereas more icy objects will have a larger initial radius. The radius may increase over time as water initially melts, migrates and then freezes to form a differentiated internal structure. The radius may also decrease over time, as changes in temperature or self-gravity pressure diminish internal porosity, or as the body loses water due to insolation from the star. It is therefore the case here that neither the initial mass nor the initial radius is fixed.
Nevertheless, it is convenient to discuss a characteristic radius, which is approximately the radius of the minor planet at the beginning of its evolution, before insolation from the star starts to expel mass. For the two reference masses chosen, the characteristic radii will be approximately 270 and 600 km respectively. Naturally, if and when a minor planet expels water via insolation, the radius can decrease well below this characteristic value (up to 26% less). We note that size changes in minor planets can naturally occur even around non-evolving stars (e.g., see <cit.>), but these changes are amplified when insolation or other processes (as in <cit.>) lead to massive ablation of ice.

(2) Orbital distance - we consider a range of possible initial orbital distances (note that with stellar mass loss the orbit undergoes expansion as the minor planet conserves its angular momentum). The minimal initial orbital distance is 3 AU. Below approximately 3 AU, a massive planet (and by extension also its moons) runs the risk of being engulfed or otherwise tidally affected by the expanding envelope of the post-main-sequence RGB <cit.> or AGB <cit.> star. Another consideration for icy minor planets is the location of the snowline <cit.>, although minor planets could potentially also migrate inward from their initial birthplace. Overall, ∼3 AU was chosen as the minimal distance. The maximum orbital distance changes from 75 AU to 200 AU, depending on the progenitor star's mass. It is determined according to the water retention upper bound, defined as the distance at which full water retention is ensured. The water retention upper bound was previously found to increase as a function of the progenitor's mass (MP17). In this study it behaves in the same way, and therefore the number of grid points increases slightly in order to cover a wider distance range.

(3) Formation time - the formation time of a minor planet is defined as the time it takes a minor planet to fully form after the birth of its host star. Since here we only consider first-generation minor planets, this time is usually on the order of ∼10^0-10^1 Myr. The formation time determines the initial abundance of short-lived radionuclides, and thus the peak temperatures (hence, internal structure) attained during its early thermal evolution. Although it is clear that the formation time also depends on the orbital distance, the exact relation is unconstrained, which is why we set the formation time as a free parameter. We consider the following formation times: 3, 4, 5 and 10 Myr. This choice is compatible with our previous works; however, here we include an additional formation time of 10 Myr, since in more massive minor planets the internal temperature can build up more easily. At approximately a 10 Myr formation time, short-lived radionuclides have decayed so much that effectively only long-term radiogenic heating remains important. The initial abundances of radionuclides are assumed to be identical to the canonical values in the solar system, for lack of a better assumption.

(4) Initial rock/ice mass ratio - this ratio initially depends on the location of the object as it forms in the protostellar nebula, and like the formation time, this parameter is unconstrained (see MP16 for various estimates).
We consider three initial rock/ice mass ratios to allow for various possibilities: 1, 2 and 3 (that is, a rock mass fraction of 50%, 67% and 75% respectively), complying with previous work.

The number of models for a single stellar evolution is thus determined by the number of variable minor planet parameters (2 x (7-9) x 4 x 3). Since we have six different progenitor star masses, we have well over one thousand production runs in total. These models were calculated using a cluster computer. The typical run time of each model was on the order of several hours on a single 2.60 GHz Intel CPU. All other model parameters are equal, and identical to the parameters used in Table 2 of MP16.

§ RESULTS

§.§ The evolutionary course

As mentioned in Section <ref>, here we present a single detailed example of an evolutionary calculation. Our goal is to highlight the differences between the evolution of small minor planets and that of larger, dwarf-planet-sized minor planets. In Figure <ref> we show the entire evolution for a minor planet with the largest size considered in our sample (radius ∼600 km) at an initial orbital distance of merely 3 AU. This dwarf planet is 75% rock by mass, its assumed formation time is 3 Myr, and its host star has a mass of 1 M_⊙ and a metallicity of [Fe/H]=0. We choose this particular combination of parameters as an example since it represents a minor planet that undergoes the most extreme conditions possible in our sample and attains very high surface and internal temperatures for the longest period of time. It is also a familiar and intuitive example, since we are considering an object only slightly more massive than Ceres, at a similar orbital distance to Ceres. Its evolution around an almost sun-like star may therefore be seen as an exo-planetary Ceres analog, or to be exact, a Charon analog placed at 3 AU (since Ceres has a more rocky composition and a higher bulk density than Charon). The final orbital distance (that is, around the WD) after stellar mass loss is 5.55 AU.
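This final distance follows directly from adiabatic orbital expansion. Below are two quick back-of-the-envelope checks of these numbers (our own arithmetic, using the standard adiabatic mass-loss relation a_final/a_initial = M_initial/M_final and the well-known ∼0.717 Myr half-life of ²⁶Al, the dominant short-lived radionuclide):

import numpy as np

# (a) Orbital expansion under slow stellar mass loss, with the minor planet
# conserving its angular momentum: a 3 AU orbit around a 1 M_sun progenitor
# expands to 3 * (1.0 / 0.54) = 5.56 AU around the 0.54 M_sun WD, consistent
# with the ~5.55 AU quoted above.
progenitor = np.array([1.0, 2.0, 3.0, 3.6, 5.0, 6.4])    # M_sun
wd_mass    = np.array([0.54, 0.59, 0.65, 0.76, 0.89, 1.0])
print(3.0 * progenitor / wd_mass)                         # ~5.56 ... 19.2 AU

# (b) Fraction of 26Al remaining after each assumed formation time: by
# 10 Myr only ~6e-5 is left, so short-lived radiogenic heating is
# effectively switched off, as stated in the Model section.
for t_myr in (3.0, 4.0, 5.0, 10.0):
    print(t_myr, 2.0 ** (-t_myr / 0.717))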
The course of the evolution is illustrated by two surface plots showing the temperature <ref> and the relative fraction of water <ref> as a function of time and radial distance from the centre of the body. In both panels, the x-axis shows the time interval, ranging from 0 to 11.46 Gyr. The y-axis shows the radial distance from the centre of the body. Note that the upper boundary of the y-axis changes with time. It increases as the inner rocky core dehydrates due to rising internal temperatures (see Panel <ref>), and the icy mantle thickens by about ∼7-8 km due to water released from the underlying rock (Panel <ref>). After this gradual size increase, at around 10 Gyr the size slowly starts to decrease as a rise in surface temperature triggers sublimation. By 11 Gyr the icy mantle has sublimated entirely. The radius then reaches ∼450 km, and it decreases only slightly further during the AGB phase, when extremely high surface temperatures compress the near-surface rock porosity and also alter the rock's composition via dehydration.

Fig. <ref> shows the compositional cross-section of this minor planet as it evolves (since this is an animated figure, one may view the evolution in addition to the end state, which is depicted by the still image). It easily illustrates the large differences in water retention between this study and previous studies, as follows: 1) In small minor planets, the dehydration of the rocky core was marginal at best (MP16), amounting to no more than 5% of the rock and typically none at all. Here, however, we see that dehydration can have a huge effect. Almost the entire rock mass becomes dehydrated by the evolution's completion, and only a small fraction remains hydrated in a thin shell close to the surface. 2) In small minor planets, the porosity remains high and even increases if water ice sublimates out of the interior (see Figs. 2-4 in MP17). Here, however, porosity decreases considerably, and in some cases it can even drop to zero in the rocky core. Porosity is very important because it greatly decreases the effective thermal conductivity <cit.>, and thus heat transport. In small minor planets, the near-surface rock is not always hydrated to begin with, and even if it is, dehydration of external rock during the AGB phase is almost negligible, since heat penetrates the interior much more slowly as a result of the high porosity.

§.§ Water retention

In this section we discuss the bulk amount of water surviving in the planetary system as a function of our free model parameters. Fig. <ref> shows the final fraction of water, based on the end states of the production runs discussed in Section <ref> (i.e., the ratio of the water remaining when the star reaches the WD stage to the initial amount). We present the total fraction of retained water, defined as water ice + water in hydrated silicates, which ultimately contributes hydrogen and oxygen when accreting onto polluted WD atmospheres. Each panel consists of three subplots, each representing a different choice of initial composition. Within each subplot there are multiple lines, depicting the final water fraction as a function of the initial orbital distance. Each line is characterized by a specific color and width, as well as a style. The line width decreases with the size of the object, so the thin lines represent the more massive objects, and each line style corresponds to a different formation time.

Contrary to our previous work (MP17), where differences in progenitor star mass also entailed huge differences in water retention, here the differences appear to be much more subtle. The common general trend in the data is that minor planets at a greater distance from the star can better retain their water, as expected. There are, however, two noticeable changes as the progenitor's mass increases. First, it can be seen that the outer bound of water retention increases with stellar mass. This is categorically true for any combination of parameters, and arises from the fact that stellar luminosity increases with star mass, and therefore the distance at which the minor planet's surface temperature can lead to sublimation extends outwards. It is also true for smaller minor planets, as reported by MP17.

The second, less trivial difference is that at short orbital distances (less than ∼40 AU) around the least massive, 1 and 2 M_⊙ progenitors, the fraction of remaining water tends to be higher for minor planets with a 270 km radius compared to minor planets with a 600 km radius. This trend starts to change for 3 and 3.6 M_⊙ progenitors, and it reverses completely for 5 and 6.4 M_⊙ progenitors, where the more massive minor planets now retain a larger fraction of water. The reason for this phenomenon is simply the relative fraction of rock dehydration in the outer layers.
After all the ice is expelled from the minor planet, the outer layer temperatures begin to climb (primarily during the intense AGB phase), as we have seen in Section <ref>. Given a finite amount of time (for the star to reach the WD stage), the heat can only penetrate to a certain distance inward of the minor planet's surface and surpass the characteristic rock dehydration temperature, assuming of course that rock hydration temperatures were attained in the outer layers to begin with (which is not always the case if the formation time is too long, or the rock/ice mass ratio is too small). Smaller minor planets therefore tend to have a larger fraction of their outer rocks dehydrate, since the heat transport time is the same and the porosity profile is also similar, but their relative size is much smaller. On the other hand, with increasing progenitor mass, the stellar evolution time shortens dramatically. Thus the initial hydration of rock becomes much more important than the subsequent dehydration of near-surface rock.

§ DISCUSSION AND SUMMARY

In this study we investigate a wide range of progenitor star masses, relevant to G, F, A and B type stars. We also investigate dwarf-planet-sized icy objects, for the first time in any previous related water retention study. The results in Section <ref> reaffirm the expectation that minor planets retain less water at closer distances to the star. However, the results differ from studies of smaller minor planets, since here the increase in the progenitor star's mass does not entail huge differences in water retention trends, primarily under 40 AU. Rather, the water retention in minor planets with the same assumed composition follows a relatively similar trend regardless of the mass of the progenitor star. Beyond 40 AU, that is, at Kuiper belt distances, the progenitor's mass mainly changes the slope of the trend, as the upper boundary of water retention extends outwards. Our conclusion is that most moderate-sized minor planets evolve similarly enough during their early evolution that main-sequence and post-main-sequence sublimation effects are insufficient to set them apart significantly. Rather, their assumed initial parameters are more important. In particular, minor planets in this study are large enough to attain rock hydration temperatures in nearly 100% of the cases. This is why the fraction of water retention is almost always higher than zero. While at close orbital distances the ice completely sublimates, neither internal rock dehydration nor external rock dehydration by the intense luminosity of massive progenitors seems able to change the outcome of non-zero water retention.

It may thus be inferred that if the mass in the convection zone of certain polluted He-dominated WDs comes from the accretion event of a single moderate-sized minor planet, one might expect to always detect at least a small fraction of water (assuming the observation itself allows for such a detection). The detectable water fraction f can be calculated by multiplying the initial assumed water fraction f_w by the final fraction of water retention f_r. If we take the minimum initial orbital distance of 3 AU, and we average crudely over various progenitor masses, as well as object radii and formation times, we can qualitatively compute, per assumed composition: (a) f = 0.5 × ∼0.1 = 5% when the rock/ice mass ratio = 1; (b) f = 0.33 × ∼0.15 = 5% when the rock/ice mass ratio = 2; and (c) f = 0.25 × ∼0.2 = 5% when the rock/ice mass ratio = 3.
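Spelled out, with f_w = 1/(1 + rock/ice mass ratio) and f_r the crudely averaged retention values quoted above:

# Detectable water fraction f = f_w * f_r for the three assumed compositions.
for rock_ice_ratio, f_r in ((1, 0.10), (2, 0.15), (3, 0.20)):
    f_w = 1.0 / (1.0 + rock_ice_ratio)    # initial water mass fraction
    print(rock_ice_ratio, round(100 * f_w * f_r, 1), "%")   # -> 5.0 % in all three cases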
In other words, irrespective of progenitor mass and other parameters, the water fraction in single, moderate-sized exo-planetary minor planets should be of the order of 5%, even if their initial orbital distance is assumed to be 3 AU. The inverse reasoning may also be applied. If it indeed turns out that, observationally, most polluted WD atmospheres are much drier than about 5%, it may perhaps be inferred that the accreted mass typically arises from multiple small accretion events rather than singular large accretion events.

An additional interesting result is related to the question of habitability. A minor planet, or perhaps a planet accompanied by its moderate-sized moon/s, could be orbiting at ∼0.01 AU around a non-magnetic, relatively cool WD, and thus be potentially habitable for approximately 3-8 Gyr, providing ample time for life to develop <cit.>. Our results would suggest that while small minor planets may not always retain sufficient water to support life, especially if they are perturbed from close heliocentric distances, moderate-sized minor planets may have a far greater chance of doing so.

This paper finalizes a series of papers on WD pollution and water retention (following MP16 and MP17). The results presented in MP17 for small minor planets are complemented here with results for massive minor planets. Together they span nearly the entire potential minor planet mass range, as inferred from WD polluted atmospheres. We thus provide, as a service to the community, a code which may be used to evaluate water retention, covering the full mass range of both minor planets and their respective host stars. The code runs in MATLAB, and is freely available at <https://github.com/UriMalamud/WaterRetention>.

§ ACKNOWLEDGMENT

UM and HBP acknowledge support from the Marie Curie FP7 career integration grant "GRAND", the Research Cooperation Lower Saxony-Israel Niedersachsisches Vorab fund, the Minerva center for life under extreme planetary conditions and the ISF I-CORE grant 1829/12.

§ REFERENCES

Agol, E. 2011, The Astrophysical Journal Letters, 731, L31
Bergfors, C., Farihi, J., Dufour, P., & Rocchetto, M. 2014, Monthly Notices of the Royal Astronomical Society, 444, 2147
Bonsor, A., Mustill, A. J., & Wyatt, M. C. 2011, Monthly Notices of the Royal Astronomical Society, 414, 930
Caiazzo, I. & Heyl, J. S. 2017, Monthly Notices of the Royal Astronomical Society, 469, 2750
Debes, J. H. & Sigurdsson, S. 2002, The Astrophysical Journal, 572, 556
Debes, J. H., Walsh, K. J., & Stark, C. 2012, The Astrophysical Journal, 747, 148
Desharnais, S., Wesemael, F., Chayer, P., Kruk, J. W., & Saffer, R. A. 2008, The Astrophysical Journal, 672, 540
Dufour, P., Bergeron, P., Liebert, J., Harris, H. C., Knapp, G. R., Anderson, S. F., Hall, P. B., Strauss, M. A., Collinge, M. J., & Edwards, M. C. 2007, The Astrophysical Journal, 663, 1291
Farihi, J., Gänsicke, B. T., & Koester, D. 2013, Science, 342, 218
Fossati, L., Bagnulo, S., Haswell, C. A., Patel, M. R., Busuttil, R., Kowalski, P. M., Shulyak, D. V., & Sterzik, M. F. 2012, The Astrophysical Journal Letters, 757, L15
Gänsicke, B. T., Koester, D., Farihi, J., Girven, J., Parsons, S. G., & Breedt, E. 2012, Monthly Notices of the Royal Astronomical Society, 424, 333
Hamers, A. S. & Portegies Zwart, S. F. 2016, ArXiv e-prints
Jura, M. 2003, The Astrophysical Journal, 584, L91
Jura, M. 2008, The Astronomical Journal, 135, 1785
Jura, M., Farihi, J., & Zuckerman, B. 2009, The Astronomical Journal, 137, 3191
Jura, M., Farihi, J., Zuckerman, B., & Becklin, E. E. 2007, The Astronomical Journal, 133, 1927
Jura, M. & Xu, S. 2010, The Astronomical Journal, 140, 1129
Jura, M. & Xu, S. 2012, The Astronomical Journal, 143, 6
Jura, M. & Young, E. D. 2014, Annual Review of Earth and Planetary Sciences, 42, 45
Kennedy, G. M. & Kenyon, S. J. 2008, The Astrophysical Journal, 673, 502
Kilic, M., von Hippel, T., Leggett, S. K., & Winget, D. E. 2006, The Astrophysical Journal, 646, 474
Klein, B., Jura, M., Koester, D., Zuckerman, B., & Melis, C. 2010, The Astrophysical Journal, 709, 950
Koester, D. 2009, Astronomy and Astrophysics, 498, 517
Koester, D., Gänsicke, B. T., & Farihi, J. 2014, Astronomy and Astrophysics, 566, A34
Kratter, K. M. & Perets, H. B. 2012, The Astrophysical Journal, 753, 91
Kunitomo, M., Ikoma, M., Sato, B., Katsuta, Y., & Ida, S. 2011, The Astrophysical Journal, 737, 66
Lineweaver, C. H. & Norman, M. 2010, ArXiv e-prints
Malamud, U. & Perets, H. B. 2016, The Astrophysical Journal, 832, 160
Malamud, U. & Perets, H. B. 2017, The Astrophysical Journal, 842, 67
Malamud, U., Perets, H. B., & Schubert, G. 2017, Monthly Notices of the Royal Astronomical Society, 468, 1056
Malamud, U. & Prialnik, D. 2015, Icarus, 246, 21
Malamud, U. & Prialnik, D. 2016, Icarus, 268, 1
Metzger, B. D., Rafikov, R. R., & Bochkarev, K. V. 2012, Monthly Notices of the Royal Astronomical Society, 423, 505
Michaely, E. & Perets, H. B. 2014, The Astrophysical Journal, 794, 122
Mustill, A. J. & Villaver, E. 2012, The Astrophysical Journal, 761, 121
Payne, M. J., Veras, D., Gänsicke, B. T., & Holman, M. J. 2017, Monthly Notices of the Royal Astronomical Society, 464, 2557
Payne, M. J., Veras, D., Holman, M. J., & Gänsicke, B. T. 2016, Monthly Notices of the Royal Astronomical Society, 457, 217
Perets, H. B. & Kratter, K. M. 2012, The Astrophysical Journal, 760, 99
Petrovich, C. & Muñoz, D. J. 2016, ArXiv e-prints
Pietro Gentile Fusillo, N., Gänsicke, B. T., Farihi, J., Koester, D., Schreiber, M. R., & Pala, A. F. 2017, ArXiv e-prints
Raddi, R., Gänsicke, B. T., Koester, D., Farihi, J., Hermes, J. J., Scaringi, S., Breedt, E., & Girven, J. 2015, Monthly Notices of the Royal Astronomical Society, 450, 2083
Rafikov, R. R. 2011, Monthly Notices of the Royal Astronomical Society: Letters, 416, L55
Reach, W. T., Kuchner, M. J., von Hippel, T., Burrows, A., Mullally, F., Kilic, M., & Winget, D. E. 2005, The Astrophysical Journal, 635, L161
Reach, W. T., Lisse, C., von Hippel, T., & Mullally, F. 2009, The Astrophysical Journal, 693, 697
Shappee, B. J. & Thompson, T. A. 2013, The Astrophysical Journal, 766, 64
Smoluchowski, R. 1981, Astrophysical Journal Letters, 244, L31
Stephan, A. P., Naoz, S., & Zuckerman, B. 2017, The Astrophysical Journal Letters, 844, L16
Stern, S. A., Shull, J. M., & Brandt, J. C. 1990, Nature, 345, 305
Stone, N., Metzger, B. D., & Loeb, A. 2015, Monthly Notices of the Royal Astronomical Society, 448, 188
Teachey, A., Kipping, D. M., & Schmitt, A. R. 2017, ArXiv e-prints
Veras, D. 2016, Royal Society Open Science, 3, 150571
Veras, D. & Gänsicke, B. T. 2015, Monthly Notices of the Royal Astronomical Society, 447, 1049
Veras, D., Leinhardt, Z. M., Bonsor, A., & Gänsicke, B. T. 2014, Monthly Notices of the Royal Astronomical Society, 445, 2244
Veras, D., Leinhardt, Z. M., Eggl, S., & Gänsicke, B. T. 2015, Monthly Notices of the Royal Astronomical Society, 451, 3453
Villaver, E., Livio, M., Mustill, A. J., & Siess, L. 2014, The Astrophysical Journal, 794, 3
Wolff, B., Koester, D., & Liebert, J. 2002, Astronomy and Astrophysics, 385, 995
Xu, S., Zuckerman, B., Dufour, P., Young, E. D., Klein, B., & Jura, M. 2017, The Astrophysical Journal Letters, 836, L7
Zuckerman, B., Koester, D., Reid, I. N., & Hünsch, M. 2003, The Astrophysical Journal, 596, 477
Zuckerman, B., Melis, C., Klein, B., Koester, D., & Jura, M. 2010, The Astrophysical Journal, 722, 725
http://arxiv.org/abs/1708.07489v1
{ "authors": [ "Uri Malamud", "Hagai B. Perets" ], "categories": [ "astro-ph.SR", "astro-ph.EP" ], "primary_category": "astro-ph.SR", "published": "20170824165626", "title": "Post-main-sequence evolution of icy minor planets. III. water retention in dwarf planets and exo-moons and implications for white dwarf pollution" }
Sales Forecast in E-commerce using Convolutional Neural Network

Kui Zhao, College of Computer Science, Zhejiang University, Hangzhou, China, [email protected]
Can Wang, College of Computer Science, Zhejiang University, Hangzhou, China, [email protected]

Date: December 30, 2023 / Accepted: December 30, 2023

Memory caches are being aggressively used in today's data-parallel frameworks, such as Spark, Tez and Storm. By caching input and intermediate data in memory, compute tasks can witness speedups by orders of magnitude. To maximize the chance of in-memory data access, existing cache algorithms, be they recency- or frequency-based, settle on the cache hit ratio as the optimization objective. However, contrary to conventional belief, we show in this paper that simply pursuing a higher cache hit ratio of individual data blocks does not necessarily translate into faster task completion in data-parallel environments. A data-parallel task typically depends on multiple input data blocks. Unless all of these blocks are cached in memory, no speedup will result. To capture this all-or-nothing property, we propose a more relevant metric, called the effective cache hit ratio. Specifically, a cache hit of a data block is said to be effective if it can speed up a compute task. In order to optimize the effective cache hit ratio, we propose the Least Effective Reference Count (LERC) policy, which persists the dependent blocks of a compute task as a whole in memory. We have implemented the LERC policy as a memory manager in Spark and evaluated its performance through Amazon EC2 deployment. Evaluation results demonstrate that LERC helps speed up data-parallel jobs by up to 37% compared with the widely employed least-recently-used (LRU) policy.

§ INTRODUCTION

Memory cache plays a pivotal role in data-parallel frameworks, such as Spark <cit.>, Tez <cit.>, Piccolo <cit.> and Storm <cit.>. By caching input and intermediate data in memory, I/O-intensive jobs can be sped up by orders of magnitude <cit.>. However, compared with stable storage, memory cache remains constrained in production clusters, and it is not possible to persist all data in memory <cit.>. Efficient cache management, therefore, becomes highly desirable for parallel data analytics.

Unlike caching in storage systems, operating systems and databases, cache management in data-parallel frameworks is characterized by the defining all-or-nothing requirement. That is, a compute task cannot be accelerated unless all of its dependent datasets, which we call peers in this paper, are cached in memory. In Hadoop <cit.> and Spark <cit.>, data-parallel operations, such as coalesce and zip, are typically performed on multiple datasets. Fig. <ref> shows a Spark job consisting of two coalesce tasks. Task 1 (Task 2) fetches two data blocks a and b (blocks c and d) and coalesces them into a larger block x (block y). In this example, blocks a and b (blocks c and d) are peers of each other. Caching only one peer provides no benefit, as the task computation is bottlenecked by the reading of the other peer from disk.

However, existing cache management policies are agnostic to this all-or-nothing cache requirement. Instead, they simply optimize the block cache hit ratio, regardless of whether a cache hit can effectively speed up a compute task.
For instance, the popular least-recently-used (LRU) policy employed by prevalent parallel frameworks <cit.> caches the data blocks that have been recently used, counting on their future access to optimize the cache hit ratio. To illustrate that LRU may violate the all-or-nothing cache requirement for data-parallel tasks, we refer back to the example in Fig. <ref>. We assume that four unit-sized blocks a, b, c and d have been materialized and will be used by two coalesce tasks to compute blocks x and y. The cache can persist 3 blocks and initially holds blocks a, b and c. Suppose that another block e will be inserted into the cache, forcing one in-memory block to be evicted. Among the three cached blocks, block c is the only right choice of eviction, as caching it alone without its peer d speeds up no task. However, with the LRU policy, block c will remain in memory unless it is the least recently used. We shall show in Sec. <ref> that the recently proposed Least Reference Count (LRC) policy <cit.> for data-parallel frameworks may run into a similar problem.

The all-or-nothing property calls for coordinated cache management in data-parallel clusters. In this work, we argue that a cache policy should optimize a more relevant metric, which we call the effective cache hit ratio. In particular, we say a cache hit of a block is effective if it helps speed up a compute task, i.e., all the other peers of this block are in memory. Referring back to the previous example, caching block c without block d is ineffective, as it speeds up no task. Intuitively, the effective cache hit ratio measures to what degree the all-or-nothing property can be retained and how likely compute tasks are to be accelerated by having all their dependent datasets kept in memory.

In order to optimize the effective cache hit ratio, we design a coordinated cache management policy, called Least Effective Reference Count (LERC). LERC builds on our previously proposed LRC policy <cit.> yet retains the all-or-nothing property of data-parallel tasks. LERC always evicts the data block with the smallest effective reference count. The effective reference count is defined, for each data block b, as the number of unmaterialized blocks whose computation depends on block b and can be sped up by caching, meaning that the input datasets of the computation are all in memory. We have implemented LERC in Spark and evaluated its performance in a 20-node Amazon EC2 cluster against representative workloads. Our prototype evaluation shows that LERC outperforms both LRU and LRC, reducing the job completion time by 37.0% and 18.6%, respectively. In addition, we have confirmed through experiments that, compared to the widely adopted cache hit ratio, the proposed effective cache hit ratio serves as a more relevant metric for measuring cache performance in data-parallel systems.

§ INEFFICIENCY OF EXISTING CACHE POLICIES

In this section, we introduce the background information and motivate the need for coordinated cache management in data-parallel systems.

§.§ Recency- and Frequency-Based Cache Replacement Policies

Traditional cache management policies optimize the cache hit ratio by evicting data blocks based on their access recency and/or frequency. LRU <cit.> and LFU <cit.> are the two representative algorithms.

* Least Recently Used (LRU): The LRU policy always evicts the data block that has not been accessed for the longest period of time. LRU bets on short-term data popularity. That is, the recently accessed data is assumed to be likely used again in the near future.
LRU is the default cache replacement policy in many popular parallel frameworks, such as Spark <cit.>, Tez <cit.> and Storm <cit.>.

* Least Frequently Used (LFU): The LFU policy always evicts the data block that has been accessed the fewest times. Unlike LRU, LFU bets on long-term data popularity. That is, the data accessed frequently in the past will likely remain popular in the future.

Recency and frequency can also be used in combination, e.g., LRFU <cit.> and K-LRU <cit.>. All of these cache algorithms predict future data access based on historical information, and are typically employed in operating systems, storage, databases and web servers where the underlying data access pattern cannot be known a priori.

§.§ Dependency-Aware Cache Management

In data-parallel frameworks such as Spark <cit.> and Tez <cit.>, compute jobs have rich semantics of data dependency in the form of directed acyclic graphs (DAGs). These dependency DAGs dictate the underlying data access patterns, providing new opportunities for dependency-aware cache management.

For example, Fig. <ref> shows the DAG of a Spark zip job <cit.>. Spark manages data through an abstraction called Resilient Distributed Datasets (RDDs) <cit.>. Each RDD is partitioned into multiple blocks across different machines. In Fig. <ref>, there are three RDDs A, B and C, each consisting of 10 blocks. RDD C is a collection of key-value pairs obtained by zipping RDD A with RDD B. The computation of each block C_i depends on two blocks A_i (key) and B_i (value).

In Spark, the data dependency DAG is readily available to the scheduler upon a job submission. The recently proposed Least Reference Count (LRC) policy <cit.> takes advantage of this DAG information to determine which RDD blocks should be kept in memory.

Least Reference Count (LRC): The LRC policy always evicts the data block with the least reference count. The reference count is defined, for each block, as the number of unmaterialized blocks depending on it. In the example of Fig. <ref>, each block of RDD A and RDD B has reference count 1. Intuitively, the higher the reference count of an RDD block, the more to-be-scheduled tasks depend on it, and the higher the probability that this block will be used in the near future. In many applications, some RDD blocks are used iteratively during the computation, with much higher reference counts than others, e.g., the training datasets for cross-validation in machine learning <cit.>. With LRC, these blocks have a higher chance of being cached in memory than others.

§.§ All-or-Nothing Cache Requirement

Prevalent cache algorithms, be they recency/frequency-based or dependency-aware, settle on the cache hit ratio as their optimization objective. However, the cache hit ratio fails to capture the all-or-nothing requirement of data-parallel tasks and is not directly linked to their computation performance. The computation of a data-parallel task usually depends on multiple data blocks, e.g., the inputs of zip and coalesce operations in Spark <cit.>. A task cannot be sped up unless all of its dependent blocks, which we call peers (e.g., block A_1 and block B_1 in Fig. <ref>), are cached in memory.

Measurement study: To demonstrate the all-or-nothing property in data-parallel systems, we ran the Spark zip job in an Amazon EC2 cluster with 10 instances <cit.>. The job DAG is illustrated in Fig. <ref>, where each of the two RDDs A and B is configured as 200 MB. We repeatedly run the job in rounds. In the first round, no data block is cached in memory.
In each of the subsequent rounds, we add one more block to the cache, following the caching order A_1, B_1, A_2, B_2, …, A_10, B_10. Eventually, all 20 blocks are cached in memory in the final round. In each round, we measure the cache hit ratio and the total runtime of all 10 tasks. Fig. <ref> depicts our measurement results against the number of RDD blocks in memory. Despite the linearly growing cache hit ratio with more in-memory blocks, the task completion time is notably reduced only after the two peering blocks A_i and B_i have been cached.

Inefficiency of existing cache policies: The first step to meeting the all-or-nothing cache requirement is to identify which blocks are peers of each other and should be cached together as a whole. This information can only be learned from the data dependency DAG, but it has not been well explored in the literature. Many existing cache policies, such as LRU and LFU, are oblivious to the DAG information and are unable to retain the all-or-nothing property. The recently proposed LRC policy <cit.>, though DAG-aware, does not differentiate the peering blocks and hence suffers from the same inefficiency as LRU. Referring back to the previous example in Fig. <ref>, we see that blocks a, b and c have the same reference count of 1 and would have an equal chance of being evicted by LRC. In other words, LRC would evict a wrong block (other than c) with probability 2/3 ≈ 67%.

To the best of our knowledge, PACMan <cit.> is the only work that tries to meet the all-or-nothing requirement for cache management in parallel clusters. However, PACMan is agnostic to the semantics of job DAGs, and its objective is to speed up data sharing across different jobs by caching complete datasets (HDFS files). Since PACMan only retains the all-or-nothing property for each individual dataset, if a job depends on multiple datasets, completely caching only a subset of them provides no performance benefit.

In summary, we have shown through a toy example and a measurement study that optimizing the cache hit ratio (the conventional performance metric employed by existing cache algorithms) does not necessarily speed up data-parallel computation, owing to the all-or-nothing cache requirement. We shall design a new cache management policy that meets this requirement based on the peer information extracted from the data dependency DAGs.

§ LEAST EFFECTIVE REFERENCE COUNT

In this section, we define the effective cache hit ratio as a more relevant cache performance metric for data-parallel tasks. To optimize this new metric, we propose a coordinated cache management policy, called Least Effective Reference Count (LERC). We also present our implementation of LERC as a cache manager in Spark.

§.§ Effective Cache Hit Ratio

In a nutshell, a cache hit of a block is effective if it can speed up the computation. In data-parallel frameworks, a compute task typically depends on multiple data blocks. We call all of these blocks peers with respect to (w.r.t.) this task. Caching only a subset of peering blocks provides no speedup for the task. Formally, we have the following definition. For a running task, the cache hit of a dependent block is effective if its peers w.r.t. the task are all in memory.

We now define the effective cache hit ratio as the number of effective cache hits normalized by the total number of block accesses. This ratio directly measures how much compute tasks can benefit from data caching. By referring back to the previous example in Fig.
<ref>, each of the four blocks a, b, c and d is used only once in the computation. Initially, blocks a, b and c are in memory. Since block d is on disk, evicting its peer c has no impact on the effective cache hit ratio, which remains at 50% (i.e., two effective cache hits, for a and b, out of 4 block accesses). Evicting block a (or b), on the other hand, results in no effective cache hit.

The effective cache hit ratio directly measures the performance of a cache algorithm. Algorithms that optimize the traditional cache hit ratio may not perform well in terms of this new metric. Back in the previous example, the LRC policy <cit.> evicts each of the three in-memory blocks a, b and c with equal probability (cf. Sec. <ref>), leading to a low expected effective cache hit ratio of (1/3) × 50% + (2/3) × 0% ≈ 16.7%. The LRU policy can be even worse, as the effective cache hit ratio is 0 unless block c is the least recently used.

A naive approach to optimizing the effective cache hit ratio is a sticky eviction policy. That is, the peering data blocks stick to each other and are evicted as a whole if any of them is not in memory. However, such a sticky policy can be highly inefficient. A data block might be a shared input of multiple tasks. Caching the block, even though it does not help speed up one task (i.e., not all peers w.r.t. that task are in memory), may benefit another. With the sticky policy, this block would surely be evicted, and no task could be sped up. We propose our solution in the next subsection.

§.§ Least Effective Reference Count Caching

We start with the definition of the effective reference count. In data-parallel frameworks, a block may be referenced by many tasks (i.e., used as input). We differentiate effective references from the others by the following definition. Let block b be referenced by task t. We say this reference is effective if all of task t's dependent blocks that have been computed are cached in memory. For a data block, the effective reference count is simply the number of its effective references, which, intuitively, measures how many downstream tasks can be sped up by caching this block. Based on this definition, we present the Least Effective Reference Count (LERC) policy as follows.

Least Effective Reference Count (LERC): The LERC policy evicts the block whose effective reference count is the smallest.

As a running example, we refer back to Fig. <ref>. Each of blocks a and b has effective reference count 1, while block c has effective reference count 0 (its reference by Task 2 is not effective, as block d is not in memory). With the LERC policy, block c is evicted, which is the optimal decision in this example.

LERC has two desirable properties. First, by prioritizing blocks with large effective reference counts, LERC is able to retain the peering blocks as entities as much as possible, through which the effective cache hit ratio is optimized. Second, the effective reference count can easily be extracted from the job DAG that is readily available to the scheduler. A minimal sketch of this eviction logic is given below.
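To make the definitions concrete, the following Python sketch computes effective reference counts for the toy example and selects the LERC eviction victim. It is our own illustration, not the actual Spark implementation; all names and data structures are hypothetical.

# Hypothetical sketch of LERC victim selection for the toy example.
# Each unmaterialized block (a task output) lists the blocks it depends on.
task_deps = {
    "x": {"a", "b"},   # Task 1 coalesces blocks a and b into block x
    "y": {"c", "d"},   # Task 2 coalesces blocks c and d into block y
}
in_memory = {"a", "b", "c"}    # block d resides on disk

def effective_reference_counts(task_deps, in_memory):
    # A reference of a block by a task is effective only if the task's
    # entire peer-group is cached (here, all peers are already computed).
    counts = {b: 0 for deps in task_deps.values() for b in deps}
    for deps in task_deps.values():
        if deps <= in_memory:          # the peer-group is "complete"
            for b in deps:
                counts[b] += 1
    return counts

def lerc_victim(task_deps, in_memory):
    counts = effective_reference_counts(task_deps, in_memory)
    # Evict the cached block with the smallest effective reference count.
    return min(in_memory, key=lambda b: counts.get(b, 0))

# Blocks a and b each have effective reference count 1; blocks c and d
# have count 0, so lerc_victim(...) returns "c", the optimal choice.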
§.§ Spark Implementation

Implementing LERC in data-parallel frameworks, such as Spark, poses some non-trivial challenges. Maintaining a block's effective reference count requires knowing the caching status of its peers on different workers. Synchronizing each block's caching status across workers may incur significant communication overhead. The first approach is to maintain the caching-status profile in a centralized manner at the Spark driver. When a block is evicted from memory, the worker reports to the driver. The driver then updates the effective reference counts of the other peering blocks and broadcasts the updates to all workers. Alternatively, we can let workers maintain the caching status in a distributed manner. Upon a block eviction on a worker, all the other workers must be notified immediately to update the effective reference counts. Both approaches broadcast a large amount of information across the network and are expensive to implement. We next present our implementation, which addresses this problem.

Architecture overview. Fig. <ref> presents an architecture overview of our LERC cache manager in Spark. The driver-side cache master and the per-worker cache agents are the legacy modules of our LRC implementation <cit.>. The cache master in the driver parses the reference-count profile and maintains it together with the cache agents in the workers. In this paper, we have implemented two new components: (1) a peer profiler in the driver that extracts the peer information from the job DAG obtained from the DAG scheduler, and (2) a peer tracker in each worker that reports the status of the peer blocks when necessary.

Workflow. When the Spark driver is launched, the peer profiler is initialized together with the other components in the driver. Upon a job submission, the peer profiler obtains the job DAG from the DAG scheduler and parses out the peer information. The peer-information profile is then broadcast via the cache master to all workers in the cluster. Each peer tracker initially labels every peer-group (all peering blocks w.r.t. a task) as "complete," and then changes the label to "incomplete" if any of the group's materialized blocks has been evicted from memory. Upon a block eviction, the local peer tracker first checks whether this block belongs to any "complete" peer-group. If so, the effective reference counts of the blocks in these peer-groups must be updated to reflect the eviction. A block eviction report is sent to the cache master and broadcast to the other workers. Upon receiving a block eviction message, each peer tracker scans all of its "complete" peer-groups. If any "complete" peer-groups contain the evicted block, the peer tracker labels them as "incomplete" and informs the local cache agent to decrease the effective reference counts of the peering blocks accordingly.

Communication overhead. Our implementation minimizes the number of required communication messages. To see this, we first show that at most one broadcast is triggered for an entire group of peer blocks. By labeling the "complete" peer-groups locally in the workers, there is no need to track the caching status of each peer block separately. Only a block eviction in a "complete" peer-group triggers an update of the peering blocks' effective reference counts. Once a block eviction message has been broadcast, the peer-group becomes "incomplete", and no more update messages will be required for this peer-group. We next show that it is necessary to broadcast at least one block eviction message for each group if any of its peers has been evicted. Since it is possible that some of the evicted block's peers have not been computed yet, the block eviction message must be broadcast to all workers instead of only those holding the evicted block's peers. A minimal sketch of this worker-side bookkeeping follows.
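The following Python sketch (again a hypothetical illustration rather than the actual Spark code; the class and method names are invented) captures the bookkeeping just described: peer-groups start out "complete", the first eviction from a "complete" group triggers a broadcast, and subsequent evictions from the same group trigger none.

# Hypothetical sketch of per-worker peer-group tracking (names invented).
class PeerTracker:
    def __init__(self, peer_groups):
        # peer_groups: one set of block IDs per task (its peer-group).
        self.groups = [{"blocks": set(g), "complete": True}
                       for g in peer_groups]

    def on_local_eviction(self, block):
        # Broadcast a report only if the eviction breaks a "complete" group.
        needs_broadcast = any(g["complete"] and block in g["blocks"]
                              for g in self.groups)
        self.on_eviction_message(block)   # update local state as well
        return needs_broadcast

    def on_eviction_message(self, block):
        # Mark affected groups "incomplete" and return the peers whose
        # effective reference counts the cache agent should decrement.
        affected = set()
        for g in self.groups:
            if g["complete"] and block in g["blocks"]:
                g["complete"] = False     # at most one broadcast per group
                affected |= g["blocks"] - {block}
        return affected

Because a group is marked "incomplete" the first time one of its blocks is evicted, a later eviction from the same group returns no broadcast, which matches the at-most-one-broadcast argument above.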
§ EVALUATION

In this section, we evaluate the efficacy of the LERC cache manager with synthesized workloads in Amazon EC2. We also confirm that the effective cache hit ratio is a more relevant metric of cache performance than the widely used cache hit ratio.

Cluster deployment. Our implementation is based on Spark 1.6.1. In order to highlight the advantage of memory locality, we disabled the OS page cache by triggering direct disk I/O from/to the hard disk. We deployed an Amazon EC2 <cit.> cluster with 20 nodes, each with a dual-core 2.4 GHz Intel Xeon E5-2676 v3 (Haswell) processor and 8 GB memory.

Experiment settings. In the experiment, we simulated 10 tenants submitting Spark zip jobs <cit.> in parallel. In each job, two files of 400 MB are first partitioned into 100 blocks and stored on the 20 machines. Since the 10 jobs operate on different files, the total size of the input blocks is 400 MB × 2 × 10 = 8 GB. The cache manager decides which blocks should be stored on disk when the cache is full. After that, 100 zip tasks are scheduled for each job to zip the two files into 100 key-value pairs, where the keys are the data of the first file, and the values are the data of the second file. Notice that only when both the key and the value are cached in memory will a zip task be sped up. We measure the total completion time of the 10 jobs to compare the performance of different cache policies.

§.§ Job Completion Time

We conducted the experiment using three cache replacement policies, i.e., LRU, LRC, and LERC, with different memory cache sizes. In particular, we configured the storage memory size in the legacy Spark to throttle the memory used for RDD caching to a given size. We measured the total experiment runtime, i.e., the makespan of the 10 submitted jobs. We repeated each experiment 10 times and depict the average results in Fig. <ref>. The error bars show the maximum and minimum completion times over the 10 runs.

As expected, as the size of the RDD cache increases, the total experiment runtime decreases under all three cache policies. In all cases, LRC consistently outperforms the default LRU policy. LERC further reduces the experiment completion time over LRC. When the cache size is 5.3 GB, for instance, the average runtimes under the three policies are 284 s (LRU), 220 s (LRC) and 179 s (LERC), respectively. The LERC policy speeds up job completion by 37.0% and 18.6% compared to the LRU and LRC policies, respectively.

§.§ Effective Cache Hit Ratio

We now evaluate the relevance of the two metrics, i.e., the effective cache hit ratio and the cache hit ratio, in measuring cache performance in data-parallel clusters. Both metrics were recorded in the previous experiments. The results are shown in Fig. <ref> and Fig. <ref>. Fig. <ref> shows that LRC achieves the highest cache hit ratio, with LERC following closely. This is because LRC aims to optimize the cache hit ratio, and it outperforms LRU by taking advantage of the DAG information. LERC also makes use of this information, but it gives up retaining those ineffective cache hits that are unable to speed up tasks. It is for this reason that its cache hit ratio is slightly compromised.

Fig. <ref> shows that LERC always achieves the highest effective cache hit ratio. The smaller the cache size, the more advantageous LERC is. We therefore conclude that LERC makes the best use of constrained cache resources to effectively speed up compute tasks. As the available cache space increases, LRC is more likely to retain an entire peer-group in memory, and its effective cache hit ratio becomes closer to that of LERC. On the other hand, when the cache size is small, many peer blocks have to be evicted from memory. Since LERC needs to maintain the effective reference counts, a significant communication overhead is incurred. For this reason, in Fig.
<ref>, LERC does not save much job runtime compared with LRC when the cache volume is relatively small, even though it achieves a much higher effective cache hit ratio. As the cache size increases, less communication cost is incurred, and the advantage in the effective cache hit ratio becomes consistent with that in the job runtime.

Notice that the effective cache hit ratio of LRU is always near zero in our experiment. Since the tenants submit their jobs in parallel, the first file (the keys required by zip) of each job is highly likely to be replaced, under the LRU policy, by the second file (the values required by zip) of other jobs arriving later. Therefore, when the zip tasks start, only the values are cached, resulting in zero effective cache hits.

We conclude this section by noting that although LRC achieves the highest cache hit ratio throughout the experiments, it incurs longer job completion times than LERC. On the other hand, LERC achieves a higher effective cache hit ratio than LRC. Its relative advantage in the effective cache hit ratio is consistent with that in the job completion time (when the communication overhead is not a concern). We therefore draw the conclusion that the effective cache hit ratio serves as a more relevant metric of cache performance in our experiments.

§ CONCLUSIONS

In this paper, we have identified the defining all-or-nothing property of data-parallel systems, i.e., a compute task can only be sped up when all of its input datasets are cached in memory. Existing cache management policies are agnostic to this all-or-nothing requirement. Instead, they settle on the cache hit ratio as their optimization objective and hence cannot effectively reduce the task runtime in data-parallel environments. To address this problem, we have proposed the effective cache hit ratio as a more relevant cache performance metric for data-parallel tasks. We have designed a coordinated cache policy, Least Effective Reference Count (LERC), that optimizes this metric by evicting the data blocks with the smallest effective reference counts. We have implemented LERC as a pluggable cache manager in Spark, and evaluated its performance through Amazon EC2 deployments. Experimental results validated the relevance of the effective cache hit ratio and the performance advantage of the LERC policy. Compared with the popular LRU policy and the recently proposed LRC policy, LERC speeds up job completion by up to 37% and 19%, respectively.
http://arxiv.org/abs/1708.07941v1
{ "authors": [ "Yinghao Yu", "Wei Wang", "Jun Zhang", "Khaled B. Letaief" ], "categories": [ "cs.DC" ], "primary_category": "cs.DC", "published": "20170826070301", "title": "LERC: Coordinated Cache Management for Data-Parallel Systems" }
Network Analysis of Particles and Grains

Lia Papadopoulos, Department of Physics & Astronomy, University of Pennsylvania, USA ([email protected])
Mason A. Porter, Department of Mathematics, University of California Los Angeles, USA; Mathematical Institute, University of Oxford, UK; CABDyN Complexity Centre, University of Oxford, UK ([email protected])
Karen E. Daniels, Department of Physics, North Carolina State University, USA ([email protected])
Danielle S. Bassett^*, Departments of Bioengineering and Electrical & Systems Engineering, University of Pennsylvania, USA (^*Corresponding author: [email protected])

December 30, 2023
===========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================

The arrangements of particles and forces in granular materials have a complex organization on multiple spatial scales that ranges from local structures to mesoscale and system-wide ones. This multiscale organization can affect how a material responds or reconfigures when exposed to external perturbations or loading. The theoretical study of particle-level, force-chain, domain, and bulk properties requires the development and application of appropriate physical, mathematical, statistical, and computational frameworks. Traditionally, granular materials have been investigated using particulate or continuum models, each of which tends to be implicitly agnostic to multiscale organization. Recently, tools from network science have emerged as powerful approaches for probing and characterizing heterogeneous architectures across different scales in complex systems, and a diverse set of methods have yielded fascinating insights into granular materials. In this paper, we review work on network-based approaches to studying granular matter and explore the potential of such frameworks to provide a useful description of these systems and to enhance understanding of their underlying physics. We also outline a few open questions and highlight particularly promising future directions in the analysis and design of granular matter and other kinds of material networks.

Keywords: Granular materials, particulate systems, networks, network science, multiscale organization

§ GLOSSARY OF TERMS

Granular materials: Granular materials are collections of discrete, macroscopic particles that interact with each other through contact (rather than long-range) forces. Importantly, these systems are non-equilibrium: the particles are large enough to avoid rearrangement under thermal fluctuations, and they lose energy through frictional and inelastic interactions with neighboring particles.

Particulate materials: Like granular materials, particulate materials are collections of discrete, macroscopic elements. However, the elements making up the system may be entities — such as bubbles, foams, colloids, or suspensions — that include multiple phases of matter. The term "particulate material" is a more general one than "granular material".

Packing fraction: The fraction of a granular material that consists of the particles.
One calculates the packing fraction as the ratio of the total volume of all particles to the volume of the region that they occupy. The packing fraction is also sometimes called the "packing density" or "volume fraction".

Force chain: Force chains are typically described as the subset of inter-particle contacts in a granular material that carry the largest forces in the system. They often form filamentary networks that align preferentially with the principal stress axes under which a material is loaded.

Jamming: As certain system parameters change, disordered, particulate materials typically undergo a transition from an underconstrained, liquid-like state to a rigid, solid-like state characterized by the onset of mechanical stability. The transition to/from a jammed state may arise through increases/decreases in quantities like packing fraction or contact number, which can occur due to an applied load. A formal theory of jamming exists for idealized situations (with soft, frictionless, and spherical particles).

Isostatic: A jammed packing is isostatic when it has exactly the minimum number of contacts that are required for maintaining mechanical stability through force balance and torque balance. One typically examines isostaticity by calculating the mean number of contacts per particle. "Hyperstatic" and "hypostatic" packings have more and fewer contacts, respectively.

Mono/bi/poly-disperse: A particulate material is monodisperse if it is composed of particles of a single species (with the same size, shape, and material properties). Bidisperse materials have particles of two species, and polydisperse materials can either have particles from three or more discrete species or have particles with a continuum of properties.

Structural rigidity theory: For a specified structure composed of fixed-length rods connected to one another by hinges, structural rigidity theory studies the conditions under which the associated structural graph is able to resist deformations and support applied loads.

Stress: A stress is a force applied to an object's surfaces. (The units measure a force per unit area.) Shear stress arises from the component of the applied force that acts in a direction parallel to the object's cross-section, and normal stress arises from the perpendicular component.

Strain: Strain is the fractional (unitless) deformation of a material that arises due to an applied stress. One calculates strain from the relative displacement of particles in a material, excluding rigid-body motion such as translation or rotation. Like stress, strain has both shear and normal components.

Pure shear: One can use the term "pure shear" to describe either stresses or strains in which an object is elongated along one axis and shortened in the perpendicular direction without inducing a net rotation.

Axial compression: In axial compression, one applies inward forces to an object in one direction (uniaxial), two directions (biaxial), or all directions (isotropic compression). These forces result in uniaxial strain, biaxial strain, or isotropic strain, respectively.

Cyclic shear/compression: These consist of repeated cycles of shear or compression applied to the same system.

Shear band: A shear band is a narrow region of a particulate material in which most of the strain is localized, whereas other regions remain largely undeformed.
A shear band is also sometimes called a region of "strain localization".

Strain softening/hardening: As a material is loaded and undergoes deformation, continuing deformation can become either easier (strain softening) or harder (strain hardening). Eventually, after much deformation, the material can reach a critical state in which there are no further changes in the resistance to deformations.

Stress ratio: The stress ratio, which is analogous to Coulomb's Law, is the ratio of shear to normal stresses. Frictional failure occurs when the shear force exceeds the product of the normal force and the coefficient of friction.

Photoelasticity/birefringence: Photoelasticity is an optical technique for quantifying internal stresses based on the transmission of polarized light through "birefringent" materials, which have preferentially fast and slow directions for the propagation of light.

DEM or MD simulations: The Discrete (or Distinct) Element Method and Molecular Dynamics simulations are related numerical techniques that compute the motions of all particles in a system (such as a granular material). In each method, a computer algorithm treats each particle as an object subject to Newton's laws of motion, where forces consist of body forces (e.g., gravity) and those that arise from interactions with the object's neighbors.

§ INTRODUCTION

Granular materials comprise a subset of the larger set of particulate matter <cit.>. People engage with such materials — which include sands, beans, grains, powders such as cornstarch, and more — often in their daily lives. One can define a granular material as a large collection of discrete, macroscopic particles that interact only when in contact. Granular materials are inherently non-equilibrium in two distinct ways, characterized by (1) the lack of rearrangement under thermal fluctuations and (2) the loss of energy through frictional and inelastic dissipation during contact between grains. Nonetheless, they phenomenologically reproduce equilibrium states of matter, exhibiting characteristics of solids (forming rigid materials), liquids (flowing out of a container), or gases (infrequent contacts between grains), depending on the type and amount of driving. In this review, we focus mainly on granular solids and slow (non-inertial) flows <cit.>; these are dense materials in which sustained inter-particle contacts provide the dominant contribution to material properties.

The functional properties of granular materials are related in a nontrivial way to the complex manner in which particles interact with one another and to the spatial scales (particle, chain, domain, and bulk) and time scales over which those interactions occur. For example, pairs of particles can exert force on one another in a local neighborhood. However, as particles push on adjacent particles, the combined effect can transmit forces over long distances via important mesoscale structures commonly called force chains <cit.>. The idea of networks has been invoked for many years to help provide a quantitative understanding and explanation of force-chain organization <cit.>. Broadly speaking, force chains form a network of filamentary-like structures that are visually apparent in images from experiments, like the one shown in Fig. <ref>. In such images, the brighter particles carry larger forces <cit.>. Furthermore, force chains tend to align preferentially along the principal stress axes <cit.>.
It can be helpful to think of a force-chain network as the backbone of strong forces that span a system, providing support for both static <cit.> and dynamic <cit.> loading. However, weaker forces can also play a stabilizing role, much as guy-wires do on an aerial tower <cit.>. It is also possible for sets of particles to cluster together into larger geographical domains, with potentially distinct properties, that can have weak structural boundaries between them <cit.>. At the largest scale, granular materials as a whole exhibit bulk properties, such as mechanical stability or instability in response to shear or compression <cit.>. All of the aforementioned spatial scales are potentially relevant for understanding phenomena such as transmission of acoustic waves <cit.>, thermal conductivity and heat transfer <cit.>, electrical properties <cit.>, and more. The time scales of interactions in granular materials are also important, and they can vary over many orders of magnitude. For example, in systems under compression, statistical fluctuations of grain displacements depend fundamentally on the length of the strain step (i.e., "increment") over which one makes measurements, as fluctuations over short windows are consistent with anomalous diffusion and those over longer windows are consistent with Brownian behavior <cit.>.

The principled study of such diverse characteristics and organization in a single system can be very challenging, and the development of appropriate physical, mathematical, statistical, and computational models is important to attain a mechanistic understanding of granular materials. Traditionally, it has been common to model granular materials using either particulate-based or continuum-based frameworks <cit.>. However, both of these approaches are often implicitly agnostic to intermediate-scale organization, which is important for understanding both static granular packings <cit.> as well as granular dynamics <cit.>. Recently, tools from network science <cit.> and related mathematical subjects — which include approaches that can account explicitly for mesoscale structures <cit.> — have been used successfully to study properties of granular materials across multiple spatial and temporal scales. The most common representation of a network, an important idea for the study of complex systems of interacting entities <cit.>, is as a graph <cit.>. A graph consists of a set of nodes (to represent the entities) and a set of edges, each of which represents an interaction between a pair of entities (or between an entity and itself). Increasingly, more complicated network representations (such as multilayer networks <cit.>) are also employed. Moreover, there is also an increasing recognition that it is important to consider the impact of other features, such as spatial embedding and other spatial effects <cit.>, on network structure and dynamics, rather than taking an approach that promises that "one size fits all." Network science offers methods for quantitatively probing and analyzing large, interacting systems whose associated networks have heterogeneous patterns that defy explanations attained by considering exclusively all-to-all, regular, or lattice-like interactions <cit.>.

There are several open problems in granular physics that may benefit from network-science approaches.
In particular, because granular materials have multiple relevant length and time scales <cit.>, it can be challenging to model and quantify their structural organization, material properties, and responses to external loads <cit.>. However, although complex, the pairwise inter-particle interactions that underlie and govern the structure and behavior of granular systems (and other particulate matter) render them amenable to various network representations and network-based analyses. <cit.> were among the first to explicitly suggest and formalize the use of ideas from network science to begin to study some of the difficult questions in granular physics. In their paper, they highlighted the ability of a network-based perspective to complement traditional methods for studying granular materials and to open new doors for the analysis of these complex systems. One place in which network analysis may be especially useful is in quantifying how local, pairwise interactions between particles in a granular packing yield organization on larger spatial scales (both mesoscale and system-level). For example, in sheared or compressed granular packings, such organization can manifest as force chains or other intermediate-sized sets of particles that together comprise a collective structure. Network science provides approaches to extract and quantitatively characterize heterogeneous architectures at microscale, mesoscale, and macroscale sizes, and one can use these methods to understand important physical phenomena, including how such multiscale organization relates to bulk material properties or to spatial and temporal patterns of force transmission through a material.

Network-based approaches should also be able to provide new insights into the mechanisms that govern the dynamics of granular materials. For example, as we will discuss, network analysis can helpfully describe certain aspects of complex dynamics (such as granular flows), and can provide quantitative descriptions of how the structure of a dense granular material evolves as a system deforms under external loads (such as those induced by compression, shear, tapping, or impact). It seems sensible to use a network-based approach when a network is changing on temporal scales slower than the time that it takes for information to propagate along it. We also expect ideas from temporal networks <cit.> or adaptive networks <cit.> to be fruitful for studying faster dynamics, and investigation of granular dynamics in general should benefit from the development of both novel network representations and methods of network analysis that are designed specifically to understand temporally evolving systems. Another important problem in the study of granular materials is to predict when and where a granular system will fail. There has been some progress made in this area using network-based approaches, but it is important to continue to develop and apply tools from network analysis and related areas to gain a deeper understanding of which network features regulate or are most indicative of eventual failure. Another exciting direction for future work is to combine network-based approaches with questions about material design.
In particular, can one use network-based approaches to help engineer granular systems — or other materials that are amenable to network representations — with desired and specialized properties?

It is also important to note that network-based representations and methods of analysis can provide insightful descriptions of granular materials with various additional complexities, such as systems that are composed of differently-shaped particles, 3-dimensional (3D) materials, and so on. This flexibility makes the application of tools from network science a powerful approach for studying the structural properties and dynamics of granular networks. Such a framework also allows one to compare network architectures in diverse situations, such as between simulations and experiments, across systems that are composed of different types of particles or exposed to different loading conditions, and more. Exploiting these capabilities will yield improved understanding of which properties and behaviors of granular materials are general, versus which are specific to various details of a system. With the continued development of physically-informed network-analysis tools, network-based approaches show considerable promise for further development of both qualitative and quantitative descriptions of the organization and complex behavior of granular materials.

The purpose of our paper is to review the nascent application of network theory (and related topics) to the study of granular materials. We begin in Sec. <ref> with a mathematical description of networks. In Sec. <ref>, we briefly review a set of measures that one can calculate on graphs and which have been useful in past investigations of granular materials. In Sec. <ref>, we review several different ways in which granular materials have been represented as networks, and we discuss investigations of such networks to quantify heterogeneous, multiscale organization in granular materials and to understand how these systems evolve when exposed to external perturbations. We also point out insights into the underlying physics that have resulted from network-based investigations of granular matter. We close in Sec. <ref> with some thoughts on the many remaining open questions, and we describe a few specific future directions that we feel are important to pursue. We hope that our review will be helpful for those interested in using tools from network science to better understand the physics of granular systems, and that it will spur interest in using these techniques to inform material design.

§ NETWORK CONSTRUCTION AND CHARACTERIZATION

§.§ What is a network?

It is often useful to model a complex system as a network, the simplest type of which is a graph <cit.>. For most of our article, we will use the terms network and graph synonymously, but the former concept is more general than the latter.[Indeed, it is increasingly important to examine network representations that are more complicated than graphs (see Secs.
<ref> and <ref>) — such as multilayer networks <cit.>, simplicial complexes <cit.>, and others — and it is also essential to study dynamical processes on networks, rather than focusing exclusively on structural characteristics <cit.>.]

A graph G consists of nodes (i.e., vertices), where pairs of nodes are adjacent to each other via edges (i.e., links). We denote the set of nodes by 𝒱 and the set of edges by ℰ. A node can also be adjacent to itself via a self-edge (which is also sometimes called a self-loop), and a multi-edge describes the presence of two or more edges that are attached to (i.e., incident to) the same pair of nodes. (Unless we state otherwise, we henceforth assume that our networks have neither self-edges nor multi-edges.) The number of nodes in a graph is the size of the graph, and we also use the word "size" in the same way for other sets of nodes. A subgraph of a graph G is a graph constructed using a subset of G's nodes and edges.

An edge between two nodes represents some sort of relationship between them. For example, edges can represent information flow between different parts of the internet <cit.>, friendship or other social interactions between people <cit.>, trading between banks <cit.>, anatomical or functional connections between large-scale brain regions <cit.>, physical connections between particles in contact <cit.>, and so on. Edges can be either unweighted or weighted, and they can be either undirected or directed <cit.>. In an unweighted (i.e., binary) network, an edge between two nodes is assigned a binary value (traditionally 0 or 1) to encode the absence or presence of a connection. In a weighted network, edges can take a variety of different values to convey varying strengths of relationships between nodes. In an undirected network, all edges are bidirectional, so one assumes that all relationships are reciprocal. In a directed network, however, edges have a direction that encodes a connection from one node to another.

An adjacency matrix is a useful way to represent the information in a graph. For an unweighted and undirected graph, an element of the adjacency matrix 𝐀 of an N-node network is

A_ij = {1, if there is an edge between nodes i and j; 0, otherwise},

where i, j ∈ {1, …, N}. For a network in which nodes do not have labels, one can apply a permutation to A's rows and columns — one uses the same permutation for each — to obtain another adjacency matrix that represents the same network. In this paper, when we refer to "the" adjacency matrix A of a graph with unlabeled nodes, we mean any one of these matrices, which have the same properties (spectra, etc.).

For a weighted graph, if nodes i and j are adjacent via an edge, we denote the corresponding edge weight by w_ij, which is usually given by a nonnegative real number (e.g., the value of the normal or tangential component of the force between two contacting particles).[We do not consider edges with negative weights, although it may be interesting to do so in future work if there is an appropriate physical reason.] The associated weighted adjacency matrix (which is sometimes called a weight matrix) 𝐖 is

W_ij = {w_ij, if there is an edge between nodes i and j; 0, otherwise}.

For the more general case of a weighted, directed graph, if there is an edge from node j to node i, then we let w_ij represent the weight of that edge <cit.>. The associated weighted and directed adjacency matrix 𝐖 is

W_ij = {w_ij, if there is an edge from node j to node i; 0, otherwise}.
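As a concrete, purely illustrative example, the following Python sketch builds the binary and weighted adjacency matrices for a hypothetical four-particle packing in which edge weights are normal-force magnitudes; the specific contacts and force values are invented for illustration.

import numpy as np

# Hypothetical 4-particle packing: each contact (i, j, f) carries a
# normal-force magnitude f, which we use as the edge weight w_ij.
contacts = [(0, 1, 1.5), (1, 2, 0.7), (2, 3, 2.1), (0, 2, 0.4)]
N = 4

W = np.zeros((N, N))               # weight matrix
for i, j, f in contacts:
    W[i, j] = W[j, i] = f          # undirected, so W is symmetric

A = (W != 0).astype(int)           # binary adjacency matrix: A_ij = 1 iff W_ij != 0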
An adjacency matrix associated with an undirected network is symmetric, but an adjacency matrix for a directed network need not be (and is symmetric if and only if all directed edges are reciprocated). In the present review, we primarily consider undirected networks, although we will occasionally make remarks about directed situations.

For weighted graphs, it is often also important to consider a binary adjacency matrix 𝐀 associated with a weight matrix 𝐖. Note that 𝐀 captures only the connectivity of nodes (i.e., their adjacencies), irrespective of how strongly they interact with each other. In terms of 𝐖, the corresponding binary network (which can be either directed or undirected) is

A_ij = {1, if W_ij ≠ 0; 0, otherwise}.

It is common to use terms like network topology when discussing structural properties of 𝐀, and sometimes one uses terms like network geometry when discussing properties that also depend on edge weights. Because we will also employ ideas from subjects like algebraic topology (see Sec. <ref>), we will need to be very careful with such terminology.

The network representations that have been used to study granular matter (and other kinds of materials) employ diverse definitions of edges (both weighted and unweighted, and both directed and undirected), and some generalizations of graphs have also been considered. See Fig. <ref> for a schematic showing possible choices of nodes and edges for a network representation of a granular packing. A variety of tools and measures from network analysis have been used to study granular networks. We discuss some of these ideas in Sec. <ref>.

§.§ Some tools for characterizing granular networks

Network theory <cit.> provides myriad ways to characterize and quantify the topological and geometrical organization of complex networks. Thus, different network methods can reveal different important features of the underlying system, and these features, in turn, can help explain how a system behaves in certain situations. In the context of granular matter, for example, it is often desirable to understand the stability of a material, mechanical responses to external stresses, or wave propagation through a system. Recent investigations have demonstrated that network analysis can inform understanding of the mechanisms that underlie these phenomena. In this section, we discuss several network concepts, and in Sec. <ref>, we describe how they have been used for the study of granular materials. We are, of course, not presenting anything close to an exhaustive list of tools from network science. See <cit.> and other books and reviews (and references therein) for discussions of other tools from network science. For simplicity, we primarily give definitions for undirected networks, though many of the ideas that we present also have directed counterparts. We start with basic network diagnostics, and also discuss some more complicated methods.

§.§.§ Degree.

One local property of a network is node degree. In an undirected network, a node's degree is equal to the number of edges that are attached to it (see Fig. <ref>d). We denote the degree of node i as k_i, and we recall that N denotes the total number of nodes. For an unweighted graph with adjacency matrix A, one can calculate k_i with the formula

k_i = ∑_j=1^N A_ij .

One can generalize the idea of degree to strength (i.e., weighted degree) using a weight matrix 𝐖 <cit.>. The strength s_i of node i is equal to the sum of the weights of the edges that are attached to that node:

s_i = ∑_j=1^N W_ij .
Its associated degree k_i is still given by Eq. (<ref>).

A common network representation of granular materials is to treat particles as nodes and physical contacts between particles as either unweighted or weighted edges (see Fig. <ref>a–c). In this representation, node degree and node strength quantify information at the scale of single particles. One can compute the mean degree (a global property) of a network by calculating

⟨k⟩ = (1/N) ∑_i k_i ,

and one can similarly compute the mean strength with the formula

⟨s⟩ = (1/N) ∑_i s_i .

In an undirected network, mean degree is related to N and the total number m of edges through the relation ⟨k⟩ = 2m/N. It is sometimes useful to characterize a network using its degree distribution P(k) <cit.>, which gives the probability that a node has degree k.

When one represents a granular packing as a contact network (see Fig. <ref> and Sec. <ref>), which is a binary network (i.e., unweighted network), the degree k_i of node i is known more commonly in the physics literature as its contact number or coordination number Z_i. If every node has the same degree (or even if most nodes have this degree), such as in a regular lattice, one often refers to the mean coordination number Z of the lattice or packing. It is well-known that coordination number is related to the stability of granular packings <cit.> and plays a critical role in the jamming transition <cit.>, a change of phase from an underconstrained state to a rigid state that is characterized by the onset of mechanical stability.[Unless we note otherwise, we use the phrase jamming in the formal sense of the jamming transition as defined by <cit.>. Packings of particles above the jamming point (a critical point related to the jamming transition) are rigid and overconstrained (i.e., "hyperstatic"), those at this point are marginally stable and exactly constrained (i.e., "isostatic"), and those below this point are underconstrained (i.e., "hypostatic"). Additionally, packings below the jamming point are sometimes called "unjammed", and those above the jamming point are called "jammed".] We discuss these ideas further throughout the review.

§.§.§ Walks and paths.

In a network, a walk is an alternating sequence of nodes and edges that starts and ends at a node, such that consecutive edges are both incident to a common node. A walk thus describes a traversal from one node to another node (or to itself) along edges of a network. A path is a walk that does not intersect itself or visit the same node (or the same edge) more than once, except for a closed path, which starts and ends at the same node (see Sec. <ref>). One can compute the number of (unweighted) walks of a given length from a binary, undirected adjacency matrix A <cit.>. The length l of an unweighted walk is defined as the number of edges in the associated sequence (counting repeated edges the number of times that they appear). Letting Ξ^l_ij denote the number of walks of length l between nodes i and j, one calculates

Ξ^l_ij = [𝐀^l]_ij .

Various types of random walks yield short paths between nodes in a network, and such ideas (and their relation to topics such as spectral graph theory) are very insightful for studying networks <cit.>.

In an undirected network, a path from node i to node j is necessarily also a path from node j to node i. However, this is not typically true in directed networks, which have sometimes been utilized in studies of granular force chains (see, e.g., <cit.>). A brief computational illustration of these node-level measures and walk counts appears below.
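Continuing the toy packing from the earlier sketch (again a hypothetical illustration, not tied to any particular experimental system), degrees, strengths, the mean degree, and walk counts follow directly from 𝐀 and 𝐖:

import numpy as np

# Rebuild the toy matrices from the earlier sketch.
W = np.zeros((4, 4))
for i, j, f in [(0, 1, 1.5), (1, 2, 0.7), (2, 3, 2.1), (0, 2, 0.4)]:
    W[i, j] = W[j, i] = f
A = (W != 0).astype(int)

k = A.sum(axis=1)      # degrees k_i = sum_j A_ij (coordination numbers Z_i)
s = W.sum(axis=1)      # strengths s_i = sum_j W_ij
mean_k = k.mean()      # mean degree <k>, which equals 2m/N
walks3 = np.linalg.matrix_power(A, 3)   # [A^3]_ij = number of length-3 walks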
Depending on a network's structure, there may be several or no paths between a given pair of nodes. An undirected network is called connected if there exists a path between each pair of nodes, and a directed network is called strongly connected if there is a path between each pair of nodes <cit.>. (A directed network is called weakly connected if its associated undirected network is connected, and a strongly connected network is necessarily also weakly connected.) In networks of granular packings, both the existence and lengths of paths can impact system behavior. A connected network consists of a single component. When a network has multiple components, it is common to study one component at a time (e.g., focusing on a largest connected component (LCC), which is one that has the largest number of nodes).

The length of an unweighted path is the number of edges in the associated sequence, and it is sometimes also called the hop distance (or occasionally, unfortunately, the topological distance). Paths in a network can also be weighted by defining some (possibly abstract) notion of distance associated with the edges of the network. For example, in a spatially-embedded network <cit.>, distance may refer to actual physical distance along an edge, in which case the length of a weighted path in the network is given by the sum of the physical distances along the sequence of edges in the path. However, one can also consider "distance" more abstractly. For example, in a transportation or flow network, one can define a distance between two adjacent nodes to be some measure of resistance between those nodes, and then the length of a weighted path in such a network is given by the sum of the resistances along the sequence of edges in the path.[One can also calculate distances between nodes if they occupy positions in a metric space (such as a latent one that determines the probability that a pair of nodes is adjacent to each other) <cit.>, and the properties of that metric space can influence the distances (namely, the ones along a network) that concern us.]

We use the term network distance to indicate a distance between two nodes (which can be either unweighted or weighted) that is computed by summing along edges in a path. A geodesic path — i.e., a shortest path (which need not be unique) — between two nodes can be particularly relevant (though other short paths are often also relevant), and a breadth-first search (BFS) algorithm <cit.> is commonly employed to find geodesic paths in a network. The diameter of a graph is the maximum geodesic distance between any pair of nodes. Denoting the shortest, unweighted network distance (i.e., shortest-path distance) between nodes i and j as d_ij, the mean shortest-path distance L between pairs of nodes in a graph is <cit.>

L = [1/(N(N-1))] ∑_i,j (i ≠ j) d_ij .

Note that one must be cautious when computing the mean shortest-path distance on disconnected networks (i.e., on networks that have more than one component), because the usual convention is to set the distance between two nodes in different components to be infinite <cit.>. Therefore, in a network with multiple components, the distance L from Eq. (<ref>) is infinite. One solution to this problem is to compute L for each component separately. Another network notion that relies on paths is network efficiency <cit.>

E = [1/(N(N-1))] ∑_i,j (i ≠ j) 1/d_ij .

A brief sketch of how one can compute these unweighted path-based quantities appears below.
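For instance, one can compute the distances d_ij by BFS and then average, as in this minimal Python sketch for unweighted networks (our own illustration; it follows the convention that unreachable pairs have infinite distance, so L is infinite for disconnected networks while E remains finite):

import numpy as np
from collections import deque

def bfs_distances(A, source):
    # Hop distances from `source`; unreachable nodes keep distance infinity.
    N = len(A)
    dist = np.full(N, np.inf)
    dist[source] = 0
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in range(N):
            if A[u, v] and np.isinf(dist[v]):
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def mean_shortest_path_and_efficiency(A):
    N = len(A)
    D = np.array([bfs_distances(A, i) for i in range(N)])
    off_diag = ~np.eye(N, dtype=bool)   # exclude the i = j terms
    L = D[off_diag].mean()              # infinite if the network is disconnected
    E = (1.0 / D[off_diag]).mean()      # 1/inf -> 0, so E stays finite
    return L, E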
Letting d^w_ij denote the shortest, weighted network distance between nodes i and j, one can define a weighted mean shortest-path distance L^w and weighted efficiency E^w as in Eqs. (<ref>) and (<ref>), respectively, but now one uses d_ij^w instead of d_ij. The network efficiency E is a normalized version of the Harary index of a graph <cit.>. Additionally, the convention d_ij = ∞ (or d^w_ij = ∞) if there is no path from i to j allows one to use Eq. (<ref>) (or its weighted counterpart E^w) on connected graphs or on graphs with more than one component. For both unweighted and weighted scenarios, large values of network efficiency tend to correspond to small values of mean shortest-path length, and vice versa. One can also readily generalize notions of paths, distances, and efficiency to directed networks <cit.>.

In later sections, we will describe the use of paths, walks, and related ideas for investigating the structure of granular materials and their response to perturbations — including, but not limited to, how these quantities change as a granular packing is compressed and goes through the jamming transition <cit.> — and we will also describe their use in specific applications, such as in understanding heat transfer through a granular material <cit.>.

§.§.§ Cycles.

A cycle (i.e., a closed walk) in a network is a walk that begins and ends at the same node <cit.>. As with other walks, one way to characterize a cycle is by calculating its length. An l-cycle is a cycle in which l edges are traversed (counting repeated edges the number of times that they appear in the cycle). A simple cycle is a cycle that does not include repeated nodes or edges, aside from one repetition of the origin node at the termination of a closed cycle. Thus, for example, a simple 3-cycle in an undirected network is a triangle. For the remainder of this review, we assume that cycles are simple cycles, unless we explicitly state otherwise. In the context of a granular packing, one can directly map particle contact loops — sets of physically-connected grains arranged in a circuit — to cycles in a corresponding graphical representation (see Fig. <ref>d). A cycle is called an odd cycle if its length l is odd and an even cycle if l is even.

We briefly note a few related concepts that are used to examine cycles in graphs because of their relevance to several network-based studies of granular materials. These are the notions of cycle space, cycle basis, and minimum cycle basis <cit.>. The cycle space of an undirected graph is the set of all simple cycles in a graph along with all subgraphs that consist of unions of edge-disjoint simple cycles (i.e., they can share nodes but not edges) <cit.>. A cycle basis is a minimal set of simple cycles such that any element of the cycle space can be written as a symmetric difference of cycles in the cycle basis <cit.>. Finally, for unweighted networks, a minimum cycle basis is a basis in which the total length of all cycles in the basis is minimal. For weighted networks, it is a basis in which the sum of the weights of all cycles in the basis is minimal.

Minimum cycle bases can provide useful information about the structure and organization of cycles in a network, so several algorithms have been developed to extract them (see, for example, <cit.>). Once one has determined a minimum cycle basis, one can examine the distribution of cycle lengths or define measures to quantify the participation of different nodes in cycles of different lengths.
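For example, one might extract a minimum cycle basis and its cycle-length distribution as in the following sketch (assuming NetworkX's minimum_cycle_basis routine; the grid graph is a placeholder for a contact network):

import networkx as nx

G = nx.grid_2d_graph(4, 4)                  # placeholder contact network
basis = nx.minimum_cycle_basis(G)           # list of cycles, each a list of nodes
lengths = sorted(len(c) for c in basis)     # distribution of cycle lengths
print(lengths)                              # for this grid: all 4-cycles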
For example, <cit.> defined the concept of a cycle-participation vector X_i^cycle = [x_i^0,x_i^3,…,x_i^l] for each node i. The elements of this vector count the number of cycles of each length in which node i participates. In this definition, x_i^3 is the number of 3-cycles in which node i participates, x_i^4 is the number of 4-cycles in which node i participates, and so on (up to cycles of length l). If a node is not part of any cycle, then x_i^0 = 1 and x_i^j = 0 for all j ≥ 3; otherwise, x_i^0 = 0.

One reason to examine cycles in granular networks <cit.> is that they can help characterize mesoscale structural features of a network. Cycles (that are nontrivial) involve more than a single node, but they do not typically embody global structures of a large network. This makes them appealing for studying network representations of granular materials, because mesoscale features seem to play a role in the behavior of these systems <cit.>. Perhaps the most important motivation, however, is that cycles appear to be relevant for stability and rigidity of a system. Specifically, in the context of structural rigidity theory, 3-cycles tend to be stabilizing structures that can maintain rigidity under applied forces <cit.>, whereas 4-cycles can bend or deform (see Sec. <ref>). In Sec. <ref>, we discuss in more detail how cycles can help characterize granular systems.

§.§.§ Clustering coefficients.

Clustering coefficients are commonly-used diagnostics to measure the density of triangles either locally or globally in a network <cit.>. For an unweighted, undirected network, the local clustering coefficient C_i is usually defined as the number of triangles involving node i divided by the number of triples centered at node i <cit.>. A triple centered at node i consists of node i together with two of its neighbors; the three nodes are connected by either two edges or three edges (with the latter forming a 3-cycle). In terms of the adjacency matrix and node degree, the local clustering coefficient is

C_i = ∑_hj A_hj A_ih A_ij / [k_i(k_i - 1)]

for k_i ≥ 2 (and C_i = 0 if k_i ∈ {0,1}). One can then calculate a global clustering coefficient of a network as the mean of C_i over all nodes:

C = 1/N ∑_i=1^N C_i .

There is also another (and simpler) common way of defining a global clustering coefficient in a network that is particularly useful when trying to determine analytical approximations of expectations over ensembles of random graphs <cit.>.

The notion of a local clustering coefficient has also been extended to weighted networks in several ways <cit.>. In one formulation <cit.>, a local, weighted clustering coefficient C^w_i is defined as

C^w_i = 1/[s_i(k_i - 1)] ∑_j,h (W_ij + W_ih)/2 A_ij A_ih A_jh

for strength s_i > 0 and degree k_i ≥ 2. The quantity C^w_i = 0 if either s_i = 0 (so that k_i = 0) or k_i = 1.

Recall that 𝐖 and 𝐀 are, respectively, associated weighted and unweighted adjacency matrices. The mean of C^w_i over all nodes gives a weighted clustering coefficient C^w of a network. As we will discuss later (see Secs. <ref> and <ref>), clustering coefficients have been employed in several studies of granular materials. For example, they have been used to examine stability in granular packings <cit.>. See Fig. <ref>a for an example of the spatial distribution of a clustering coefficient in a granular packing.

§.§.§ Centrality measures.

In network analysis, one calculates centrality measures to attempt to quantify the importance of particular nodes, edges, or other structures in a network <cit.>. Different types of centralities characterize importance in different ways.
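The following sketch is our own NumPy transcription of the two clustering equations above (the 4-particle force matrix is hypothetical); note that the numerator of C_i sums over ordered pairs of neighbors, so each triangle is counted twice, which matches the k_i(k_i − 1) normalization:

import numpy as np

W = np.array([[0.0, 1.2, 0.4, 0.0],
              [1.2, 0.0, 0.9, 0.6],
              [0.4, 0.9, 0.0, 0.0],
              [0.0, 0.6, 0.0, 0.0]])
A = (W > 0).astype(float)

def clustering(A):
    k = A.sum(axis=1)
    tri = np.diag(A @ A @ A)           # (A^3)_ii = twice the triangle count at i
    return np.where(k >= 2, tri / np.maximum(k * (k - 1.0), 1.0), 0.0)

def weighted_clustering(W, A):
    k, s, N = A.sum(axis=1), W.sum(axis=1), W.shape[0]
    Cw = np.zeros(N)
    for i in range(N):
        if k[i] >= 2 and s[i] > 0:
            acc = sum(0.5 * (W[i, j] + W[i, h]) * A[i, j] * A[i, h] * A[j, h]
                      for j in range(N) for h in range(N))
            Cw[i] = acc / (s[i] * (k[i] - 1.0))
    return Cw

print(clustering(A), weighted_clustering(W, A))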
The degree centrality (i.e., degree) of a node, for example, is simply the number of edges attached to it (see Sec. <ref>). A few other types of centrality that have been used to study granular materials are closeness centrality, node betweenness centrality, edge betweenness centrality, and subgraph centrality.

Notions of closeness centrality of a node measure how close that node is to other nodes in a network <cit.>. For a given node i, the most standard notion of closeness is defined as the inverse of the sum over the shortest-path lengths from node i to all other nodes j in a network. That is, node i's closeness centrality is

H_i = (N-1)/∑_j ≠ i d_ij .

Note that if we use the convention that the distance between two nodes in different components is infinite, then Eq. (<ref>) only makes sense for connected networks. For any network with more than one component, Eq. (<ref>) yields a closeness centrality of 0.

The geodesic node betweenness centrality of node i is the fraction of geodesic paths (either unweighted or weighted) between distinct nodes (not including i) that traverse node i <cit.>. Let ψ_gh(i) denote the number of geodesic paths from node g to node h that traverse node i (with i ∉ {g,h}), and let ψ_gh denote the total number of geodesic paths from node g to node h. The geodesic node betweenness centrality of node i is then

B_i = ∑_g,h; g ≠ h ψ_gh(i)/ψ_gh , i ∉ {g,h} .

Geodesic node betweenness can probe the heterogeneity of force patterns in granular networks. See Fig. <ref>b for an example spatial distribution of a geodesic node betweenness centrality in an experimental granular packing. One can also compute a geodesic edge betweenness centrality of an edge by calculating the fraction of shortest paths (either unweighted or weighted) that traverse it <cit.>. Let ψ_gh(i,j) denote the number of geodesic paths from node g to node h that traverse the edge that is attached to nodes i and j, and let ψ_gh denote the total number of geodesic paths from node g to node h. The geodesic edge betweenness centrality of this edge is then

B^e_ij = ∑_g,h; g ≠ h ψ_gh(i,j)/ψ_gh .

Another measure of node importance is subgraph centrality Y <cit.>, which quantifies a node's participation in closed walks of all lengths. Recall from Sec. <ref> that one can write the number of length-l walks from node i to node j in terms of powers of the adjacency matrix 𝐀. To calculate closed walks of length l that begin and end at node i, we take i = j in Eq. (<ref>). The subgraph centrality of node i, with a specific choice for how much we downweight longer paths, is then given by

Y_i = ∑_l=0^∞ [𝐀^l]_ii/l! .

Because shorter walks are weighted more strongly than longer walks in Eq. (<ref>), they contribute more to the value of subgraph centrality. (In other contexts, centrality measures based on walks have also been used to compare the spatial efficiencies of different networks, and such ideas are worth exploring in granular materials <cit.>.) One can also express subgraph centrality in terms of the eigenvalues and eigenvectors of the adjacency matrix <cit.>. Let v^i_α denote the ith component of the αth eigenvector v_α of 𝐀, and let λ_α denote the corresponding αth eigenvalue. One can then write

Y_i = ∑_α=1^N (v^i_α)^2 e^λ_α .

One can then calculate a mean subgraph centrality Y by averaging Y_i over the nodes in a network. In one study of granular materials <cit.>, a subgraph centrality was examined for weighted networks by considering the eigenvalues and eigenvectors of the weight matrix 𝐖 in Eq. (<ref>).
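The sketch below (assuming NetworkX; the karate-club graph is only a convenient placeholder) computes each of these centralities and cross-checks the library's subgraph centrality against the spectral expression above:

import numpy as np
import networkx as nx

G = nx.karate_club_graph()                             # placeholder network
H = nx.closeness_centrality(G)                         # H_i
B = nx.betweenness_centrality(G, normalized=False)     # geodesic node betweenness
Be = nx.edge_betweenness_centrality(G, normalized=False)
Y = nx.subgraph_centrality(G)                          # Y_i

# Cross-check: Y_i = sum_alpha (v_alpha^i)^2 exp(lambda_alpha).
A = nx.to_numpy_array(G)
lam, V = np.linalg.eigh(A)                             # eigenpairs of symmetric A
Y_spectral = (V**2) @ np.exp(lam)
assert np.allclose([Y[i] for i in G.nodes], Y_spectral)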
One can also compute network bipartivity R <cit.> to quantify the contribution to mean subgraph centrality Y from closed walks of even length. In particular, the network bipartivity R_i of node i is

R_i = Y^even_i/Y_i ,

where Y^even_i is the contribution to the sum in Eq. (<ref>) from even values of l (i.e., even-length closed walks). As with other node diagnostics, one can average bipartivity over all nodes in a network to obtain a global measure, which we denote by R. In Sec. <ref>, we will discuss calculations of closeness, betweenness, and subgraph centralities in granular packings. Obviously, our discussion above does not give an exhaustive presentation of centrality measures, and other types of centralities have also been used in studies of granular materials (see, for example, <cit.>).

§.§.§ Subgraphs, motifs, and superfamilies.

One can interpret the local clustering coefficient in Eq. (<ref>) as a relationship between two small subgraphs: a triangle and a connected triple. Recall that a subgraph of a graph G is a graph constructed using a subset of G's nodes and edges. Conceptually, one can interpret small subgraphs as building blocks or subunits that together can be used to construct a network. For example, in a directed network, there exist three possible 2-node subgraphs (i.e., dyads): the dyad in which node i is adjacent to node j by a directed edge, the dyad in which node j is adjacent to node i by a directed edge, and the dyad in which both of these adjacencies exist. In a directed, unweighted graph, there are 13 different connected 3-node subgraphs <cit.> (see Fig. <ref>). The term motif is sometimes used for a small subgraph that occurs often in a particular network or set of networks (typically relative to some null model, such as a randomly rewired network that preserves the original degree distribution) <cit.>. Borrowing terminology from genetics, these motifs appear to be overexpressed in a network (or set of networks). Unsurprisingly, the number of n-node subgraphs increases very steeply with n, so identifying subgraphs in large networks is computationally expensive, and many algorithms have been developed to estimate the number of subgraphs in an efficient (though approximate) way. See, for example, <cit.>. In applying algorithms for motif counting to data, one seeks to identify subgraphs that are present more often than expected in some appropriate random-network null model. The over-representation of a motif in a network is often interpreted as indicative of its playing a role in the function of that network (though one has to be cautious about drawing such conclusions). For example, 3-node motifs include the feedforward loop, in which there are directed edges from node i_1 to node i_2, from node i_2 to node i_3, and from node i_1 to node i_3. The identification and characterization of motifs has yielded insights into the structure and function of a variety of systems, including food webs <cit.>, gene-regulation networks of yeast <cit.>, neuronal networks of the macaque monkey <cit.>, and others. For different types of networks, one can also identify so-called superfamilies, which are sets of networks that have similar motif-frequency distributions <cit.>.
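Two short sketches tie this subsubsection together. The first computes the bipartivity R_i defined above, using the fact that the even-l terms of the subgraph-centrality series sum to cosh(λ_α) while all terms sum to e^λ_α; the second uses NetworkX's triadic census, which classifies all 3-node subgraphs of a directed network into 16 isomorphism classes (13 of them connected), as a starting point for motif counting. The graphs are placeholders:

import numpy as np
import networkx as nx

# Node bipartivity R_i = Y_i^even / Y_i via the spectral decomposition.
G = nx.karate_club_graph()
lam, V = np.linalg.eigh(nx.to_numpy_array(G))
R = ((V**2) @ np.cosh(lam)) / ((V**2) @ np.exp(lam))
print(R.mean())                          # global bipartivity R

# Triad census: counts per 3-node class; comparing against a randomized null
# model (not shown) identifies over-represented classes, i.e., motifs.
D = nx.gnp_random_graph(60, 0.08, directed=True, seed=3)
census = nx.triadic_census(D)
print(census['030T'], census['030C'])    # feedforward vs. cyclic (feedback) triads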
There also exists a less-stringent definition of a superfamily in which one disregards whether a subgraph is a motif in the sense of it being more abundant than expected from some random-graph null model and instead considers a superfamily to be a set of networks that have the same rank-ordering of the number of n-node subgraphs for some fixed value of n <cit.>. In either case, one can examine different superfamilies to help understand the role that specific motifs (or subgraphs) or sets of motifs (or subgraphs) may have in potentially similar functions of networks in a given superfamily. Subgraphs, motifs, and superfamilies have been examined in several studies that applied network analysis to granular materials <cit.>. They have revealed interesting insights into the deformation and reconfiguration that occurs in granular systems for different types of loading conditions and external perturbations. We discuss these ideas further in Secs. <ref> and <ref>.

§.§.§ Community structure.

Many real-world networks also have structure on intermediate scales (mesoscales) that can arise from particular organizations of nodes and edges <cit.>. The most commonly-studied mesoscale network property is community structure <cit.>, which describes sets of nodes, called communities, that are densely (or strongly) interconnected to each other but only weakly connected to other dense sets of nodes. In other words, a community has many edges (or large total edge weight, in the case of weighted networks) between its own nodes, but the number and/or weight of edges between nodes in different communities is supposed to be small. Once one has detected communities in a network, one way to quantify their organization is to compute and/or average various network quantities over the nodes (or edges) within each community separately, rather than over an entire network. For example, one can compute the size (i.e., number of nodes) of a community, mean path lengths between nodes in a given community, or some other quantity to help characterize the architecture of different communities in a network. Studying community structure can reveal useful insights about granular systems, whose behavior appears to be influenced by mesoscale network features <cit.>. Community structure and methods for detecting communities have been studied very extensively <cit.>. We will briefly discuss the method of modularity maximization <cit.>, in which one optimizes an (occasionally infamous) objective function known as modularity, as this approach has been employed previously in several studies of granular materials (see, e.g., Sec. <ref>). However, myriad other approaches exist for studying community structure in networks. These include stochastic block models (SBMs) and other methods for statistical inference (which are increasingly favored by many scholars) <cit.>, approaches based on random walks (e.g., InfoMap <cit.>), various methods for detecting local community structure (see, e.g., <cit.>), edge-based communities <cit.>, and many others.

The goal of modularity maximization is to identify communities of nodes that are more densely (or more strongly) interconnected with other nodes in the same community than expected with respect to some null model. To do this, one maximizes a modularity objective function

Q = ∑_i,j [W_ij - γ P_ij] δ(g_i,g_j) ,

where g_i is the community assignment of node i and g_j is the community assignment of node j, and where the Kronecker delta δ(g_i,g_j)=1 if g_i = g_j and δ(g_i,g_j)=0 otherwise.
The quantity γ is a resolution parameter that adjusts the relative average sizes of communities <cit.>, where smaller values of γ favor larger communities and larger values of γ favor smaller communities <cit.>. The element P_ij is the expected weight of the edge between node i and node j under a specified null model. In many contexts, the most common choice is to determine the null-model matrix elements P_ij from the Newman–Girvan (NG) null model <cit.>, for which

P^NG_ij = s_i s_j/(2m) ,

where s_i = ∑_j W_ij is the strength (and, for unweighted networks, the degree k_i) of node i and m = (1/2)∑_i,j W_ij is the total edge weight (and, for unweighted networks, the total number of edges) in the network. There are several other null models, which are usually based on a random-graph model, and they can incorporate system features (such as spatial information) in various ways <cit.>. In the next part of this subsubsection, we discuss a physically-motivated null model that is useful for studying granular force networks.

Maximizing Q is NP-hard <cit.>, so it is necessary to use computational heuristics to identify near-optimal partitions of a network into communities of nodes <cit.>. Two well-known choices are the Louvain <cit.> and Louvain-like <cit.> locally greedy algorithms, which begin by placing each node in its own community and then iteratively agglomerate nodes when the resulting partition increases modularity Q. Because of the extreme near degeneracy of the modularity landscape (a very large number of different partitions can have rather similar values of the scalar Q), it is often useful to apply such an algorithm many times to construct an ensemble of partitions, over which one can average various properties to yield a consensus partition <cit.>.

Physical considerations. Community-detection tools, such as modularity maximization, have often been applied to social, biological, and other networks <cit.>. In applying these techniques to granular materials, however, it is important to keep in mind that the organization of particulate systems (such as the arrangements of particles and forces in a material) is subject to significant spatial and physical constraints, which can severely impact the types of organization that can arise in a corresponding network representation of the material. When studying networks that are embedded in real space or constructed via some kind of physical relationship between elements, it is often crucial to consider the spatial constraints — and, more generally, a system's underlying physics — and their effects on network architecture <cit.>. Such considerations also impact how one should interpret network diagnostics such as path lengths and centrality measures, the null models that one uses in procedures such as modularity maximization, and so on. The NG null model was constructed to be appropriate for networks in which a connection between any pair of nodes is possible. Clearly, in granular materials — as in other spatially-embedded systems <cit.> — this assumption is unphysical and therefore problematic. Bassett et al. <cit.> defined a null model that accounts explicitly for geographical (and hence spatial) constraints in granular materials, in which each particle can contact only its nearest neighbors <cit.>.
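A minimal sketch of this workflow, assuming a recent NetworkX release that ships a Louvain implementation: run the heuristic over several random seeds to build an ensemble of partitions, then keep (or aggregate) the best-scoring one:

import networkx as nx

G = nx.karate_club_graph()                  # placeholder network
partitions = [nx.community.louvain_communities(G, weight='weight',
                                               resolution=1.0, seed=s)
              for s in range(20)]           # ensemble over the rugged landscape
best = max(partitions,
           key=lambda p: nx.community.modularity(G, p, weight='weight'))
print(len(best), nx.community.modularity(G, best))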
In the context of granular networks with nodes representing particles and edges representing forces between those particles, the geographical null model P in <cit.> has matrix elements

P_ij = ρ A_ij ,

where ρ is the mean edge weight in the network and A is the binary adjacency matrix of the network. In this particular application, ρ = f := ⟨ f_ij ⟩ is the mean inter-particle force. As we illustrate in Fig. <ref>, modularity maximization with the geographical null model [Eq. (<ref>)] produces different communities than modularity maximization with the NG null model [Eq. (<ref>)] <cit.>.

Generalization of modularity maximization to multilayer networks. Although studying community structure in a given granular packing can provide important insights, one is also typically interested in how such mesoscale structures reconfigure as a material experiences external perturbations, such as those from applied compression or shear. To examine these types of questions, one can optimize a multilayer generalization of modularity to study multilayer granular force networks in which each layer represents a network at a different step in the evolution of the system (for example, at different time steps or for different packing fractions) <cit.>. In Fig. <ref>, we show a schematic of a multilayer construction that has been employed in such investigations. See <cit.> for reviews of multilayer networks (including generalizations of this construction).

One way to detect multilayer communities in a network is to use a generalization of modularity maximization <cit.>, which was derived for multilayer networks with interlayer edges between counterpart nodes in different layers. For simplicity, we suppose that all edges are bidirectional. One maximizes

Q_multi = 1/(2η) ∑_ijqr [(𝒲_ijq - γ_q 𝒫_ijq) δ_qr + ω_jqr δ_ij] δ(g_iq,g_jr) ,

where 𝒲_ijq is the (i,j)th component of the qth layer of the adjacency tensor 𝒲 <cit.> associated with the multilayer network, 𝒫_ijq is the (i,j)th component of the qth layer of the null-model tensor, γ_q is a resolution parameter (sometimes called a structural resolution parameter) for layer q, and ω_jqr is the interlayer coupling between layers q and r. (In the context of multilayer representations of temporal networks, if ω_jqr = ω for all j, q, and r, one can interpret ω as a temporal resolution parameter.) More specifically, ω_jqr is the strength of the coupling that links node j in layer q to itself in layer r. (This type of interlayer edge, which occurs between counterpart nodes in different layers, is called a diagonal edge <cit.>.) The quantities g_iq and g_jr, respectively, are the community assignments of node i in layer q and node j in layer r. The intralayer strength of node j in layer q is s_jq = ∑_i 𝒲_ijq, and the interlayer strength of node j in layer q is ζ_jq = ∑_r ω_jqr, so the multilayer strength of node j in layer q is given by κ_jq = s_jq + ζ_jq. Finally, the normalization factor η = (1/2)∑_jq κ_jq is the total strength of the adjacency tensor.[In the study of multilayer networks, it is common to use the term “tensor” to refer to a multidimensional array <cit.> (as is common in some disciplines <cit.>), and proper tensorial structures have been explored briefly in adjacency tensors <cit.>.]

Maximizing multilayer modularity [Eq. (<ref>)] allows one to examine phenomena such as evolving communities in time-dependent networks or communities that evolve with respect to some other parameter, and communities in networks with multiple types of edges.
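To make the single-layer geographical null model concrete, the following NumPy sketch (our own; the force matrix and partition are hypothetical) evaluates Q for a given partition under both null models:

import numpy as np

W = np.array([[0.0, 1.2, 0.4, 0.0],
              [1.2, 0.0, 0.9, 0.0],
              [0.4, 0.9, 0.0, 0.3],
              [0.0, 0.0, 0.3, 0.0]])         # hypothetical force network
A = (W > 0).astype(float)
g = np.array([0, 0, 1, 1])                   # hypothetical community assignments
gamma = 1.0                                  # resolution parameter

s = W.sum(axis=1)                            # strengths
m = W.sum() / 2.0                            # total edge weight
P_ng = np.outer(s, s) / (2.0 * m)            # Newman-Girvan null model
P_geo = W[A > 0].mean() * A                  # geographical null model, rho = <f_ij>

same = (g[:, None] == g[None, :])            # Kronecker delta delta(g_i, g_j)
Q_ng = ((W - gamma * P_ng) * same).sum()
Q_geo = ((W - gamma * P_geo) * same).sum()
print(Q_ng, Q_geo)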
Capturing such behavior has been useful in many applications, including financial markets <cit.>, voting patterns <cit.>, international relations <cit.>, international migration <cit.>, disease spreading <cit.>, human brain dynamics <cit.>, and more. In the context of granular matter, multilayer community detection allows one to examine changes in community structure of a force network, in which communities can either persist or reconfigure, with respect to both particle content and the mean strength of nodes inside a community, due to applied loads on a system.

§.§.§ Flow networks.

One can examine many natural and engineered systems — such as animal and plant vasculature, fungi, and urban transportation networks <cit.> — from the perspective of flow networks (which are often directed) that transport a load (of fluids, vehicles, and so on) along their edges. It is of considerable interest to examine how to optimize flow through a network <cit.>. A well-known result from optimization theory is the maximum-flow–minimum-cut theorem <cit.>: for a suitable notion of flow and under suitable assumptions, the maximum flow that can pass from a source node to a sink node is given by the total weight of the edges in the minimum cut, which is the set of edges with smallest total weight that, when removed, disconnect the source and the sink. A related notion, which applies to networks in which there is some cost associated with transport along network edges, is that of maximum-flow–minimum-cost. In this context, one attempts to find a route through a network that maximizes flow transmission from source to sink, while minimizing the cost of flow along network edges <cit.>. The maximum-flow–minimum-cut and maximum-flow–minimum-cost problems are usually examined under certain constraints, such as flow conservation at each node and an upper bound (e.g., limited by a capacitance) on flow through any edge. One can examine granular networks from such a perspective by considering a flow of force along a network formed by contacting grains. We discuss relevant studies in Sec. <ref>.

§.§.§ Connected components and percolation.

Sometimes it is possible to break a network into connected subgraphs called components (which we introduced briefly in Sec. <ref>). A component, which is sometimes called a cluster, is a subgraph G_C of a graph G such that at least one path exists between each pair of nodes in G_C <cit.>. Components are maximal subsets in the sense that adding any other node of G to a component destroys the property of connectedness. An undirected graph is connected when it consists of a single component. Networks with more than one component often have one component that has many more nodes than the other components, so there can be one large component and many small components. One can find the components of a graph using a breadth-first search (BFS) algorithm <cit.>, and one can determine the number of components by counting the number of 0 eigenvalues of a graph's combinatorial Laplacian matrix <cit.>. To study graph components, one can also use methods from computational algebraic topology. Specifically, the zeroth Betti number β_0 indicates the number of connected components in a graph <cit.> (see Sec. <ref>).

Percolation theory <cit.>, which builds on ideas from subjects such as statistical physics and probability theory, is often used to understand the emergence and behavior of connected components in a graph <cit.>.
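The following sketch (assuming NetworkX; the grid with uniform unit capacities is only a stand-in for a force-carrying granular network) computes a maximum flow and its dual minimum cut, and verifies the max-flow–min-cut theorem numerically:

import networkx as nx

G = nx.grid_2d_graph(4, 4).to_directed()     # placeholder network, edges both ways
nx.set_edge_attributes(G, 1.0, 'capacity')   # uniform capacities for this sketch

source, sink = (0, 0), (3, 3)
flow_value, flow_dict = nx.maximum_flow(G, source, sink)
cut_value, (S, T) = nx.minimum_cut(G, source, sink)
assert flow_value == cut_value               # max-flow--min-cut theorem
print(flow_value)                            # 2.0: the corner source has 2 edges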
For example, in the traditional version of what is known as bond percolation (which is also traditionally studied on a lattice rather than on a more general network) <cit.>, edges are occupied with probability p, and one examines quantities such as the size distributions of connected components as a function of the parameter p, called the bond occupation probability. It is especially interesting to determine a critical value p_c, called the percolation threshold, at which there is a phase transition: below p_c, there is no percolating component (or cluster), which spans the system and connects opposite sides; above p_c, there is such a cluster <cit.>. In the latter scenario, it is common to say that there is a “percolating network”. In percolation on more general networks, one can study how the size of the largest component, as a fraction of the size of a network, changes with p. Related ideas also arise in the study of components in Erdős–Rényi random graphs G(N,p), in which one considers an ensemble of N-node graphs and p is the independent probability that an edge exists between any two nodes <cit.>. In the limit N → ∞, the size of the largest connected component (LCC) undergoes a phase transition at a critical probability p_c = 1/N. When p < p_c, the ER graph in expectation does not have a giant connected component (GCC); a GCC emerges at p = p_c, and its size scales linearly with N for p > p_c. Similarly, for bond percolation on networks, a transition occurs at a critical threshold p_c, such that for p > p_c, there is a GCC (sometimes also called a “giant cluster” or “percolating cluster”) whose size is a finite fraction of the total number N of nodes as N → ∞ <cit.>. When studying percolation on networks, quantities of interest include the fraction of nodes in the LCC, the mean component size, the component-size distribution, and critical exponents that govern how these quantities behave just above the percolation threshold <cit.>.

We will see in Sec. <ref> that it can be informative to use ideas from percolation theory to study the organization of granular networks. For example, it is particularly interesting to examine how quantities such as the number and size of connected components evolve as a function of packing density (or another experimental parameter) <cit.>. Some studies have considered connectivity percolation transitions, which are characterized by the appearance of a connected component that spans a system (i.e., a percolating cluster, as reflected by an associated GCC in the infinite-size limit of a network); or rigidity percolation transitions, which can be used to examine the transition to jamming <cit.>. Rigidity percolation is similar to ordinary bond percolation (which is sometimes used to study connectivity percolation), except that edges represent the presence of rigid bonds between network nodes <cit.>, and one examines the emergence of rigid clusters in the system as a function of the fraction of occupied bonds.

One can also study percolation in force networks by investigating the formation of connected components and the emergence of a percolating cluster of contacts as a function of a force threshold, which is a threshold applied to a force-weighted adjacency matrix (representing contact forces between particles) to convert it to a binary adjacency matrix <cit.>. (See Sec. <ref> for additional discussion.)
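A sketch of the force-threshold construction just described (the random symmetric force matrix is hypothetical; NetworkX is assumed for component extraction): sweep the threshold downward and track the fraction of nodes in the LCC:

import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
N = 100
W = np.triu(rng.random((N, N)) * (rng.random((N, N)) < 0.06), k=1)
W = W + W.T                                   # hypothetical symmetric force network

for theta in np.linspace(W.max(), 0.0, 5):    # descending force thresholds
    A = (W > theta).astype(int)               # keep contacts with f_ij > theta
    G = nx.from_numpy_array(A)
    lcc = max(nx.connected_components(G), key=len)
    print(f"theta = {theta:.2f}: LCC fraction = {len(lcc)/N:.2f}")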
However, it is important to note that when studying networks of finite size, one needs to be careful with claims about GCCs and percolation phase transitions, which are defined mathematically only in the limit of infinite system size.

§.§.§ Methods from algebraic topology and computational topology.

The tools that we have described thus far rely on the notion of a dyad (i.e., a 2-node subgraph) as the fundamental unit of interest (see Fig. <ref>a). However, recent work in algebraic topology and computational topology <cit.> offers a complementary view, in which the fundamental building blocks that encode relationships between elements of a system are k-simplices (each composed of k+1 nodes), rather than simply nodes and dyadic relations between them (see Fig. <ref>b). These structures can encode “higher-order” interactions and can be very useful for understanding the architecture and function of real-world networks (e.g., they yield a complementary way to examine mesoscale network features), and they have been insightful in studies of sensor networks <cit.>, contagion spreading <cit.>, protein interactions <cit.>, neuronal networks <cit.>, and many other problems. See <cit.> for further discussion and pointers to additional applications. The discussion in <cit.> is also useful.

A collection of simplices that are joined in a compatible way is called a simplicial complex, which is a generalization of a graph that can encode non-dyadic relations <cit.>. More precisely, and following <cit.>, we define an (abstract) simplicial complex 𝒳 as a pair of sets: V_𝒳, called the vertices (or nodes); and S_𝒳, called the simplices, each of which is a finite subset of V_𝒳, subject to the requirement that if σ ∈ S_𝒳, then every subset τ of σ is also an element of S_𝒳. A simplex with k elements is called a (k-1)-simplex, and subsets τ ⊂ σ are called faces of σ. Using this notation, a 0-simplex is a node, a 1-simplex is an edge and its two incident nodes (i.e., a dyad), a 2-simplex is a filled triangle, and so on (see Fig. <ref>b). One type of simplicial complex that can be used to encode the information in a graph is a clique complex (sometimes also called a flag complex); we show an example in Fig. <ref>. To construct the clique complex of a graph G, one associates every k-clique (a complete — i.e., fully connected — subgraph of k nodes) in G with a (k-1)-simplex. One can thus think of building the clique complex of a graph G as “filling in” all of the k-cliques in G (see Fig. <ref>c). Note that we use the terms k-simplex and k-clique because they are standard, but it is important not to confuse the use of k in this context with the use of k as the (also standard) notation for node degree.

One important feature of a simplicial complex is the potential presence of cycles.[Although we use the term cycle, which is standard in algebraic topology, note that this concept of a cycle is distinct from (though related to) the standard network-science use of the word “cycle” (see Sec. <ref>). The latter is sometimes called a circuit, a term that we will use occasionally for clarity (especially given our focus on connected graphs).] A cycle can consist of any number of nodes, and a k-dimensional cycle is defined as a closed arrangement of k-simplices, such that a cycle has an empty boundary[The precise mathematical definition of a cycle requires a more detailed presentation than what we include in our present discussion. For more information and further details from a variety of perspectives, see <cit.>.].
For example, Fig. <ref>d illustrates a closed arrangement of 1-simplices (i.e., edges) that forms a 1-dimensional cycle. It is important to distinguish between cycles that encircle a region that is filled by simplices and cycles that enclose a void (which is often called a “hole” for the case of 1-dimensional cycles). For example, the set of purple edges in the object in the upper portion of Fig. <ref>d constitute a 1-dimensional cycle that surrounds a region filled by 2-simplices (i.e., filled triangles), whereas the purple edges in the object in the lower portion of Fig. <ref>d constitute a 1-dimensional cycle that encloses a hole. Characterizing the location and prevalence of void-enclosing cycles in the clique complex of a network representation of a granular packing can offer fascinating insights into the packing's structure <cit.>. One way to do this is by computing topological invariants such as Betti numbers <cit.>.

The kth Betti number β_k counts the number of inequivalent k-dimensional cycles that enclose a void, where two k-dimensional cycles are equivalent if they differ by a boundary of a collection of (k+1)-simplices. In other words, the kth Betti number β_k counts the number of nontrivial equivalence classes of k-dimensional cycles and can thus also be interpreted as counting the number of voids (i.e., “holes” of dimension k).[In the literature, it is common to abuse terminology and refer to an equivalence class of k-dimensional cycles simply as a k-dimensional cycle.] The zeroth Betti number β_0 gives the number of connected components in a network, the first Betti number β_1 gives the number of inequivalent 1-dimensional cycles that enclose a void (i.e., it indicates loops), the second Betti number β_2 gives the number of inequivalent 2-dimensional cycles that enclose a void (i.e., it indicates cavities), and so on.

Another useful way to examine the topological features that are determined by equivalence classes of k-dimensional cycles (i.e., components, loops, cavities, and so on) is to compute persistent homology (PH) of a network. For example, to compute PH for a weighted graph, one can first decompose it into a sequence of binary graphs. One way to do this is to begin with the empty graph and add one edge at a time in order of decreasing edge weights (see Fig. <ref>e). More formally and following <cit.>, this process can translate information about edge weights into a sequence of binary graphs as an example of what is called a filtration <cit.>. The sequence G_0 ⊂ G_1 ⊂ … ⊂ G_|ℰ| of unweighted graphs begins with the empty graph G_0, and one adds one edge at a time (or multiple edges, if some edges have the same weight) in order from largest edge weight to smallest edge weight. (One can also construct filtrations in other ways.) Constructing a sequence of unweighted graphs in turn yields a sequence of clique complexes <cit.>, allowing one to examine equivalence classes of cycles as a function of the edge weight θ (or another filtration parameter). Important values of θ include the weight θ_birth associated with the first graph in which an equivalence class (i.e., a topological feature) occurs (i.e., its birth coordinate) and the edge weight θ_death associated with the first graph in which the feature disappears (i.e., its death coordinate), such as by being filled in with higher-dimensional simplices or by merging with an older feature.
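For the β_0 features (components), the descending-weight filtration can be tracked directly with a union-find structure. The following is a pure-Python sketch of this idea (our own illustration, not the algorithm of any particular cited paper): components are born when their first edge appears, and when two components merge, the one born at the smaller weight dies (the elder rule); survivors receive death coordinate -1, following the convention discussed below.

import numpy as np

def pd0_descending(W):
    """Sketch: beta_0 persistence pairs for a descending edge-weight filtration."""
    N = W.shape[0]
    parent = list(range(N))
    birth = [None] * N                  # birth weight of each root's component

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    edges = sorted(((W[i, j], i, j) for i in range(N) for j in range(i + 1, N)
                    if W[i, j] > 0), reverse=True)
    pairs = []
    for w, i, j in edges:
        for node in (i, j):             # a component is born with its first edge
            if birth[find(node)] is None:
                birth[find(node)] = w
        ri, rj = find(i), find(j)
        if ri != rj:
            old, young = (ri, rj) if birth[ri] >= birth[rj] else (rj, ri)
            if birth[young] > w:        # skip zero-persistence pairs
                pairs.append((birth[young], w))
            parent[young] = old         # elder rule: the younger component dies
    survivors = {find(i) for i in range(N) if birth[find(i)] is not None}
    pairs.extend((birth[r], -1) for r in survivors)
    return pairs

W = np.array([[0, 3, 0], [3, 0, 1], [0, 1, 0]], dtype=float)
print(pd0_descending(W))    # [(3.0, -1)]: one component born at w = 3, never dies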
One potential marker of the relative importance of a particular feature (a component, a loop, and so on) in the clique complex is how long it persists, as quantified by its lifetime θ_birth - θ_death (although short-lived features can also be meaningful <cit.>). A large lifetime indicates robust features that persist over many values of a filtration parameter. Persistence diagrams (PDs) are one useful way to visualize the evolution of k-dimensional cycles with respect to a filtration parameter. PDs encode birth and death coordinates of features as a collection of persistence points (θ_birth,θ_death) in a planar region. One can construct a PD for each Betti number: a β_0 PD (denoted by PD_0) encodes the birth and death of components in a network, a β_1 PD (denoted by PD_1) encodes the birth and death of loops, and so on.

To demonstrate some key aspects of a filtration, the birth and death of topological features, and PDs, we borrow and adapt an example from <cit.>. Consider the small granular force network in Fig. <ref>a; the nodes represent particles in a 2D granular packing, and the colored edges represent the magnitude of the inter-particle forces (of which there are four distinct values) between contacting particles. In a 2D system like this one, the only relevant Betti numbers are β_0 and β_1, as all others are 0. In Fig. <ref>b, we show the flag complex (which is essentially the same as a clique complex <cit.>) of the granular network, where the color of a triangle indicates the value corresponding to the minimum force along any of its edges. Computing PH on a flag complex (which has been done in several studies of PH in granular force networks <cit.>) only counts loops that include 4 or more particles. That is, it does not count 3-particle loops (which are sometimes called “triangular loops”). Loops with 4 or more particles are associated with defects, because they would not exist in a collection of monosized disks that are packed perfectly and as densely as possible into a “crystalline” structure (which has only triangular loops) <cit.>.

In Fig. <ref>c–f, we show the sequence of complexes that correspond to the filtration over the flag complex. One descends the four threshold levels (i.e., edge weights), beginning with the largest (θ_4) and ending with the smallest (θ_1). In Fig. <ref>g,h, we show the corresponding PDs for β_0 and β_1. It is helpful to discuss a few features of these diagrams. In PD_0, we observe four points that are born at θ_4; these points correspond to the four connected components that emerge at the first level of the filtration in Fig. <ref>c. Two of the components merge into one component at θ_3 (see Fig. <ref>d); this corresponds to the point at (θ_4,θ_3). A new component forms at θ_3 and dies at θ_2; this is represented by the point at (θ_3,θ_2) (see Fig. <ref>d). Additionally, two components born at θ_4 die at θ_2, corresponding to the two points at (θ_4,θ_2). One can continue this process until the end of the filtration, where there is just a single connected component (see Fig. <ref>f). This component is born at θ_4; it persists for all thresholds, and we use <cit.>'s convention to give it a death coordinate of -1; this yields the persistence point at (θ_4,-1). In PD_1, we observe that a loop emerges at θ_3 (see Fig. <ref>d), and it is then filled by triangles at θ_2 (see Fig. <ref>e), leading to the point at (θ_3,θ_2). Three more loops are born at θ_2 and never die (see Fig.
<ref>e); using the convention in <cit.>, we assign these features a death coordinate of 0, so there are three persistence points at (θ_2,0). Finally, one more loop appears at θ_1 and does not die (see Fig. <ref>e); this is represented by a point at (θ_1,0).

<cit.> gave an in-depth exposition of how to apply PH to granular networks, and we refer interested readers to this paper for more information. Because studying PH is a general mathematical approach, it can be applied to different variations of force networks and can also be used on networks constructed from different types of experimental data (e.g., digital image data, particle-position data, or particle-interaction data). <cit.> also discussed a set of measures that can be used to compare and contrast the homology of force networks both within a single system (e.g., at two different packing fractions) and across different systems (e.g., if one uses particles of different sizes or shapes), and they explored the robustness of PH computations to noise and numerical errors. In Sec. <ref>, we further discuss applications of methods from algebraic and computational topology to granular materials.

§.§ Some considerations when using network-based methods

Because there are many methods that one can use to analyze granular networks and many quantities that one can compute to measure properties of these networks, it is useful to discuss some relationships, similarities, and distinctions between them. Naturally, the meaning of any given network feature depends on how the network itself is defined, so we focus the present discussion on the most common representation of a granular system as a network. (See Sec. <ref> for discussions of other representations.) In this representation (see Fig. <ref>), nodes correspond to particles and edges correspond to contacts between particles. Edge weights can represent quantities such as normal or tangential forces between particles. In this type of granular network, it is important to be aware of which network quantities explicitly take into account spatial information or physical constraints in a system, which consider only network topology, and which consider only network geometry (i.e., both topology and edge weights, but not other information). Granular materials have a physical nature and are embedded in real space, so such considerations are extremely important. For discussions of how such issues manifest in spatial networks more generally, see <cit.>.

One way to explicitly include spatial or physical information into network analysis is to calculate quantities that are defined from some kind of distance (e.g., a Euclidean distance between nodes), whether directly or through a latent metric space, rather than a hop distance. For example, as discussed in Sec. <ref>, one can define the edge length between two adjacent nodes from the physical distance between them, which allows quantities such as mean shortest path length, efficiency, and some centrality measures to directly incorporate spatial information. However, traditional network features such as degree and clustering coefficient depend only on network connectivity, although their values are influenced by spatial effects. In Sec.
<ref>, we also saw that one can incorporate physical constraints from granular networks into community-detection methods by using a geographical null model, rather than the traditional NG null model, in modularity maximization.

Different computations in network analysis can also probe different spatial, topological, or geometrical scales. For example, measures such as degree, strength, and clustering coefficients are local measures that quantify information about the immediate neighborhood of a node. However, measures such as the mean shortest path length and global efficiency are global in nature, as they probe large-scale network organization. In between these extremes are mesoscale structures. A network-based framework can be very helpful for probing various types of intermediate-scale structures, ranging from very small ones (e.g., motifs, such as small cycles) to larger ones (e.g., communities), and tools such as PH were designed to reveal robust structural features across multiple scales. Crucially, although there are some clear qualitative similarities and differences between various network-analysis tools (and there are some known quantitative relationships between some of them <cit.>), it is a major open issue to achieve a precise understanding of the relationships between different network computations. Moreover, in spatially-embedded systems (as in any situation where there are additional constraints), one can also expect some ordinarily distinct quantities to become more closely related to each other <cit.>. Furthermore, the fact that a granular particle occupies a volume in space (volume exclusion) gives constraints beyond what arises from embeddedness in a low-dimensional space.

§ GRANULAR MATERIALS AS NETWORKS

We now review network-based models and approaches for studying granular materials. Over the past decade, network analysis has provided a novel view of the structure and dynamics of granular systems, insightfully complementing and extending traditional perspectives. See <cit.> for reviews of non-network approaches.

Perhaps the greatest advantages of using network representations and associated tools are their natural ability to (1) capture and quantify the complex and intrinsic heterogeneity that manifests in granular materials (e.g., in the form of force chains), and to (2) systematically and quantitatively investigate how the structure and organization of a granular system changes when subjected to external loads or perturbations (such as compression, shear, or tapping). In particular, network science and related subjects provide a set of tools that help quantify structure (and changes in structure) over a range of scales — including local, direct interactions between neighboring particles; larger, mesoscale collections of particles that can interact and reconfigure via more complicated patterns; and system-wide measurements of material (re)organization. It is thought that local, intermediate, and system-wide scales are all important for regulating emergent, bulk properties of granular systems. Because structure at each of these scales can play a role in processes such as acoustic transmission and heat transfer, it can be difficult to obtain a holistic, multiscale understanding of granular materials.
For example, microscale particle-level approaches may not take into account collective organization that occurs on slightly larger scales, and continuum models and approaches that rely on averaging techniques may be insensitive to interesting and important material inhomogeneities <cit.>. Network representations also provide a flexible medium for modeling different types of granular materials (and other particulate matter). For example, network analysis is useful for both simulation and experimental data of granular materials, and methods from complex systems and network science can help improve understanding of both dense, quasistatically-deforming materials as well as granular flows. In any of these cases, one often seeks to understand how a system evolves over the course of an experiment or simulation. To study such dynamics, one can examine a network representation of a system as a function of a relevant physical quantity that parameterizes the system evolution. For example, for a granular system in which the packing fraction increases in small steps as the material is compressed quasistatically, one can extract a network representation of the system at each packing fraction during the compression process and then study how various features of that network representation change as the packing fraction increases.

Even a particular type of granular system is amenable to multiple types of network representations, which can probe different aspects of the material and how it evolves under externally applied loads. For instance, one can build networks based only on knowledge of the locations of particles (which, in some cases, may be the only information available) or by considering the presence or absence of physical contacts between particles. If one knows additional information about the elements in a system or about their interactions, one can construct more complicated network representations of it. For example, it has long been known that granular materials exhibit highly heterogeneous patterns of force transmission, with a small subset of the particles carrying a majority of the force along force chains <cit.>. Recall from Sec. <ref> that, broadly speaking, a force chain (which is also sometimes called a force network) is a set of contacts that carry a load that is larger than the mean load <cit.>, and the mean orientation of a force chain often encodes the direction of the applied stress <cit.>. We illustrated an example of force chain structure in Fig. <ref>, and we further discuss force-chain organization in Sec. <ref>. Because of the nature of the distribution of force values and the interesting way in which forces are spatially distributed in a material, it is often very useful to consider network representations of granular materials that take into account information about inter-particle forces (see Sec. <ref>) and to use network-based methods that allow one to quantitatively investigate how the structure of a force network changes when one includes only contacts that carry at least some threshold force. See Sec. <ref> and Sec. <ref>.

In our ensuing discussion, we describe several network constructions that have been used to study granular materials, discuss how they have been investigated using many of the concepts and diagnostics introduced in Sec. <ref>, and review how these studies have improved scientific understanding of the underlying, complex physics of granular systems.

§.§ Contact networks

A contact network is perhaps the simplest way to represent a granular system.
Such networks (as well as the term “contact network”) were used to describe granular packings long before explicitly network science-based approaches were employed to study granular materials; see, for example, <cit.>. The structure of a contact network encodes important information about a material's mechanical properties. As its name suggests, a contact network embodies the physical connectivity and contact structure of the particles in a packing (see Fig. <ref>). In graph-theoretic terms, each particle in the packing is represented as a node, and an edge exists between any two particles that are in physical contact with one another. Note that it may not always be possible to experimentally determine which particles are in physical contact, and one may need to approximate contacts between particles using information about particle positions, radii, and inter-particle distances. (See Sec. <ref> for details.) By definition (and however it is constructed), a contact network is unweighted and undirected, and it can thus be described with an unweighted and undirected adjacency matrix (see Sec. <ref>):

A_ij = 1 , if particles i and j are in contact ,
A_ij = 0 , otherwise .

Because the organization of a contact network depends on and is constrained by the radii of the particles and their locations in Euclidean space, a contact network is a spatially-embedded graph <cit.>. In Sec. <ref>, we will see that this embedding in physical space has important consequences for the extraction of force-chain structures via community-detection techniques (see Sec. <ref>). In Fig. <ref>, we show an example of a contact network generated from a discrete-element-method (DEM) simulation (see Sec. <ref>) of biaxial compression <cit.>. The granular system in this figure is polydisperse, as it has more than two types of particles. (In this case, the particles have different sizes.) If all particles are identical in a granular system, it is called monodisperse; if there are two types of particles in a system, it is called bidisperse. In practice, although the presence or absence of a contact is definitive only in computer simulations, one can set reasonable thresholds and perform similar measurements in experiments <cit.> (see Sec. <ref>).

It is also important to note that packing geometry and the resulting contact network do not completely define a granular system on their own. In particular, one can associate a given geometrical arrangement of particles with several configurations of inter-particle forces that satisfy force and torque balance constraints and the boundary conditions of a system <cit.>. This is a crucial concept to keep in mind when conducting investigations based only on contact networks, and it also motivates the inclusion of contact forces to construct more complete network representations of granular systems (see Sec. <ref>).

In the remainder of this subsection, we review some of the network-based approaches for characterizing contact networks of granular materials and how these approaches have been used to help understand the physical behavior of granular matter. We primarily label the following subsubsections according to the type of employed methodology. However, we also include some subsubsections about specific applications to certain systems.

§.§.§ Coordination number and node degree.

One can study a contact network in several ways to investigate different features of a granular system.
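A sketch of approximating a contact network from particle positions and radii, as described above (the positions, radii, and tolerance here are hypothetical placeholders; real experiments require carefully chosen thresholds): two particles are taken to be in contact when their center-to-center distance is within a tolerance of the sum of their radii.

import numpy as np

rng = np.random.default_rng(1)
N, d = 40, 2
pos = rng.random((N, d))                      # hypothetical particle centers
rad = np.full(N, 0.08)                        # monodisperse radii
tol = 1e-2                                    # contact tolerance

diff = pos[:, None, :] - pos[None, :, :]
dist = np.linalg.norm(diff, axis=-1)          # pairwise center-to-center distances
A = (dist <= rad[:, None] + rad[None, :] + tol).astype(int)
np.fill_diagonal(A, 0)                        # no self-contacts

Z = A.sum(axis=1).mean()                      # mean coordination number <k>
print(f"Z = {Z:.2f}")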
We begin our discussion by associating the mean node degree of a contact network with the familiar and well-studied coordination number (i.e., contact number) Z. Although early investigations of granular materials did not consciously make this connection, the mean degree and coordination number are synonymous quantities. The contact degree k_i of particle i is the number of particles with which i is directly in contact, and one can calculate it easily from an adjacency matrix [see Eq. (<ref>)]. A contact network is undirected, so its adjacency matrix A is symmetric, and its row sum and column sum each yield a vector of node degrees. The mean degree ⟨ k ⟩ of a contact network [Eq. (<ref>)] is then what is usually known as the mean coordination number (i.e., contact number) Z, and it gives the mean number of contacts per particle. As we noted previously (see Sec. <ref>), Z is an important quantity in granular systems because of its connection with mechanical stability and rigidity — which, loosely speaking, is the ability of a system to withstand deformations — in these systems and its characterization of the jamming transition <cit.> and other mechanical properties. In particular, the condition for mechanical stability — i.e., the condition to have all translational and rotational degrees of freedom constrained so that there is force and torque balance — in a packing of frictionless spheres in d dimensions <cit.> is

Z ≥ 2d ≡ Z_iso .

The isostatic number Z_iso indicates the condition for isostaticity, which is defined as the minimum contact number that is needed for mechanical stability. One can use the coordination number (which is often tuned by changing the packing fraction ϕ) as an order parameter to describe the jamming transition for frictionless spheres in two and three dimensions <cit.>. Specifically, there is a critical packing fraction ϕ_c such that below ϕ_c, the contact number for these systems is Z = 0 (i.e., there are no load-bearing contacts), and at the critical packing fraction ϕ_c, the contact number jumps to the critical value Z_c = Z_iso = 2d. One can also generalize the use of the coordination number in order to examine mechanical stability and jamming in granular systems of frictional spheres. In these systems, the condition for stability is

Z ≥ Z^m_iso , Z^m_iso ≡ (d+1) + 2N_m/d ,

where N_m is the mean number of contacts that have tangential forces f_t equal to the so-called Coulomb threshold — i.e., N_m is the mean number of contacts with f_t = μ f_n, where μ is the coefficient of friction and f_n is the normal force <cit.> — and Z^m_iso again designates the condition for isostaticity. Results from experimental systems have demonstrated that contact number also characterizes the jamming transition in frictional, photoelastic disks <cit.>.

The coordination number has been studied for several years in the context of granular materials and jamming, and it is fruitful to connect it directly with ideas from network science. Several recent studies have formalized the notion of a contact network, and they deliberately modeled granular systems as such networks to take advantage of tools like those described in Sec. <ref>.
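A minimal, self-contained check of the two stability conditions above (the values of Z and N_m here are hypothetical; in practice one measures them from a packing, e.g., with the contact-network sketch earlier in this subsection):

d = 2                                # spatial dimension
Z = 4.1                              # hypothetical measured mean coordination number
N_m = 0.3                            # hypothetical mean number of fully-mobilized contacts
Z_iso = 2 * d                        # frictionless condition: Z >= 2d
Z_iso_m = (d + 1) + 2 * N_m / d      # frictional condition: Z >= (d+1) + 2 N_m / d
print(Z >= Z_iso, Z >= Z_iso_m)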
Such investigations of contact networks allow one to go beyond the coordination number and further probe the rich behavior and properties of granular materials — including stability and the jamming transition <cit.>, force chains <cit.>, and acoustic propagation <cit.>. Perhaps the simplest expansion of investigations into the role of the coordination number is the study of the degree distribution P(k) of the contact network of a packing. Calculating degree distributions can provide insights into possible generative mechanisms of a graph <cit.>, although one has to be very careful to avoid over-interpreting the results of such calculations <cit.>. In granular physics, it has been observed that the degree distribution of a contact network can track changes in network topology past the jamming transition in isotropically compressed simulations of a 2D granular system <cit.>. Specifically, the peak of P(k) shifts from a lower value of k to a higher value of k near the transition. Moreover, changes in the mean degree ⟨ k ⟩ and its standard deviation can anticipate the onset of different stages of deformation in DEM simulations (i.e., molecular-dynamics simulations) of granular systems under various biaxial compression tests <cit.>.

§.§.§ Investigating rigidity of a granular system using a contact network.

An important area of research in granular materials revolves around attempts to (1) understand how different types of systems respond when perturbed, and (2) determine what features of a system improve its integrity under such perturbations. As we noted in Sec. <ref>, it is well-known that the coordination number (and hence the node degree) is a key quantity for determining mechanical stability and understanding jamming in granular materials. However, contact networks obviously have many other structural features, and examining them can be very helpful for providing a more complete picture of these systems. To the best of our knowledge, the stability of granular materials was first studied from a graph-theoretic standpoint in the context of structural rigidity <cit.>, and such ideas have since been applied to amorphous solids more generally <cit.>. In structural rigidity theory, whose study is thought to originate with James Clerk Maxwell <cit.>, rods of fixed length are connected to one another by hinges, and one considers the conditions under which the associated structural graphs are able to resist deformations and support applied loads (see Fig. <ref>). A network is said to be minimally rigid (or isostatic) when it has exactly the number of rods needed for rigidity. This occurs when the number of constraints is equal to the number of degrees of freedom in the system (i.e., when Laman's minimal-rigidity criterion is satisfied). The network is flexible if there are too few rods, and it is overconstrained (i.e., self-stressed) if there are more rods than needed for minimal rigidity. Triangles are the smallest isostatic structures in two dimensions <cit.>; there are no allowed motions of the structure that preserve the lengths and connectivity of the rods, so triangles (i.e., 3-cycles) do not continuously deform due to an applied force. In comparison, a 4-cycle is structurally flexible and can continuously deform from one configuration to another while preserving the lengths and connectivity of the rods (see Fig. <ref>). Extending a traditional network of rods and hinges, concepts from structural rigidity yield interesting insights into contact networks of particulate matter.
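The constraint-counting logic behind these classifications can be sketched in a few lines. The snippet below applies the global 2D count (a planar rod-and-hinge network with N hinges has 2N degrees of freedom, 3 of which are trivial rigid-body motions) to label a network as flexible, minimally rigid, or overconstrained. Note that this global count is only a necessary part of Laman's criterion, which also requires the count to hold for every subgraph; a full check (e.g., via the pebble game mentioned below) is beyond this sketch, and the function name is ours.

```python
def maxwell_count_2d(num_nodes, num_edges):
    """Classify a 2D rod-and-hinge network by global constraint counting.

    A 2D network of N hinges has 2N degrees of freedom, 3 of which are
    rigid-body motions, so minimal (isostatic) rigidity requires
    E = 2N - 3 rods. This global count is necessary but not sufficient:
    Laman's criterion must also hold on every subgraph.
    """
    required = 2 * num_nodes - 3
    if num_edges < required:
        return "flexible (underconstrained)"
    if num_edges == required:
        return "minimally rigid (isostatic), if Laman's condition also holds"
    return "overconstrained (self-stressed)"
```

For example, a triangle (N = 3, E = 3) satisfies E = 2N - 3 and is isostatic, whereas a 4-cycle (N = 4, E = 4 < 5) is flexible, consistent with the discussion above.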
See <cit.> for a discussion of some of the earliest applications of such ideas to disordered systems and granular materials. <cit.> used structural rigidity theory to derive conditions for the isostaticity of a granular packing, and he tied the tendency of random particle packings to be isostatic to the origin of instabilities in granular piles. Later, similar concepts were used to show that in granular networks, cycles with an even number of edges allow contacting grains to roll without slipping when subject to shear; however, these relative rotations are “frustrated” in cycles with an odd number of edges, so such cycles can act as stabilizing structures in a network <cit.>. Several later studies (such as <cit.>) have confirmed that contact loops are often stabilizing mesoscale features in a contact network of a granular material. We specifically consider the role of cycles in granular contact networks in Sec. <ref>. Another type of network approach for understanding rigidity in granular systems is rigidity percolation <cit.> (see Sec. <ref>). <cit.> conducted an early investigation of an idealized version of bond percolation in a granular context. It is now known that hallmarks of this bond-percolation transition occur below isostaticity: <cit.> identified that a percolating (i.e., system-spanning) cluster of non-load-bearing contacts forms at a packing density below the jamming point. In modern contexts, the rigidity-percolation approach can be used to determine whether a network is both percolating and rigid (see Sec. <ref>). Note that a rigid granular network is also percolating, but a percolating network need not be rigid. Rigidity percolation relies on tabulating local constraints via a pebble game <cit.>, which reveals connected, rigid regions (sometimes called “clusters”) in a network. In a series of papers <cit.> on simulated packings, Schwarz and coworkers went beyond Laman's minimal-rigidity criterion to investigate local versus global rigidity in a network, the size distribution of rigid clusters, the important role of spatial correlations, and the necessity of force balance. Building on the above work, <cit.> recently utilized a rigidity-percolation approach to identify floppy versus rigid regions in slowly-sheared granular materials and to characterize the nature of the phase transition from an underconstrained system to a rigid network. See also the recent paper <cit.>.

§.§.§ Exploring the role of cycles.

We now consider the role of circuits (i.e., the conventional network notion of cycles, which we discussed in Sec. <ref>) in granular contact networks. Cycles in a contact network can play crucial stabilizing roles in several situations. Specifically, as we will discuss in detail in this section, simulations (and some experiments) suggest that (1) odd cycles (especially 3-cycles) can provide stability to granular materials by frustrating rotation among grains and by providing lateral support to surrounding particles, and that (2) a contact network loses these stabilizing structures as the corresponding granular system approaches failure. Noting that a 3-cycle is the smallest arrangement of particles that can support (via force balance) a variety of 2D perturbations to a compressive load without deforming the contact structure, <cit.> studied the effects of friction and tilting on the evolution of contact-loop organization in a granular bed.
In their simulations, they implemented tilting by incrementally increasing the angle of a gravity vector with respect to the vertical direction, while preserving the orientation of the granular bed and maintaining quasistatic conditions. In untilted granular packings, they observed that lowering inter-particle friction yields networks with a higher density of 3-cycles and 4-cycles, where they defined the “density” of an l-cycle to be the number of l-cycles divided by the total number of particles. By examining the contact network as a function of tilting angle, <cit.> also observed that the density of 4-cycles increases prior to failure — likely due to the fracture of stabilizing 3-cycles — and that this trend was distinguishable from changes in the coordination number alone. Cycles have also been studied in the context of DEM simulations of dense, 2D granular assemblies subject to quasistatic, biaxial compression tests <cit.>. In many of these studies, the setup consists of a collection of disks in 2D that are compressed slowly at a constant strain rate in the vertical direction, while being allowed to expand under constant confining pressure in the horizontal direction <cit.>. In another variation of boundary-driven biaxial compression, a sample can be compressed with a constant volume but a varying confining pressure <cit.>. Before describing specifics of the network analysis for these systems, it is important to note that for the previously described conditions, the axial strain increases in small increments (i.e., “steps”) as compression proceeds, and one can extract the inter-particle contacts and forces at each strain value during loading to examine the evolution of a system as a function of strain. Additionally, these systems undergo a change in behavior from a solid-like state to a liquid-like state, and they are characterized by different regimes of deformation as a function of increasing axial strain <cit.>. In particular, the granular material first undergoes a period of strain hardening, followed by strain softening, after which it enters a critical state. In the strain-hardening regime, the system is stable, and the shear stress increases monotonically with axial strain up to a peak value. After the peak shear stress, strain softening sets in; this state is marked by a series of steep drops in the shear stress that indicate reduced load-carrying capacity. Finally, in the critical state, a persistent shear band has fully formed, and the shear stress fluctuates around a steady-state value. The shear band is a region of localized deformation and gives one signature of material failure <cit.>. Inside the shear band, force chains can both form and buckle <cit.>. One can also associate increases in the energy-dissipation rate of the system with particle rearrangements (such as those that occur during force-chain buckling) and loss of stability in the material.

Examining the temporal evolution of cycles in an evolving granular contact network can reveal important information about changes that occur in a material during deformation. Using DEM simulations (see the previous paragraph), <cit.> computed the total number of cycles of different lengths in a minimal cycle basis (see Sec. <ref>) of a contact network at each strain state during loading, and they observed that there are many more 3-cycles and 4-cycles than longer cycles in the initial, solid-like state of the system. However, as axial strain increases and one approaches the maximum shear stress, the total number of 3-cycles falls off steeply.
(The same is true for 4-cycles, though the decrease is less dramatic.) Additionally, during axial-strain steps (i.e., axial-strain “intervals”) corresponding to drops in shear stress, <cit.> observed large increases in the number of 3-cycles and 4-cycles that open up to become longer cycles. In Fig. <ref>a, we show an example of the evolution of cycle organization with increasing axial strain for a subset of particles from a DEM simulation of a granular material under biaxial compression, carried out by <cit.>. The authors observed that in this system, both the global clustering coefficient C [Eq. (<ref>)] and the mean subgraph centrality Y decrease with increasing axial strain, drop sharply at peak shear stress, and then level out (see Fig. <ref>b,c). Recalling that C is a measure of triangle density in a graph and that subgraph centrality measures the participation of nodes in closed walks (with more weight given to shorter walks), these results also imply that the loss of small cycles co-occurs with the deformation and failure of a system due to increasing load. <cit.> also computed the network bipartivity R <cit.> of the contact network to quantify the contribution to the mean subgraph centrality Y from closed walks of even length [see Eq. (<ref>)]. They observed that R increases with increasing axial strain, revealing that closed walks of even length become more prevalent during loading (see Fig. <ref>c). The authors suggested that this trend may be due to a decrease in the prevalence of 3-cycles (which are stabilizing, as discussed in Sec. <ref> and elsewhere). <cit.> also examined the stability of cycles of various lengths in both DEM simulations and experimental data, and they observed that, during loading, 3-cycles tend to be more stable (as quantified by a measure of stability based on a structural-mechanics framework <cit.>) than cycles of other lengths in a minimal cycle basis of the network. Minimal cycle bases and the easier-to-compute subgraph centrality have also been used to examine fluctuations in kinetic energy in simulations of deforming sand. <cit.> computed a minimal cycle basis and then constructed cycle-participation vectors (see Sec. <ref>) from a contact network after each strain step (i.e., at each strain state) during loading. They observed that temporal changes in the cycle-participation vectors of the particles between consecutive strain steps are correlated positively with temporal changes in kinetic energy over those steps. They also observed that large values in the temporal changes of particle cycle-participation vectors and particle subgraph centrality occur in the shear-band region. <cit.> also studied a minimal cycle basis and corresponding cycle-participation vectors to examine structural transitions in a 3D experimental granular system of hydrogel spheres under uniaxial compression. As pointed out in <cit.>, developing quantitative predictors that are based on topological information alone is extremely important for furthering understanding of how failure and rearrangements occur in systems in which energy or force measurements are not possible. Examining cycles in contact networks can also shed light on the behavior of force chains. The stability, load-bearing capacity, and buckling of force chains depend on neighboring particles (so-called spectator grains) to provide lateral support <cit.>. Because 3-cycles appear to be stabilizing features, it is interesting to consider the co-evolution of force chains and 3-cycles in a contact network.
Such an investigation requires a precise definition of what constitutes a force chain, so that it is possible to (1) extract these structures from a given packing of particles and (2) characterize and quantify force-chain properties. Several definitions of force chains have been proposed; see, e.g., <cit.>. The studies that we describe in the next three paragraphs used a notion of “force chains” from <cit.>, in which force-chain particles are identified based on their particle-load vectors (where each particle is associated with a single particle-load vector that indicates the main direction of force transmission). More specifically, a single chain is a set of three or more particles for which the magnitude of each of their particle-load vectors is larger than the mean particle-load-vector magnitude over all particles, and for which the directions of the particle-load vectors are, within some tolerance, aligned with one another (i.e., they are “quasilinear”). We note that an important point for future work is to conduct network-based studies of force-chain structure for different definitions of force chains, and to investigate whether there are qualitative differences in their associated network properties. Using DEM simulations of a densely packed system of polydisperse disks under biaxial loading — i.e., compressed quasistatically at a constant strain rate in the vertical direction, while allowed to expand under constant confining pressure in the horizontal direction — <cit.> quantified the co-evolution of force chains and 3-cycles in several ways. For example, they computed a minimal cycle basis (see Sec. <ref>) of a contact network and then examined (1) the ratio of 3-cycles to the total number of cycles in which particles from a force chain participate and (2) the force chain's 3-cycle concentration, which is defined as the ratio of the number of 3-cycles involving force-chain particles to the total number of particles in the force chain. When averaged over all force chains, the above two measures decrease rapidly with increased loading. Additionally, <cit.> observed that force chains that do not fail by buckling (see <cit.> for how “buckling” was defined) have a larger ratio of 3-cycle participation to total cycle participation than force chains that do buckle. <cit.> observed, in both DEM simulations of biaxial loading (see above) and 2D photoelastic-disk experiments under pure shear, that a particular measure (developed by <cit.>) of the structural stability of force chains is correlated positively with the mean of the local clustering coefficient [Eq. (<ref>)] over force-chain particles. Their results also suggest that 3-cycles are more stable structures than cycles of longer length during loading and that force chains with larger 3-cycle participation tend to be more structurally stable. These observations suggest that cycles — and especially 3-cycles — in contact networks are stabilizing structures that can provide lateral support to force chains. It would be interesting to study these ideas further, and to relate them to structural rigidity theory (see Fig. <ref> and Sec. <ref>), especially in light of the difference between 3-cycles (which are rigid) and deformable cycles (e.g., 4-cycles).
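Several of the cycle- and walk-based diagnostics discussed above can be computed directly from powers and spectra of the adjacency matrix. The sketch below counts 3-cycles and 4-cycles combinatorially (note that this counts all such circuits, not the cycles of a minimal cycle basis used in several of the studies above, so it is a proxy rather than a reproduction of their method), and it computes the global clustering coefficient C, the mean subgraph centrality Y, and the network bipartivity R; the function and variable names are ours.

```python
import numpy as np

def cycle_and_walk_diagnostics(A):
    """Cycle- and walk-based diagnostics of a symmetric adjacency matrix A.

    Returns the number of 3-cycles and 4-cycles (all circuits, not a
    minimal cycle basis), the global clustering coefficient C, the mean
    subgraph centrality Y = (1/N) tr(exp(A)), and the bipartivity
    R = sum_j cosh(lam_j) / sum_j exp(lam_j), which measures the
    contribution of even-length closed walks. Assumes at least one
    particle has degree >= 2 (so that C is well defined).
    """
    k = A.sum(axis=1)
    m = A.sum() / 2.0                          # number of contacts
    tr3 = np.trace(A @ A @ A)                  # 6 closed walks per triangle
    tr4 = np.trace(np.linalg.matrix_power(A, 4))
    n3 = tr3 / 6.0
    n4 = (tr4 - 2.0 * (k ** 2).sum() + 2.0 * m) / 8.0
    C = tr3 / (k * (k - 1)).sum()              # transitivity (triangle density)
    lam = np.linalg.eigvalsh(A)                # spectrum of the symmetric matrix A
    Y = np.exp(lam).mean()                     # mean subgraph centrality
    R = np.cosh(lam).sum() / np.exp(lam).sum() # even-walk contribution
    return n3, n4, C, Y, R
```

The 4-cycle count subtracts the closed 4-walks that merely retrace edges (2 Σ_i k_i² − 2m of them) before dividing by 8, the number of closed 4-walks that each genuine 4-cycle contributes.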
DEM simulations of 3D, ellipsoidal-particle systems subject to triaxial compression also suggest that 3-cycles are important features in granular contact networks <cit.>. Similar to the aforementioned results from simulations of 2D systems with disk-shaped particles, both the number of 3-cycles in a minimal cycle basis of the contact networks and the global clustering coefficient [Eq. (<ref>)] initially decrease and then saturate with increasing load, and particles in force chains have a larger number of 3-cycles per particle than particles that are not in force chains. <cit.> also observed that the 3-cycles that survive throughout loading tend to lie outside the strain-localization region (where force chains buckle). The dearth of 3-cycles in certain regions in a material may thus be a signature of strain-localization zones. Another paper to note is <cit.>, which examined and compared the temporal evolution of cycles (and several other contact-network diagnostics) in a set of DEM simulations using a variety of different material properties and boundary conditions. In another interesting study, <cit.> examined the phenomenon of aging <cit.> — a process in which the shear strength and stiffness of a granular material increase with time — in collections of photoelastic disks subject to multiple cycles of pure shear under constant volume. Because aging is a slow process, it can be difficult both to uncover meaningful temporal changes in dynamics and to characterize important features in packing structure that accompany aging. To overcome these challenges, <cit.> first analyzed the time series of the stress ratio (using techniques from dynamical-systems theory) to uncover distinct temporal changes in the dynamics of the system. (See <cit.> for details.) After each small, quasistatic strain step, they also extracted the contact network of the packing at that time to relate aging to changes in topological features of the network structure. As one approaches the shear-jammed regime during prolonged cyclic shear, they observed that, on average, force chains are associated with more 3-cycles and 4-cycles from the minimal cycle basis. We have just discussed many papers that concern transitions in granular matter from a solid-like regime to a liquid-like regime. One can also use changes in the loop structure of a contact network to describe the opposite transition, in which a granular material goes from an underconstrained, flowing state to a solid-like state during the process known as jamming (see Sec. <ref>). Studying 2D frictional simulations of isotropically compressed granular packings, <cit.> examined a granular contact network as a function of packing fraction. They observed that the number of cycles (which were called polygons in <cit.>) in the contact network grows suddenly when the packing fraction approaches the critical value ϕ_c that marks the transition to a rigid state (see Fig. <ref>). They also observed that 3-cycles appear to be special: they continue to grow in number above the jamming point, whereas longer cycles slowly decrease in number after jamming. Although they observed a nonlinear relationship near the jamming point between Z (the contact number, which is the usual order parameter for the jamming transition) and the number of 3-cycles <cit.>, these quantities appear to depend linearly on each other after the transition.
These results suggest that one can use the evolution of contact loops to understand the transition to a rigid state and to characterize subsequent changes in the system.

Application to tapped granular materials. Properties of contact networks have also been used to study tapped granular materials, in which a packing of grains is subjected to external pulses of excitation. In most studies of tapped granular materials, the packing and pulses are both vertical. The intensity Γ of these mechanical perturbations (so-called “taps”) is usually quantified as a dimensionless ratio of accelerations, such as the ratio of the peak acceleration of the excitation to the acceleration of gravity <cit.>. Tapped granular materials are interesting because the packing fraction ϕ is not a monotonic function of the tapping intensity Γ <cit.>. It reaches a minimum value ϕ_min at an intensity of Γ_min, and it then increases as the tap intensity increases (see Fig. <ref>a). Consequently, one can achieve steady states with the same packing fraction by using different tap intensities (i.e., both a “low” tap intensity, which is smaller than Γ_min, and a “high” tap intensity, which is larger than Γ_min). These steady states are not equivalent to each other, as they have different force-moment tensors <cit.>, for example. An interesting question is thus the following: What features of a granular packing distinguish between states at the same packing fraction that are reached by using different tap intensities? Recent work has suggested that properties of contact networks — especially cycles (which, in this case, are particle contact loops) — can distinguish between steady-state configurations that are at the same packing fraction but that are generated from different tap intensities in simulated 2D granular packings subjected to tapping <cit.> (see Fig. <ref>b). For example, as Γ is increased in the regime Γ < Γ_min, the number of 3-cycles (i.e., triangles) and the number of 4-cycles (i.e., squares) both decrease. As Γ is increased in the regime Γ > Γ_min, the opposite trend occurs, so the numbers of 3-cycles and 4-cycles increase. This makes it possible to differentiate configurations at the same ϕ obtained from low and high tap intensities. (See Fig. <ref>e,f for a plot of the number of triangles versus Γ and ϕ.) However, geometrical measures like the pair-correlation function, distributions of Voronoi tessellation areas, or bond-orientational order parameters do not seem to be as sensitive to differences in these two different states of the system (see Fig. <ref>c,d), perhaps because they quantify only local proximity rather than directly examining contacts. (See <cit.> and references therein for details of these descriptors.) These results suggest that topological features (e.g., mesoscale features) of a contact network can capture valuable information about the organization of granular packings.

§.§.§ Other subgraphs in contact networks.

When studying contact networks, it can also be helpful to explore network motifs other than cycles. Recall from Sec. <ref> that motifs are subgraphs that appear often in a network (e.g., in comparison to a null model) and are thus often construed as being related to system function <cit.>.
Network motifs, which traditionally refer to small subgraphs, are a type of mesoscale feature, and it can be insightful to examine how their prevalences change in a granular material as it deforms. One system in which motifs and their dynamics have been studied consists of frictional, bidisperse, photoelastic disks subject to quasistatic cyclic shear <cit.>. After each small strain increment (i.e., strain step) in a shear cycle, the authors considered the contact network of the granular packing. For each particle i in the contact network, they extracted the subgraph of particles (nodes) and contacts (edges) formed by the central particle i and particle i's contacting neighbors. This process results in a set of N subgraphs (which, borrowing terminology from <cit.>, we call conformation subgraphs), where N is the number of particles in the network. (In some degenerate cases, e.g., when a network component consists of a clique, this set includes duplicated subgraphs, even when the nodes of a network are labeled.) To examine packing rearrangements as a system is sheared, <cit.> represented each distinct conformation subgraph present at any time during loading as one “state” in a Markov transition matrix, and they studied transitions between the conformation subgraphs as a discrete-time Markov process. More specifically, each element in the n_c × n_c transition matrix (where n_c is the total number of unique conformation subgraphs and hence the number of unique states) captured the fraction of times that each conformation subgraph transformed into each other conformation subgraph (including itself) after four quasistatic steps of the shearing process. <cit.> reported that force-chain particles typically occur in network regions with high mean degree, high mean local clustering coefficients, and many 3-cycles. (Note that this study, as well as the others that we describe in the present and the following paragraph, defines force chains as in <cit.>.) Furthermore, when considering the conformation subgraphs of particles in force chains that fail by buckling (see <cit.> for details on the definition of “buckling”), the most likely transformations to occur tend either to maintain the topology of those conformation subgraphs or to involve transitions from conformation subgraphs in which the central particle has a larger degree or is part of more 3-cycles to conformation subgraphs in which the degree of the central particle is smaller or in which it participates in fewer 3-cycles. <cit.> also used force information to compute a measure of structural stability (based on a structural-mechanics framework <cit.> and summarized in a single number) for each conformation subgraph. They then split the full range of the stability values into several smaller “stability intervals” (i.e., small ranges of contiguous structural-stability values) and modeled transitions between stability intervals as a Markov chain. They examined the number of conformation subgraphs that occupy each stability interval and observed pronounced peaks in some intervals that persist during loading. They also reported that conformation subgraphs whose central particles belong to force chains tend to be more stable and that conformation subgraphs whose central particles are part of buckling force chains have a higher probability of transitioning from high-stability states to low-stability states than vice versa. (For details, see Fig. 7 of <cit.> and the corresponding discussions.)
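The Markov-chain construction used in these studies can be sketched generically. Suppose that, for each particle, one has a sequence of integer labels encoding which state (e.g., which unique conformation subgraph or stability interval) the particle occupies at successive sampling steps of the loading protocol; one then tallies observed transitions and row-normalizes. The sampling interval (four quasistatic steps in the study above), the state encoding, and all names below are illustrative assumptions of this sketch.

```python
import numpy as np

def estimate_transition_matrix(state_sequences, n_states):
    """Row-stochastic Markov transition matrix estimated from observed
    state sequences (one sequence of integer state labels per particle,
    sampled at a fixed interval of the loading protocol)."""
    T = np.zeros((n_states, n_states))
    for seq in state_sequences:
        for a, b in zip(seq[:-1], seq[1:]):    # consecutive sampled states
            T[a, b] += 1.0
    rows = T.sum(axis=1, keepdims=True)
    # Normalize each row that has at least one observed transition.
    return np.divide(T, rows, out=np.zeros_like(T), where=rows > 0)
```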
<cit.> used similar methods to study self-assembly in an almost-frictionless, 3D system of hydrogel spheres under quasistatic, cyclic uniaxial compression (see Fig. <ref>a–c). After every compression step, they constructed the contact network for the system and examined two types of subgraphs for each particle: (1) conformation subgraphs, which (as discussed earlier) consist of a single central particle i and that particle's contacts; and (2) the cycle-participation vector of each particle (see Sec. <ref>). <cit.> determined the set of all unique conformation subgraphs that exist during the above compression process. They then used each of those conformation subgraphs as one state in a transition matrix, the elements of which give the fraction of times (across the whole experiment) that a particle in one state transitions to any other state in consecutive compression steps. To focus on the presence or absence of a particle in an l-cycle (using cycle lengths up to l = 10), they binarized each element of the cycle-participation vectors. (The new vectors thus indicate, for each particle, whether it is part of at least one l-cycle.) They then constructed a transition matrix in which each state is a unique binarized cycle-participation vector that occurs during the experiment. The two transition matrices capture useful information about the most likely transformations that occur between different conformation subgraphs and cycle-participation vectors as one compresses or decompresses the granular system. For both types of mesoscale structures, <cit.> used their transition matrices to extract almost-invariant sets, which indicate sets of conformation subgraphs or cycle-participation vectors (i.e., states) that tend to transition among themselves more than to states in another almost-invariant set. (See <cit.> for details.) In Fig. <ref>d, we show the most common conformation subgraphs in each almost-invariant set of the conformation subgraphs. The conformation subgraphs formed by force-chain particles belong mostly to Set 3 (see Fig. <ref>d), which consists of densely-connected conformation subgraphs in which there are many contacts between particles. To characterize structural changes that occur in a packing as it moves towards or away from a jammed configuration, <cit.> tracked the number of conformation subgraphs (and cycle-participation vectors) in each almost-invariant set across time. In Fig. <ref>e, we show the temporal evolution of the numbers of elements in the almost-invariant sets of the conformation subgraphs. <cit.> also proposed transition pathways (see Fig. <ref>f) that may be useful for thermo-micro-mechanical constitutive-modeling efforts <cit.>. (A transition pathway consists of a sequence of conformation subgraphs in different almost-invariant sets, and the transitions between them.) Another way to study various types of subgraphs in granular materials is through the classification of superfamilies <cit.> (see Sec. <ref>). A recent investigation by <cit.> considered superfamilies that result from examining 4-particle subgraphs (see Fig. <ref>a) in a variety of different granular systems, including experimental packings of sand and photoelastic disks, and DEM simulations for different types of loading and in different dimensions. In their study, the authors defined a superfamily as a set of networks in which the prevalences of the different 4-node subgraphs have the same rank-ordering.
(They did not consider whether a subgraph was a motif in the sense of occurring more frequently than in a random-graph null model.) Despite the diversity of system types, they observed several trends in the transitions between superfamilies that occur as a system moves from a pre-failure regime to a failure regime. The most important change in the superfamilies appears to be a switch in the relative prevalence of 4-edge motifs in which 3 edges are arranged as a triangle versus acyclic 3-edge motifs (see Fig. <ref>). This observation highlights the important role that small mesoscale structures can play as building blocks in granular systems. It also suggests that examining the prevalence and temporal evolution of such motifs can (1) help characterize the macroscopic states of a granular system and (2) help quantify what structural changes occur as a system transitions between different states. Notably, although calculating the prevalence of cycles and small motifs can be useful for gaining insights into contact-network structure, it is also important to employ other types of network analysis that examine structure on larger scales. For example, in simulations of 2D packings of disks under isotropic compression, <cit.> observed that the mean shortest-path length [Eq. (<ref>)] of a contact network reflects changes in the organization of a packing as one approaches the jamming point, as well as changes that occur after the jamming transition takes place. The path length appears to reach a maximum value at a packing fraction below ϕ_c. With further increases in ϕ below ϕ_c, the path length then decreases rapidly, likely due to the formation of new contacts that shorten the distance between particles as the system nears the jamming point. After the jamming transition, the path length decreases further. Before moving on, we note that because it can be difficult to measure force information accurately in many experimental granular systems, continuing to develop relevant measures for studying contact topology (i.e., without incorporating weights) remains an important area of investigation.

§.§ Force-weighted networks

Although studying only a contact network can be rather informative (see Sec. <ref>), it is important to incorporate more information into network representations to more fully capture the rich behavior of granular materials. Many of the approaches for quantifying unweighted networks can be generalized to weighted networks (see Sec. <ref>), although significant complications often arise (e.g., because there are typically numerous choices for precisely how to generalize them). From both physics and network-science perspectives, it is sensible to construct weighted networks that incorporate information about the forces between particles. This can shed considerable light on phenomena that have eluded understanding based on studying only unweighted contact networks. One important physical motivation for incorporating information about inter-particle forces is that photoelastic-disk experiments and numerical simulations have both highlighted that, particularly just above isostaticity (see the bottom of Sec. <ref>), loads placed on granular systems are not shared evenly across all particles. Instead, forces are carried primarily by a backbone of force chains.
It has often been claimed that the statistical distribution of the forces is approximately exponential <cit.>, but in fact it depends on whether forces are recorded at a granular system's boundary or in its bulk <cit.>, as well as on the loading history <cit.>. Illuminating how force-chain structures arise provides crucial information for understanding how one can control the elastic modulus and mechanical stability <cit.> and acoustic transmission <cit.> in granular materials. However, despite the ability of humans to see force chains easily in photoelastic images, it is difficult to characterize quantitatively what is or is not a force chain, and it can also be difficult to quantify how force chains evolve under compression or shear. Part of the challenge lies in the fact that force chains are spatially anisotropic, can exhibit long-range spatial correlations <cit.>, and can have complex temporal fluctuations <cit.>. Consequently, understanding emergent phenomena in granular systems is typically difficult using continuum theories or approaches based only on local structure. On the other hand, a network-theoretic perspective provides a fruitful way to explore interesting material properties and organization that arise from inter-particle contact forces in granular materials. Importantly, in addition to data from simulations, multiple techniques now exist for measuring inter-particle forces between grains in experiments; these include photoelasticity <cit.>, x-ray diffraction measurements of microscopic <cit.> or macroscopic <cit.> deformations, and fluorescence with light sheets <cit.>. As we will see, incorporating information about inter-particle forces into network-based investigations has yielded fascinating insights into the organization and collective structure in granular packings (and other particulate materials) for both numerically-simulated and experimental systems. The most common method for constructing a network that captures the structure of forces in a granular system is to let a node represent a particle and then weight the edge between two particles that are in contact according to the value of the force between them. One can describe such a network with a weighted adjacency matrix (see Sec. <ref>) W with elements

W_ij = \begin{cases} f_ij, & \text{if particles } i \text{ and } j \text{ are in contact} , \\ 0, & \text{otherwise} , \end{cases}

where f_ij is the inter-particle force between particles i and j. Such a force network also encodes the information in the associated contact network, and one can recover a contact network from a force-weighted network by setting all non-zero weights in the latter to 1. Although most work has determined edge weights using the normal component of the inter-particle force, one could alternatively weight the edges by inter-particle tangential forces. With the advent of high-performance computational capabilities, one can determine inter-particle forces from DEM simulations <cit.> of hundreds to millions of particles. In experiments, it is possible to determine inter-particle forces using photoelastic disks (in 2D) <cit.> or x-ray tomography (in 3D) <cit.>, although these techniques are typically limited to systems of hundreds to thousands of particles. We now review network-based approaches for investigating force-weighted networks constructed from granular materials, and we discuss the resulting insights that these approaches have provided into the physical behavior of such systems.
We label most of the following subsubsections according to the type of employed methodology, although we also include a subsubsection about some specific applications to different systems.

§.§.§ Examining weighted cycles and other structural features.

In Sec. <ref>, we discussed why examining cycles can be useful for studying granular contact networks. It is also useful to examine cycles when investigating force networks, which are weighted. For example, <cit.> studied the evolution of weighted contact loops in a simulation of a quasistatically tilted granular packing. They used topological information (i.e., which particles are in contact) to define the presence of a cycle, and they defined a notion of loop stability,

ξ_l = (1/f^l) ∏_{i=1}^{l} f^edge_i ,

to quantify the range of compressive loads that a given loop can support. In Eq. (<ref>), l is the number of edges in the loop (i.e., its length), f^edge_i is the contact force on the i^th edge, and f is the mean edge weight (i.e., the mean force) over all of the edges in the loop. See Fig. <ref> for a schematic of this stability measure for a 3-cycle. For l = 3, the quantity ξ_3 ≈ 1 corresponds to having approximately equal contact forces on all edges and is the most stable configuration (see Fig. <ref>a). The value of ξ_3 approaches 0 as the contact force on one edge becomes much smaller than those on the other two edges. As illustrated in Fig. <ref>b, this situation is rather unstable. Both the density of 3-cycles (specifically, the number of 3-cycles in the system divided by the total number of particles) and a normalized 3-cycle loop stability ξ^*_3 = ⟨ξ_3(θ_g)⟩/⟨ξ_3(θ_g = 0)⟩ (where the brackets denote means over all 3-cycles in a network) tend to decrease with an increasing tilting angle θ_g (see Fig. <ref>c). <cit.> also reported that the effect of tilting on loop stability is largely independent of the effect of tilting on the mean coordination number (i.e., mean degree). <cit.> examined what they called a force cycle, which is a cycle of physically-connected particles in which each contact carries a force above the global mean. Using DEM simulations of a biaxially compressed, dense granular system — the sample was compressed quasistatically at a constant strain rate in the vertical direction, while allowed to expand under constant confining pressure in the horizontal direction — they studied the evolution of 3-force cycles (i.e., force cycles with 3 particles) in a minimal cycle basis of the contact network with respect to axial strain. They observed that 3-force cycles initially decrease in number during strain hardening, before increasing in number at the onset of force-chain buckling <cit.>, and finally leveling out in number in the critical-state regime. (See the third paragraph of Sec. <ref> for a brief description of these different regimes of deformation.) In Fig. <ref>, we show a plot of the number of 3-force cycles and the shear stress versus axial strain. The 3-force cycles that arise at the onset of buckling are often part of force chains (using the definition from <cit.>). Additionally, these 3-force cycles tend to concentrate in the region of the shear band, where they may act as stabilizing structures both by frustrating relative rotations and by providing strong lateral support to force chains. However, with increased loading, the system eventually fails, and <cit.> suggested that the increase in the number of 3-force cycles may be an indicator of failure onset.
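Both of the quantities just discussed are easy to evaluate once the contact loops and their edge forces have been extracted (e.g., from a minimal cycle basis). A minimal sketch, with illustrative names, assuming each loop is given as the list of forces on its edges:

```python
import numpy as np

def loop_stability(edge_forces):
    """Loop stability xi_l = (1/f^l) * prod_i f^edge_i for one contact
    loop, where f is the mean force over the loop's l edges. Values
    near 1 indicate nearly equal edge forces (the most stable
    configuration); values near 0 indicate one much weaker contact."""
    f = np.asarray(edge_forces, dtype=float)
    return np.prod(f) / f.mean() ** len(f)

def is_force_cycle(edge_forces, global_mean_force):
    """A force cycle in the sense described above: a contact loop in
    which every contact carries a force above the global mean force."""
    return bool(np.all(np.asarray(edge_forces) > global_mean_force))
```

For example, loop_stability([1.0, 1.0, 1.0]) equals 1, whereas loop_stability([0.01, 1.0, 1.0]) is roughly 0.03, matching the schematic contrast described above.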
Qualitatively similar results have been observed when examining the evolution of 3-force cycles in three DEM simulations (each with slightly different material properties and boundary conditions) <cit.> and in DEM simulations of 3D ellipsoidal-particle packings subject to triaxial compression <cit.>. Using similar DEM simulations of biaxial compression to those described in the previous paragraph, <cit.> examined the evolution of force-weighted networks with axial strain using several of the network concepts that we discussed in Sec. <ref>. Unsurprisingly, they found that including contact forces in their network analysis yields a more complete characterization of a granular system than ignoring them. One measure that they found particularly useful is a weighted version of subgraph centrality (see Sec. <ref>). From a contact network, <cit.> first extracted all conformation subgraphs. As described in Sec. <ref>, a conformation subgraph is a subgraph that consists of a given particle i and that particle's immediate contacts. (Each particle in a network thus yields one conformation subgraph.) To incorporate inter-particle force information, <cit.> generated force-weighted conformation subgraphs by weighting each edge in the conformation subgraphs by the magnitude of the normal force component along that contact. They then computed a weighted subgraph centrality Y^w_i for each force-weighted conformation subgraph, as well as the magnitude |ΔỸ^w| of the change in a suitably averaged version Ỹ^w of this quantity between consecutive strain steps (see <cit.> for details). They observed that the temporal evolution of |ΔỸ^w| with strain step is effective at tracking large changes in the energy-dissipation rate that can occur due to rearrangement events (e.g., force-chain buckling) associated with the loss of inter-particle contacts. They also observed that the central particles of the conformation subgraphs that undergo the largest changes in weighted subgraph centrality seem to be associated with regions of high dissipation, such as the shear band and buckling force chains (or the neighboring particles of those force chains). See <cit.> for the employed definition of force chains and <cit.> for the employed specification of force-chain buckling. <cit.> highlighted that network analysis — and especially the examination of mesoscale features — can be helpful for gaining insights into the mechanisms that regulate deformation in granular materials. Such studies can perhaps also help guide efforts in thermo-mechanical constitutive modeling <cit.>.

§.§.§ Extracting multiscale architectures from a force network using community detection.

A major benefit of studying a network representation of a granular system (and using associated computational tools) is that it provides a natural framework in which to probe structure and dynamics across several spatial scales. One can examine different spatial scales in multiple ways, including both physically (e.g., using distance in Euclidean space or in some other metric space) and topologically (e.g., using the hop distance along edges in a network). In these studies, one can use network diagnostics and approaches like the ones discussed in Sec. <ref>. The ability to successfully study mesoscale architecture, which manifests at intermediate scales between particle-level and system-wide organization, is an especially important contribution of network analysis. One of the most common ways to examine mesoscale structures is with community detection (see Sec.
<ref>), which one can use to extract sets of densely-connected nodes in a network <cit.>. One can also tune community-detection methods to examine sets of nodes across a range of sizes, from very small sets (with a lower limit of one node per set) to large sets (with an upper limit of all nodes in a single set). By applying multiscale community-detection methods to force-weighted contact networks of photoelastic disks, <cit.> identified chain-like structures that are visually reminiscent of force chains in granular packings. Notably, the algorithmic extraction of these “distributed” mesoscale structures <cit.>, in contrast to “compact”, densely-connected geographical domains in a material <cit.>, required the development of a geographical null model, which can be used in modularity maximization and which encodes the fact that a given particle can exert force only on other particles with which it is in direct contact <cit.> (see Sec. <ref>). The different type of mesoscale organization extracted by this geographical null model highlights the fact that physically motivated network-based approaches, which incorporate spatial and/or physical constraints, may give different information about a granular system than network calculations based only on network structure (i.e., without considering known context about a network, so that “one size fits all”). In a modularity-maximization approach to community detection, one can also tune a resolution parameter of a modularity objective function to identify and characterize network communities of different sizes. This can also be combined with inference procedures to determine particularly important scales. One interesting result from <cit.> is that properties of force-chain-like communities can distinguish frictional, laboratory packings from frictionless, simulated ones, allowing a quantification of structural differences between these systems. In later work, <cit.> used similar techniques to examine the friction-dependence and pressure-dependence of community structure in 3D simulations of compressed granular materials. To further quantify such mesoscale organization and examine how it changes with compression, <cit.> extracted communities using the geographical null model, and they used ideas from algebraic topology to define a topological compactness factor that quantifies the amount of branching — versus compact, densely-interconnected regions — in communities of a force network from 2D granular systems. The approach from <cit.> was extended to multilayer networks (see Sec. <ref>) in <cit.>, providing a way to link particulate communities across compression steps (rather than extracting new ones at each step) when examining how such communities reconfigure. These studies helped lay the groundwork for improving understanding of how the multiscale nature of force-chain architecture impacts bulk material properties. Various community-detection approaches have also been used for identifying other types of inhomogeneities in granular matter <cit.>. Before moving on, it is important to note that — although related — the definition of force-chain structure using the community-detection approaches that we described above <cit.> differs from the definitions of force chains that have been used in some other studies (e.g., see <cit.>). In future work, it is important to examine how the properties of force chains differ when they are defined in different ways.

§.§.§ Some applications.
Comminution processes. A network-based approach can give fascinating insights into comminution, the fragmentation of a material into smaller pieces. <cit.> used DEM simulations to study comminution in a granular material under uniaxial compression and reported that the degree distribution of the system's contact network (which we recall is unweighted) evolves towards a power law during this process. This is consistent with the development of a power-law grain-size distribution, in which large particles are hubs that have many smaller, neighboring particles, which make up the majority of a packing. <cit.> also examined several other features (such as measures of network efficiency, node betweenness, and cycle populations) of both contact networks and networks weighted by the normal force between particles, as a function of increasing strain, to probe what changes occur in a granular system during comminution.

Heat transfer. Another problem that has been examined using network-based methods is heat transfer in granular matter. Using a heat-transport model on simulations of a compressed granular material, <cit.> probed the effects of heterogeneity in the force distribution and of the spatial arrangement of forces in a system on heat transfer through the material. Specifically, they compared measures of transport in the (normal) force-weighted network of a granular system to those in two null-model networks with the same contact topology but with either (1) homogeneous, uniform edge weights that were equal to the mean force of the packing; or (2) the same set of heterogeneous edge weights as the actual granular network, but assigned uniformly at random to the edges. <cit.> estimated the thermal diffusivity and effective conductivity from simulations on each network, and they observed that the real granular system has significantly higher diffusivity and effective conductivity than the homogeneous null model. Additionally, comparing the results from the real material to the null model with randomly reassigned edge weights demonstrated that the qualitative differences between the real granular network and the homogeneous null model could not be explained by the heterogeneity in the force distribution alone, as the authors observed that this second null model (with randomly reassigned edge weights) was also not a good medium for heat transfer. To investigate what features of a granular network facilitate efficient heat transfer, <cit.> defined a weighted network distance (see Sec. <ref>) between particles i and j as d^w_ij = 1/H_ij, where H_ij is the local heat-transfer coefficient, such that the network distance between two particles in contact is proportional to that contact's resistance to heat transfer. Note that H_ij ∝ f_ij^ν, where f_ij is the magnitude of the normal force between i and j, and ν ≥ 0 is a constant. They then defined a network-based (but physically-motivated) measure of heat-transport efficiency as the weighted efficiency E^w (see Sec. <ref>) computed using the distances d^w_ij. In a comparison between the real granular system and the two null models, E^w gave the same quantitative results as the effective conductivity. In particular, the calculations in <cit.> revealed that the real granular system has a larger efficiency than that of either null model, suggesting that the spatial distribution of force-chain structure in the granular network appears to facilitate heat transport. Finally, iterative edge removals in decreasing order of geodesic edge betweenness centrality [Eq.
(<ref>)] yield a faster decrease in effective conductivity than either edge removals done uniformly at random or edge removals in decreasing order of the local heat-transfer coefficient, further illustrating the utility of network-theoretic measures for examining transport phenomena in granular systems.

Acoustic transmission. One can also examine the effect of network structure on properties such as electrical conductivity in systems composed of metallic particles, or on other types of transport (such as sound propagation) through a particulate material. The transmission of acoustic signals through granular materials is poorly understood <cit.>, and it is particularly challenging to explain using continuum or particulate models <cit.>. A few years ago, <cit.> represented compressed, 2D packings of bidisperse, photoelastic disks as force-weighted contact networks, and found that some network diagnostics are able to identify injection versus scattering phases of acoustic signals transmitted through a granular material. Among the diagnostics that they computed, the authors observed that network efficiency (see Eq. (<ref>) in Sec. <ref>) is correlated positively with acoustic transmission during the signal-injection phase, suggesting that high-amplitude and high-energy signals are transmitted preferentially along short paths (and perhaps even shortest paths) through a force-weighted contact network. In contrast, low-amplitude and low-energy signals that reverberate through a packing during the subsequent signal-scattering phase correlate positively with the intra-community strength z-score, which characterizes how strongly a node connects to other nodes in its own community. These results suggest that one can use network diagnostics that probe diverse spatial scales in a system to describe different bulk properties. Because <cit.> did not use community-detection approaches informed by a geographical null model (see Sec. <ref>), it did not address (and it is not yet fully understood) how acoustic transmission depends on the multiscale architecture of chain-like structures reminiscent of force chains. This remains an open issue, and network-based approaches — e.g., using geographical null models and other ideas that pay attention to the role of space — are likely to be important in future work on this topic.

§.§.§ Thresholded force networks.

Before researchers started using network-based methods as a common perspective for studying granular materials, <cit.> reported that under stress, the force network in simulations of granular matter organizes into two subsets: one set with “strong” contacts and another set with “weak” contacts. The strong subnetwork of forces forms a backbone of chain-like structures that carry most of a system's load and which tend to align approximately with the direction of compression. Between these strong force chains, there is a weak subnetwork of contacts that carry forces that are less than the mean. This weak subnetwork tends to have an anisotropy that is orthogonal to the compression, and it may provide support to the backbone of strong forces. Such heterogeneity in a force network is an interesting feature of granular materials, and network-based approaches provide a direct way to examine how strong and weak contacts can play important roles in material properties and stability. These ideas have been explored using force-thresholded networks <cit.>, in which one retains only contacts that carry a force of at least some threshold f_th.
That is,

A^th_ij = \begin{cases} 1, & \text{if particles } i \text{ and } j \text{ are in contact and } f_ij ≥ f_th , \\ 0, & \text{otherwise} . \end{cases}

The threshold f_th should be lower-bounded by the smallest non-zero contact force f_min in the system being considered. (Note that when f_th = f_min, one includes all contacts in the force-thresholded network.) It is common to use a system's mean force ⟨ f ⟩ as a reference point and to systematically vary f_th to be different fractions of the mean force. Whether one uses the normal or the tangential component of the force, varying f_th results in a series of thresholded networks that one can subsequently characterize using traditional network or percolation-based analyses (which we discuss in the following three paragraphs of this subsubsection) or using methods from computational algebraic topology (which we discuss in Sec. <ref>). See Fig. <ref> for an example depicting the contacts and forces in a simulation of a 2D granular material at two different values of the force threshold. There is also a connection between the idea of force-thresholded networks and carrying out modularity maximization with the geographical null model [Eq. (<ref>)] and a resolution parameter γ (see Sec. <ref>). Similar to how increasing the threshold f_th selects the subset of particles in a network for which inter-particle forces exceed or equal the threshold, in modularity maximization, increasing the resolution-parameter value γ tends to yield communities of particles such that, within a community, the inter-particle forces are at least as large as the threshold γ⟨ f ⟩. <cit.> examined several network diagnostics — e.g., mean degree, shortest-path length, diameter, LCC size, and component-size distributions — as a function of the force threshold f_th (and at different values of the packing fraction) in DEM simulations of 2D granular packings under isotropic compression. The computations in <cit.> suggest that the way in which many of these measures change as a function of f_th depends on the packing fraction, and many of the measures can thus potentially be used to help understand differences in the organization of force networks above versus below the jamming point. For example, for packing fractions above the jamming point, the LCC size and the shortest-path length undergo qualitative changes in behavior near a threshold f_th ≈ ⟨ f ⟩, signifying interesting structural changes in the organization of the force networks at that threshold. The relationship between the number of 3-particle contact cycles (i.e., triangles) and the force threshold was examined in <cit.>. In the jammed state, they observed a steep decline in the number of 3-cycles in the networks as they increased the threshold and considered only progressively larger forces. (As we discuss in Sec. <ref>, one can also use methods from computational algebraic topology to examine the evolution of cycle organization in dense granular materials.) This observation suggests that triangles (or at least one of their contacts) belong primarily to the weak subnetwork of forces that helps support strong, filamentary force-chain structures. See Fig. <ref> and Secs. <ref> and <ref> for other discussions of the roles of cycles and their relationship to force chains. Another way to study force-thresholded granular networks is by using a percolation-like approach (see Sec. <ref>). For example, one can examine the sizes and number of connected components in a thresholded network as a function of f_th <cit.>.
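Both the thresholding construction above and its community-detection analogue are straightforward to express computationally. The following minimal sketch (with our own function and variable names) thresholds a force-weighted matrix W and evaluates, rather than maximizes, the modularity of a given partition; we assume the geographical null model takes the mean-force form P_ij = γ f̄ A_ij, with f̄ the mean force per contact, which is one common way to instantiate the null model described above.

```python
import numpy as np

def threshold_force_network(W, f_th):
    """Force-thresholded network: keep only contacts with f_ij >= f_th."""
    return ((W >= f_th) & (W > 0)).astype(int)

def geographical_modularity(W, labels, gamma=1.0):
    """Modularity of a partition under a geographical null model
    P_ij = gamma * fbar * A_ij, where A is the binary contact network
    and fbar is the mean force per contact; this encodes the constraint
    that particles exert force only on their physical contacts.

    labels : integer community assignment for each particle.
    """
    A = (W > 0).astype(float)
    fbar = W[W > 0].mean()                   # mean force over contacts
    c = np.asarray(labels)
    same = c[:, None] == c[None, :]          # delta(c_i, c_j)
    B = W - gamma * fbar * A                 # generalized modularity matrix
    return B[same].sum() / W.sum()           # normalized by total edge weight
```

Maximizing this quantity over partitions (e.g., with a Louvain-like heuristic) is what actually extracts the chain-like communities; that optimization step is not sketched here.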
For dense packings, the intuition is that when f_th is very large, a force-thresholded granular network splits up into several disconnected components; and as one decreases f_th, these components begin to merge until eventually all contacts are in a single component. In this type of bond percolation, whether an edge is included in the network thus depends on the force that it carries. One can look for a critical threshold f^c_th such that for f_th > f^c_th, the network fragments into many small components, but as one lowers the threshold towards f^c_th, a large, percolating cluster forms in the system (see Sec. <ref>). Quantities that are often investigated when studying this type of force-percolation transition include f^c_th and “critical exponents” for the transition (see Sec. <ref>). Because (both experimental and computational) granular systems are finite in practice, such investigations often use finite-size scaling techniques. Several simulation-based studies of granular systems have deployed percolation analyses based on force-thresholded networks to quantify the organization of granular force networks, how such organization changes with increasing compression, and other phenomena <cit.>. For example, <cit.> studied force percolation in the q-model <cit.> of anisotropic granular force networks, in which there is a preferred direction of force propagation. They concluded that the asymmetry in the model has a significant effect on the percolation transition, and they found that the critical exponents differ from those of isotropically compressed granular force networks. <cit.> investigated force percolation in a variety of simulations of slowly compressed, 2D granular systems. They examined the effects of polydispersity and friction, finding that these factors can qualitatively influence various features of the percolation transition. Very recently, <cit.> also investigated the force-percolation transition in simulations of jammed granular packings at fixed pressures.

§.§.§ Methods from computational algebraic topology.

In addition to traditional approaches for network analysis, one can also study the architecture of granular networks using ideas from algebraic topology. Persistent homology (PH) <cit.> (see Sec. <ref>) seems especially appropriate, and over the past several years, it has provided a fruitful perspective on the structure of compressed <cit.> and tapped <cit.> granular materials. Very recently, it has also been used to study responses of granular materials to impact <cit.>. One way to characterize the organization and evolution of granular force networks is to examine how Betti numbers (see Sec. <ref>) change as a function of a force threshold (and also as a function of packing fraction) in compressed granular systems. This includes studying the birth and death of components (determined by β_0) and loops (determined by β_1) as a function of a force threshold by computing and analyzing persistence diagrams (see Fig. <ref> of Sec. <ref>). Examining when and how long different features persist in a network provides a detailed characterization of the structure of granular force networks, and one can quantify differences between two networks by defining measures of “distance” between their associated persistence diagrams.
These capabilities allow a PH framework to provide a distinct set of contributions that complement more traditional network-based analyses of components, cycle structure, and other features. <cit.> investigated how simulated 2D granular force networks evolve under slow compression as they cross the jamming point. They first demonstrated that one can identify the jamming transition by a significant change in behavior of β_0 (specifically, there is an increase in the number of components at a force threshold approximately equal to the mean force ⟨ f ⟩), and that structural properties of the network — such as the size of the connected components — continue to change above jamming. <cit.> also demonstrated that β_0 and β_1 can quantitatively describe the effects of friction and polydispersity on the organization of force networks (and can distinguish how friction and polydispersity alter the structure of a force network). This work was extended by <cit.>, who examined numerical simulations of 2D, slowly compressed, dense granular materials using PH. In addition to examining the values of the Betti numbers, they also computed β_0 and β_1 persistence diagrams (PD_0 and PD_1, respectively) as the system was compressed to quantify the appearance, disappearance, and lifetimes of components and loops. In <cit.>, they defined a filtration over the clique complex of the networks (see Sec. <ref>), so only loops with four or more particles were counted. To extract useful information from the PDs, they binned the persistence points in each diagram into different regions corresponding to features that (1) are born at any force threshold but have relatively short lifetimes, compared to those that are born at (2) strong, (3) medium, or (4) weak forces and that persist for a large range of thresholds. Their persistence analysis led to several insights into the structure of a normal-force network as a granular system is compressed, as well as insights into differences in the structure of a normal-force network for systems with different amounts of friction and different polydispersities. For example, <cit.> observed that, near the jamming point, frictionless packings appear to have more “extreme” features than frictional packings, in the sense that frictionless packings have many more β_0 persistence points that are born at either weak or strong forces and that are relatively long-lived. Such effects of polydispersity and friction may be difficult to detect using traditional measures such as the probability density function of the normal forces. <cit.> also used PH to study force networks from simulations of slowly compressed, polydisperse packings of disks in 2D as they traverse the jamming transition (see Fig. <ref>a). As they compressed the system through a range of different packing fractions ρ∈ [0.63,0.9], they extracted force information at approximately fixed time intervals during the simulation, and they then computed PDs of components and loops (i.e., PD_0 and PD_1, respectively) for each force network sampled during compression (see Fig. <ref>b). As in other studies, <cit.> used the clique complex of the force networks to avoid counting 3-particle loops. They then used the bottleneck distance d_B and two variants, d_W1 and d_W2, of the Wasserstein distance <cit.> to quantify differences between two PDs and thereby help quantify differences in local and global features between two granular force networks.
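As an illustration of this kind of pipeline, the following sketch computes PD_0 and PD_1 for small force networks under a descending-force filtration and compares two states with a bottleneck distance. It assumes the availability of the gudhi library (including its bottleneck-distance module); the filtration convention, the toy force values, and the function name are illustrative assumptions rather than the exact procedures of the cited studies:

```python
import gudhi  # assumed available (e.g., via `pip install gudhi`)

def force_network_persistence(forces):
    """Persistence diagrams of a force network under a descending-force filtration.

    `forces` maps particle pairs (i, j) to contact-force magnitudes. Inserting
    edges in order of decreasing force, with filtration value -f_ij, lets strong
    contacts enter the complex first, mimicking a sweep of f_th from high to low.
    """
    st = gudhi.SimplexTree()
    for (i, j), f in sorted(forces.items(), key=lambda kv: -kv[1]):
        st.insert([i, j], filtration=-f)  # missing vertices are added automatically
    st.expansion(2)   # fill in triangles (clique complex up to dimension 2), so
                      # that PD_1 records only loops with four or more particles
    st.persistence()  # compute the persistence pairs
    return (st.persistence_intervals_in_dimension(0),   # components (beta_0)
            st.persistence_intervals_in_dimension(1))   # loops (beta_1)

# Two hypothetical states of a packing (e.g., consecutive strain steps).
pd0_a, pd1_a = force_network_persistence({(0, 1): 1.3, (1, 2): 0.4, (2, 3): 1.0, (3, 0): 0.9})
pd0_b, pd1_b = force_network_persistence({(0, 1): 1.1, (1, 2): 0.7, (2, 3): 0.9, (3, 0): 0.8})

# One measure of how much the force geometry changed between the two states.
print(gudhi.bottleneck_distance(pd0_a, pd0_b))
```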
As described in <cit.>, the bottleneck distance captures only the largest difference between two PDs, whereas the Wasserstein distances include all differences between two PDs, with d_W1 more sensitive than d_W2 to small changes. (See <cit.> for more details.) Calculating the various types of distance between the two β_0 PDs and the two β_1 PDs for consecutive samples (i.e., consecutive states) of a force network allows one to characterize different kinds of variations in the geometry of force networks as a function of packing fraction (see Fig. <ref>c). Using the employed distance measures, <cit.> observed that in the unjammed state, there can be significant (but localized) reorganization in force geometry as a packing is compressed. They also concluded that the jamming transition is characterized by rapid and dramatic global rearrangements of a force network, followed by smoother and less dramatic reconfiguration in the system above jamming, where the distances between consecutive states of a packing are much smaller than in the unjammed state. <cit.> also found that tangential-force networks seem to exhibit similar behavior (in most respects) to that of normal-force networks. They also observed that friction can have a significant impact on how the geometry of the forces reconfigures with compression. For example, they showed that the rate of change of loop features (as measured by the distances between consecutive PD_1s) is larger for a frictional system than for a frictionless one below the jamming point, but just before jamming and thereafter (i.e., during the entire jammed state), differences in loop structure between consecutive packing fractions are larger for frictionless systems than for frictional ones. In very recent work, <cit.> used PH to examine the temporal scales on which granular force networks evolve during slow compression. They simulated dense 2D granular materials and studied the influence of the externally-imposed time scale — set by the rate of compression — on how frequently one should sample a system during compression to be able to evaluate its dynamics near the jamming transition. By varying the sampling rate and carrying out a persistence analysis to quantify the distance between consecutive sampled states of the system, they found that close to jamming, a force network evolves on a time scale that is much faster than the one imposed by the external compression. See <cit.> for further details. One can also use PH to study force networks in tapped granular systems. <cit.> examined DEM simulations of two different 2D systems exposed to tapping. One type of packing consisted of monosized disk-shaped particles, and the other type consisted of monosized pentagon-shaped particles. <cit.>'s investigation suggested that particle shape can play an important role in mechanical responses, which is consistent with observations from classical investigations of granular materials <cit.>. More specifically, <cit.> computed β_0 and β_1 as a function of force threshold in both normal-force networks and tangential-force networks. They observed for both types of force-weighted networks (but particularly for the tangential one) that the first two Betti numbers are able to clearly distinguish between disks and pentagons, where β_0 (respectively, β_1) is consistently larger (respectively, smaller) for pentagons across a wide range of force thresholds.
However, using only β_0 and β_1, <cit.> were unable to clearly differentiate states that have similar packing fractions but result from different tap intensities. In a follow-up investigation, <cit.> simulated a series of several taps to granular packings and used PH to examine how normal and tangential force-weighted networks vary between individual tap realizations. Specifically, they computed distances between PD_0s and between PD_1s of force networks associated either with individual realizations of tapping to the same system or with individual realizations of tapping to two different systems. In one part of their study, they examined systems of disks exposed to a series of taps at two different tap intensities. (See Sec. <ref> for a rough delineation of “low” tapping intensity versus “high” tapping intensity.) They observed that in terms of loop structure, the networks generated from a series of taps at low intensity differ far more substantially from each other — as quantified by the distribution of distances between the PD_1s for different realizations of the tapping — than do the force networks from a series of taps at high intensity. They also observed that the distances between different realizations of low-intensity tapping are as large as the distances between low-intensity tapping and high-intensity tapping realizations. Therefore, although the high-intensity tapping and low-intensity tapping regimes yield networks with approximately the same packing fraction (see Sec. <ref>), one can use methods from PH to help explain some of the differences between the packing structure in the two regimes. In another part of their study, <cit.> carried out a persistence analysis of tapped packings of disks and tapped packings of pentagons, and they observed clear distinctions between the two systems based on calculations of β_1 PDs. For example, for each system, they computed the PD_1s for a set of networks associated with several individual realizations of the same tapping intensity, and they then computed a distance between each pair of PD_1s for realizations within the same packing and across the two types of packings. They observed that the distribution of distances between the PD_1s of individual tapping realizations to the packing of pentagons is narrower and centered at a smaller value than the distribution of distances between individual realizations of taps for the packing of disks. They also observed that the distances between the disk and pentagon systems are much larger than those between different realizations of the disk system. Thus, <cit.> were able to distinguish clearly between tapped disk packings and tapped pentagon packings using PH, especially when considering properties of loop structures. Past work using 2D experiments has also been able to distinguish between the dynamics of disk and pentagon packings using conventional approaches <cit.>. One can also use methods from computational algebraic topology to study granular networks in which one uses edge weights from something other than a force. For example, <cit.> used only particle positions (in the form of point clouds) and computed Betti numbers to distinguish states at the same density that are at different mechanical equilibria.
Using both experimental and simulated 2D granular packings of monodisperse particles, they constructed networks by locating the center of each particle and then introducing a filtration parameter δ, such that any two particles separated by a Euclidean distance less than or equal to δ are adjacent to each other in a graph. They considered δ∈ [d,1.12d], where d is the particle diameter and the domain for δ resembles the choices that are used for determining if particles are in physical contact with each other. The authors computed, as a function of δ, the first Betti number β_1 on the whole network to count the total number of loops at a given δ. They also computed β_1 on the flag complex (see Sec. <ref>), thus counting the number of loops with four or more nodes at a given value of δ. For values of δ that are slightly larger than d, <cit.> were able to separate states at the same packing fraction that are generated by tapping at different intensities. They observed for a fixed packing fraction that states that arise from lower-intensity tapping have a larger value of β_1 when computed on the whole network and a lower value of β_1 when computed on the flag complex. Their results were robust to both noise and errors in particle-position data.

§.§ Other network representations and approaches

§.§.§ Network-flow models of force transmission.

Another technique for gaining insight into the organization of forces in deforming granular systems, and how microscale aspects of a force network lead to macroscale phenomena such as shear bands and material failure, is to view force transmission from the perspective of maximum-flow–minimum-cut and maximum-flow–minimum-cost problems (see Sec. <ref>). To examine a granular system using such a perspective, one can consider the “flow” of force through a contact network (with some contacts able to transmit more force than others), which in turn yields a “cost” to the system in terms of energy dissipation at the transmitting contacts <cit.>. One can calculate flow and costs in routes through a network (and hence determine bottlenecks in force transmission) to gain understanding of how contact structure relates to and constrains a system's ability to transmit forces in a material. For example, <cit.> constructed flow networks from DEM simulations of a system of polydisperse particles compressed quasistatically under a constant strain rate in the vertical direction and allowed to expand under constant confining pressure in the horizontal direction. At a given axial-strain value (i.e., “strain state”), they assigned uniform capacities u_ij to each edge of the contact network to reflect the maximum flow that can be transmitted through each contact, and they assigned costs c_ij to each edge to model dissipation of energy at each contact. After each axial-strain increment during loading, they then solved the maximum-flow–minimum-cost problem for the network at the given strain state, finding that edges in the minimum cut (yielding bottlenecks in the force-transmission networks) localize in the material's shear band. By using costs c_ij that reflected the type of inter-particle contact (specifically, elastic contacts versus various types of plastic contacts), <cit.> were able to track different stages of deformation (i.e., strain-hardening, strain-softening, and the critical-state regime).
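A minimal sketch of such a maximum-flow–minimum-cut (and maximum-flow–minimum-cost) computation, using networkx, appears below. The wiring of an auxiliary source and sink to boundary grains, and all of the capacities and costs, are hypothetical choices for illustration only:

```python
import networkx as nx

# Hypothetical flow network from a small contact network. Each contact gets an
# integer transmission capacity u_ij and dissipation cost c_ij; an auxiliary
# source (sink) is wired to grains on the loaded (supporting) boundary so that
# flow mimics force transmission across the packing.
contacts = {(0, 1): (1, 2), (0, 2): (1, 1), (1, 3): (1, 3),
            (2, 3): (1, 1), (1, 2): (1, 2)}
D = nx.DiGraph()
for (i, j), (u_ij, c_ij) in contacts.items():
    D.add_edge(i, j, capacity=u_ij, weight=c_ij)  # allow transmission in
    D.add_edge(j, i, capacity=u_ij, weight=c_ij)  # either direction

for g in [0]:  # grains on the loaded boundary
    D.add_edge("source", g, capacity=10, weight=0)
for g in [3]:  # grains on the supporting boundary
    D.add_edge(g, "sink", capacity=10, weight=0)

# Bottlenecks in force transmission: the contacts that cross the minimum cut.
cut_value, (S, T) = nx.minimum_cut(D, "source", "sink")
bottlenecks = [(i, j) for (i, j) in D.edges if i in S and j in T]
print("min-cut value:", cut_value, "; bottleneck contacts:", bottlenecks)

# Cheapest routing of the maximum flow (the maximum-flow-minimum-cost problem).
flow = nx.max_flow_min_cost(D, "source", "sink")
print("dissipation cost of routing the maximum flow:", nx.cost_of_flow(D, flow))
```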
The authors of that study also computed a minimal cycle basis and observed that a large majority of force-chain particles and particles in 3-cycles are involved in the set of contacts that comprise the maximum-flow–minimum-cost routes. One can use the above approach with various definitions of force capacity and cost functions. Using simulations of the same type of system as that in the previous paragraph, <cit.> constructed networks — one for each strain state as a system was loaded until it failed — that incorporated information about both the inter-particle contacts at a given strain state and the particle displacements that occur between consecutive strain steps. Specifically, if nodes i and j are in contact at a given strain state, the weight of the edge between them is the inverse of the magnitude of the relative displacement vector of particles i and j, where one computes the displacement for each particle from the previous strain state to the current one. The edge weight between nodes i and j is 0 if the associated particles are not in contact. The intuition behind this capacity function is that, when there is more relative motion between a pair of particles, one expects those particles to have less capacity to transmit force to each other. <cit.> used capacities that incorporate 3-cycle memberships of edges in a study of minimum cuts of a flow network in two samples of 3D sand under triaxial compression and in a 3D DEM simulation of simple shear. Grains in the bottlenecks localize early during loading and are indicative of subsequent shear-band formation. Other work <cit.> studied DEM simulations of compressed, 3D bonded granular materials (where bonded signifies that the grains are connected via solid bonds of some strength) and a system of 2D photoelastic disks under shear stress with the goal of testing the hypothesis that an appropriate maximum-flow–minimum-cost approach can identify experimentally-determined load-bearing particles and force-chain particles without relying on knowledge of contact forces. <cit.> examined different combinations of force-transmission capacity and cost functions, and they examined the fraction of force-chain particles that are part of the associated maximum-flow–minimum-cost network for a given capacity and cost function. In both cases, costs based on 3-cycle membership of edges seem to yield large values of these fractions, and <cit.> were able to successfully forecast most of the particles that eventually become part of force chains without using information about contact forces.

§.§.§ Broken-link networks.

In previous discussions (see Secs. <ref>, <ref>), we have seen that one way to investigate the evolution of a granular system under an applied load is (1) to compute contact networks or force-weighted networks for the system as a function of packing fraction, strain, or some other control parameter, and then (2) to study how different features and properties of the networks emerge and change as one varies that parameter. One can also use other network constructions to explore different mesoscale features and examine system dynamics. For example, <cit.> designed a broken-link network (see Fig. <ref>) to study the dynamics of 3D granular flows. They conducted an experiment on a collection of acrylic beads immersed in a box of liquid medium, shearing the system at a constant rate (Ω≈ 1.05 × 10^-3 rad/s) by a rotating circular disk at the bottom of the box. (See <cit.> for details about their experiments.)
First, they constructed proximity networks (a variant of a contact network) as a function of time. (Specifically, they constructed one network for every 3-degree increment of rotation.) In their proximity network, they assigned a “contact” (edge) to each pair of particles whose distance from one another in the given frame was within a specified distance threshold, which they chose to be a conservative upper bound for particle contact. They then defined a broken link (relative to some reference frame) as an existing edge between two particles in the reference frame that was subsequently absent in at least two later time frames (due to the particles having moved apart). A broken-link network for the frame in which a pair of particles moved apart had an edge between the two particles that were initially in contact, and broken links were not allowed to reform later. In Fig. <ref>a, we illustrate this procedure for constructing a broken-link network. Studying the temporal evolution of a broken-link network provides a quantitative approach for examining particle rearrangement events in granular matter. To probe the temporal evolution of a granular system, <cit.> examined the size of the LCC in a sequence of broken-link networks as a function of applied shear, drawing an analogy between the fraction χ_b of broken links and the occupation probability in traditional percolation problems (see Sec. <ref>). They observed that the fraction s_g of nodes in the LCC of the broken-link network grows with χ_b in a way that suggests that there is a continuous phase transition in s_g, and they approximated the value of χ_b at which this transition occurs. (However, as we noted in Sec. <ref>, because these networks have finite sizes, one needs to be cautious regarding statements about percolation and phase transitions.) From a physical standpoint, this transition region corresponds to a characteristic deformation at which broken links — which are due to particle rearrangements — start to become globally connected, relative to a reference proximity network. By examining χ_b as they applied shear to the system, <cit.> approximated a characteristic amount of strain associated with this transition region (by mapping the value of χ_b associated with the transition region to a corresponding value of strain), and they suggested that the determined strain scale may be useful for identifying the onset of global reorganization in the system. In a later study, <cit.> used a similar approach to examine progression from reversible to irreversible dynamics in granular suspensions under oscillatory shear strain. In experiments similar to the one described above <cit.>, the authors considered a series of 20 shear cycles of the following form: for each cycle in one experiment, a suspension was sheared with a rotating disk up to a strain amplitude of θ_r, after which the direction of rotation was reversed and the disk was rotated by -θ_r back to its original location. The authors performed experiments for θ_r values of 2^∘, 4^∘, 10^∘, 20^∘, and 40^∘. In a qualitative sense, measuring “reversibility” of the dynamics in this setup entails determining the extent to which particles return to their original positions after some number of shear cycles. To quantify this idea, one can compute the mean square displacement (MSD) of the particles after each cycle, where smaller MSD values correspond to more reversible dynamics.
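As a concrete rendering of this diagnostic, the following sketch (using numpy, with hypothetical array shapes and synthetic trajectory data) computes the MSD relative to the initial configuration after each shear cycle:

```python
import numpy as np

def msd_per_cycle(positions):
    """Mean square displacement after each shear cycle.

    `positions` has shape (n_cycles + 1, n_particles, 3) and holds the particle
    positions at the end of each cycle (entry 0 is the initial state). Smaller
    values correspond to more reversible dynamics.
    """
    displacements = positions - positions[0]  # displacement from the initial state
    return (displacements[1:] ** 2).sum(axis=2).mean(axis=1)

# Hypothetical trajectory data: 20 cycles, 5 particles, 3D positions.
rng = np.random.default_rng(0)
positions = np.cumsum(rng.normal(scale=0.01, size=(21, 5, 3)), axis=0)
print(msd_per_cycle(positions))
```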
To study this system, <cit.> adjusted the idea of a broken-link network to include “healing”, so that broken links that are repaired in later frames do not contribute edges in the broken-link network, and they studied the temporal evolution of this broken-link network as a function of the cyclic shear and for different amplitudes θ_r. In addition to the extent of spatial (ir)reversibility measured by calculating the MSD, <cit.> also proposed a notion of topological (ir)reversibility by examining the temporal evolution of the size of the LCC in their broken-link networks. For low values of θ_r (specifically, for θ_r ≤ 20^∘), the system appears to be almost reversible: proximity-based contacts break, but they reform after shear reversal, and the fraction of particles in the LCC of the broken-link network thus grows before subsequently shrinking to almost 0 after reversal. However, for a higher shearing amplitude (specifically, for θ_r = 40^∘), the system shows signatures of irreversibility. Many broken links do not reform after a shear cycle, and after reversal, the LCC of the broken-link network remains at a value that constitutes a substantial fraction of the total system size.

§.§.§ Constructing networks from time series of node properties or from kinematic data.

Another way to examine the organization of deforming granular materials is to construct networks based on the temporal evolution of particle properties <cit.>, an idea that draws from earlier work in complex systems on constructing networks from different kinds of time-series data. (See, for example, <cit.>.) In one type of construction, which yields what are sometimes called functional networks <cit.>, one records the time series of some property of each particle and places an edge (which can potentially be weighted and/or directed) between two particles according to some relationship (e.g., some type of distance or other measure of similarity) between the particle-property time series. If one uses a different property, the nodes are the same, but the edges (and especially the edge weights) will in general be different. Once networks have been constructed, one can examine them using techniques such as those in Sec. <ref>. Some authors have also used the generic term particle-property networks to describe networks that they constructed in some way from particle properties. <cit.> used particle-property networks to study the evolution of DEM simulations of a quasistatically deforming, 2D granular material under biaxial compression. They constructed time series for two features (as well as their coevolution) for each particle — membership in force chains (determined as in <cit.>) and membership in 3-cycles of a minimal cycle basis — by recording a 1 if a particle has the given property and a 0 if it does not. They then quantified the similarity of particle-property evolution using the Hamming distance <cit.>, and they added (undirected and unweighted) edges between each particle and its k closest (i.e., most similar) particles until eventually obtaining a network with a single connected component. <cit.> then extracted sets of particles that exhibit similar dynamic behavior by detecting communities (see Sec. <ref>) in the particle-property networks. This uncovered distinct regions in the material — including the shear band and different subnetworks composed of primarily force chain or primarily non-force chain particles — as well as interlaced regions in the shear band that alternate between jammed and unjammed configurations.
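A minimal sketch of this style of construction appears below (in Python, with numpy and networkx). The binary property data are hypothetical, and the incremental choice of k is one simple reading of the procedure just described:

```python
import numpy as np
import networkx as nx

def particle_property_network(series):
    """Build a particle-property network from binary property time series.

    `series` has shape (n_particles, n_steps); entry (i, t) is 1 if particle i
    has the property (e.g., force-chain membership) at time t. Each particle is
    linked to its k most similar particles (smallest Hamming distance), with k
    increased until the resulting network is connected.
    """
    n, n_steps = series.shape
    # Pairwise Hamming distances between the particles' time series.
    dist = (series[:, None, :] != series[None, :, :]).sum(axis=2)
    np.fill_diagonal(dist, n_steps + 1)  # exclude self-comparisons
    order = np.argsort(dist, axis=1)     # most similar particles first

    for k in range(1, n):
        G = nx.Graph()
        G.add_nodes_from(range(n))
        for i in range(n):
            G.add_edges_from((i, int(j)) for j in order[i, :k])
        if nx.is_connected(G):
            return G, k

# Hypothetical data: 6 particles observed over 10 strain steps.
rng = np.random.default_rng(1)
G, k = particle_property_network(rng.integers(0, 2, size=(6, 10)))
print(k, list(nx.connected_components(G)))  # one connected network at this k
```

One could then detect communities in G (see Sec. <ref>) to extract sets of particles with similar dynamic behavior, as in the study described above.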
See <cit.> for additional studies that used membership in cycles of length up to l = 7 for the construction of particle-property networks. One can also construct particle-property networks from data that do not rely on knowledge of contact or force networks. For example, <cit.> studied deformation in sand subject to plane-strain compression using measurements of grain-scale displacements from digital image correlations (DICs) <cit.>. From these data, they constructed kinematic networks, a type of network that arises from some measurement of motion (such as displacement or a rotation) over a small time increment. <cit.> considered each observation grid point of digital images of a sample to be a node, and they placed edges between nodes with similar displacement vectors during a small axial-strain increment. This yields a collection of (undirected and unweighted) time-ordered kinematic networks. In another study, <cit.> calculated particle rotations and displacements for triaxial compression tests on sand using x-ray micro-tomography scanning <cit.>. They generated an ordered set of networks from these data for several strain steps by treating each grain as a node and linking nodes with similar kinematics during the specified interval. More specifically, they represented the displacements and rotations of each particle as points in a state space, and they connected particles that are nearby in that state space according to Euclidean distance. In the network that they constructed, each particle is adjacent to k nearest neighbors in state space, where k is as small as possible so that the (unweighted and undirected) network is connected. Various notions of what it means to be “similar”, and thus how to quantitatively define edges, are discussed in <cit.>. To probe the collective dynamics of interacting groups of particles for the network in each strain step, <cit.> detected communities corresponding to mesoscale regions in the material that exhibit similar dynamic behavior. Calculating the mean shortest-path length between pairs of particles in the same community yields a potentially important intermediate spatial scale of a granular system (which they concluded is consistent with the shear-band diameter). For each strain step, they computed a variant of closeness centrality — similar to the one in Sec. <ref>, but not exactly the same — for each particle in the corresponding network, and they observed that particles with large closeness centrality localize in the region of the shear band early in loading (and, in particular, before the shear band develops). This study highlights the potential of network analysis to provide early warning of regions of failure in particulate systems. Methods from nonlinear time-series analysis have also been used in network-based studies of stick-slip dynamics in a granular packing sheared by a slider <cit.>. Using so-called phase-space networks (see <cit.> for a description of them) to construct networks from measurements of a slider time series, <cit.> associated network communities with slip events.

§.§ Comparing and contrasting different network representations and approaches

Network representations of granular and other particulate systems, in combination with methods from network science and related disciplines, provide a plethora of ways to analyze granular materials.
For these tools to be optimally useful in furthering understanding of the physics of these systems, it is also important to draw connections between the many existing approaches. The application of network-based frameworks to the study of granular matter is relatively young, and little work thus far has focused specifically on exploring such connections, but it is crucial to determine (1) how conclusions from different approaches relate to one another and (2) how similarities and differences in these conclusions depend on the system being studied. In this short subsection, we point out a few relationships and places to begin thinking about such questions. First, it is important to consider the network representation itself. We have discussed several different representations in this review, and some are more related to each other than others. Broadly speaking, one class of granular networks tries to encode the physical structure of a material — in the sense that edges exist only when there is an inter-particle contact — at a given point in an experiment or simulation. Such networks include contact networks (see Sec. <ref>) and force-weighted networks (see Sec. <ref>). One can obtain a contact network from an associated force-weighted network by discarding the edge weights and keeping information only about connectivity. Force-weighted networks thus contain much more information, and they allow one to investigate phenomena — such as force-chain organization — that may not be possible to probe with the contact network alone. However, it is still important to develop tools for the analysis of contact networks and understand what phenomena arise from features of the connectivity alone (or what one can learn and explain from connectivity alone), as force information may not always be available or may not be necessary to understand certain behaviors of a granular system. Generalizing some quantities (e.g., betweenness centralities) from unweighted networks to weighted networks also involves choices, and it is often desirable to conduct investigations that have as few of these potentially confounding factors as possible. Other types of granular networks do not encode physical connectivity, but instead directly represent something about the dynamics or changes that occur in a system during an experiment or simulation. Examples of such networks include broken-link networks (see Sec. <ref>) and particle-property networks (see Sec. <ref>). These different classes of network representations offer distinct ways of studying granular materials, and utilizing each of them should improve understanding of these systems. For a given network representation, it is reasonable to expect that conclusions that arise from similar network quantities or methods of analysis are related to one another, whereas conclusions that result from tools designed to probe very different kinds of organization in a network provide rather different information about the underlying granular system <cit.>. For example, some studies have suggested that in deforming granular materials, results based on calculations of clustering coefficients are similar to those from studying 3-cycles. This is intuitively reasonable, given that calculating a local clustering coefficient yields one type of local 3-cycle density. We have also observed that conclusions drawn from examinations of small subgraphs in a deforming granular system may be related to whether or not those subgraphs contain cycles of certain lengths. As we discussed in Sec.
<ref>, another way to draw connections between different approaches is to consider the spatial, topological, or other scales that are probed by the different approaches. For instance, in a force-weighted network, node strength is a particle-scale property, and it encompasses only very local information about a granular material. However, granular systems exhibit collective organization and dynamics on several larger scales that may be difficult to understand by exclusively computing local measures and distributions of such measures. To obtain an understanding of larger-scale structures, it is necessary to also employ different methods, such as community detection, persistent homology, and the examination of conformation subgraphs that are composed of more than just a single particle. Utilizing such approaches has provided insights into force-chain structure, shear band formation, and reconfiguration in granular systems under load that one may not be able to obtain by considering only local network quantities. It is thus important to continue to use and develop methods to analyze granular networks across multiple scales, as doing so can provide important and new information about a system. Finally, we note that even a single approach, depending on how it is used, can provide multiple types of information about a granular network. A good illustration of this is community detection. In Sec. <ref>, for example, we saw that using different null models in modularity maximization allows one to probe rather different types of mesoscale architecture in granular force networks. We look forward to forthcoming studies that directly compare the results and assumptions in different approaches (both network-based and traditional ones) and different network representations of granular and other particulate systems. Conducting principled investigations into how conclusions from various network-based approaches are related is indeed an important direction for future work.

§.§ Limitations and practicalities of simulations and experiments

Network-based studies of granular materials have examined inter-particle contact and force data (and associated dynamics, such as in the presence of external loading) from both experiments and simulations. A recent summary of the many experimental techniques available for obtaining data about inter-particle contacts is available in the focus issue <cit.>. Such techniques include laser-sheet scanning <cit.>, photoelasticity <cit.>, x-ray tomography <cit.>, and nuclear magnetic resonance <cit.>. Using each of the first three approaches, it is possible to measure both particle positions and inter-particle forces. If one is careful, it is sometimes possible to measure the forces as vectors (i.e., including both the normal and tangential components), but some techniques or systems do not have sufficient resolution to allow more than scalar or coarse-grained values. Determining the forces also helps experimentalists to confidently construct contact (i.e., unweighted) networks of particulate materials. In deciding whether or not two particles are actually in contact, rather than merely being adjacent in space, it is necessary to perform a detailed study of the effects of thresholding the data <cit.>. Any experimental technique will imperfectly report the presence versus absence of the weakest contacts in a system.
Additionally, because of the difficulty of accessing the interior of granular materials, much more data is available for 2D force networks than for 3D force networks <cit.>. The most widely-used simulation techniques are discrete element methods (DEMs) <cit.>, in which the dynamics of individual particles (usually spheres) are determined by their pairwise interactions under Newton's laws of motion. The normal forces are typically determined from a Hertzian-like contact law (see, e.g., the sidebar in <cit.> for an introduction to Hertzian contacts) via an energy penalty for the overlap of two particles. The tangential (frictional) forces are most commonly modeled using the Cundall–Strack method <cit.> of a spring and a dashpot, but they have also been modeled via the surface roughness created by a connected set of smaller spheres <cit.>. For a given application, it is not known whether these simplified models capture all of the salient features of inter-grain contacts, and the situation likely differs for different applications. For example, experimental measurements of sound propagation in photoelastic disks <cit.> suggest that the amplitude of sound waves may be largest along a force chain network, an effect not observed in DEM simulations <cit.>. This is likely a consequence of real particles physically deforming their shape to create an increased contact area through which sound can be transmitted; existing DEM simulations do not account for this effect. Another important use of particle simulations is to provide a means to investigate the robustness of network-based analyses to various amounts of experimental error <cit.>. Simulations provide an important check on experimental uncertainties in the determination of force-weighted networks and other network representations of granular materials. Conversely, network-based approaches provide a means to compare how faithfully simulations are able to reproduce experimental results.

§ OPEN PROBLEMS AND FUTURE DIRECTIONS

We now discuss a few open problems and research directions for which we anticipate important progress in the near future. We divide our comments into three main areas: the construction of different types of networks that encode various physical relationships or other properties (see Sec. <ref>), the application of network analysis to additional types of materials (see Sec. <ref>), and the application of network-based approaches to the design of materials (see Sec. <ref>). Network tools can provide valuable insights — both explanatory and predictive — into particulate materials and their dynamics, and a lot of fascinating research is on the horizon.

§.§ Network representations and computations

To briefly explore the potential of different approaches for constructing granular (and other particulate) networks to provide insights into the physics of granular materials (and particulate matter more generally), we discuss choices of nodes, choices of edges, edge-to-node dual networks, multilayer networks, and annotated networks.[Other ideas that are worth considering include memory networks <cit.>, adaptive networks <cit.>, and various representations of temporal networks <cit.>.]
It is also worth thinking about what calculations to do once one has constructed a network representation of a particulate system, so we also briefly consider the important issue of developing physically-informed methods and diagnostics for network analysis.

§.§.§ Definitions of nodes and edges.

There are many choices — both explicit and implicit — for constructing a network <cit.>, and these choices can impact the physics that one can probe in granular networks <cit.>. Perhaps the most obvious choices lie in how one defines nodes and edges. In the study of granular materials, a common definition is to treat individual particles as nodes and to treat contacts as edges (often with weights from the inter-particle forces). A natural set of open questions lies in how contact network architectures depend on different features of the grains in a system. For example, there have been several recent studies on systems composed of particles that are neither spheres nor disks — including ones with U-shaped particles <cit.>, Z-shaped particles <cit.>, squares and rods <cit.>, dimers and ellipses <cit.>, and others <cit.>. It would be interesting to build network representations of these systems, examine how different grain geometries affect network organization, and investigate how that organization relates to the mechanical properties of a system <cit.>. It seems particularly important to develop an understanding of which (quantitative and qualitative) aspects of network structure depend on features of grains (such as shape, polydispersity, friction, cohesiveness, and so on <cit.>) and which are more universal. One can also consider defining particulate networks in a variety of other ways. For example, when determining edges and edge weights, one can examine the tangential (rather than, or in addition to, the usual normal) component of the force between two grains. Such extensions may facilitate increasingly detailed investigations into a packing's organization <cit.>. It may also be useful to retain information about both the magnitude and direction of forces when defining edges. One may even wish to construct signed networks, for which edges can take either positive or negative values, thereby conveying further information about the relationship between nodes. In such studies, one can perhaps take advantage of advancements in community-detection techniques, such as by using signed null models <cit.>. Additionally, as we discussed in Sec. <ref>, particle-property networks <cit.> and networks constructed from particle-displacement information <cit.> are other informative ways to build networks for particulate systems. One can also construct edges (and determine edge weights) by incorporating information about inter-grain relationships based on similarities in particle properties such as orientation <cit.> (see Fig. <ref>), coefficient of friction <cit.>, or size <cit.>. Constructing networks whose edges are determined or weighted by inter-particle similarities may be particularly useful for achieving a better understanding of mesoscale physics in polydisperse packings, which are thought to depend on the spatial distributions of particles of different types <cit.>. A perhaps nonintuitive choice is to use a bipartite representation of a granular network, such as the approach used in <cit.>. The above choices for network construction give a grain-centric view of the physics of particulate materials. One can also consider edge-to-node “duals” of such networks to provide a contact-centric perspective.
In a contact-centric approach, one treats contacts between physical objects as nodes and the objects themselves as edges between contacts. (Compare this idea to the notion of a line graph <cit.> of a network G.) A contact-centric network approach was used recently in the study of nanorod dispersions <cit.>. Contacts between rods were treated as nodes, and the effective conductance of each rod was treated as a weighted edge. The treatment of a grain or other physical object as an edge rather than as a node is also a particularly appealing way to describe networks of fibers in both human-made and natural systems. Recent examples of such fiber networks that have benefited from network analysis include collagen networks <cit.> (see Fig. <ref>), fibrin networks <cit.>, and axonal-fiber networks <cit.>. Another way to study granular systems (especially porous ones) is to consider a network constructed from the pore space <cit.>, in which pores (i.e., empty volumes between contacting grains) are nodes and throats (i.e., flow pathways that connect pores) are edges (which can be weighted in various ways). Conveniently, there are several methods to precisely determine pores and grains <cit.>. Studying pore networks is a common way to examine flow through porous materials or to understand the responses of granular materials to external stresses, but only more recently have such networks been studied explicitly from a network-science perspective. See <cit.> for some recent examples, and see <cit.> for a study of force chains in porous media. Given how pore networks are formulated, we expect that they can also be studied using PH (see Sec. <ref>).

§.§.§ Multilayer networks.

When considering different ways to construct a network, it is also natural to ask whether it is beneficial to combine two or more approaches in an integrated way to more accurately (and elaborately) represent a particulate system as a network. Ideally, doing network analysis using a more complicated representation will also lead to improved physical understanding. One way to incorporate heterogeneous node types, multiple types of relationships between nodes, and time-dependence into network representations is with multilayer networks <cit.>. Recall that <cit.> (see Sec. <ref>) studied one type of multilayer community structure in a compressed granular material. Another type of multilayer network that may be useful for the study of particulate systems is the multiplex network, in which one can encode different types of relationships between nodes in different layers of a network. For example, one can construct a multiplex network in which two particles are linked both by normal force-weighted contacts and also according to another relationship (such as the tangential force between them, their contact angle, or by a measure of similarity of one or several particle-properties, as discussed in Sec. <ref>). One can also envision using multilayer networks to study particulate systems with multiple types of particles. For example, if a system consists of particles of different shapes or sizes, one possibility is that each particle of a given shape (or size) is in a different layer, intralayer edges represent interactions between particles of the same type, and interlayer edges represent interactions between particles of different types. Another possibility is to let each layer represent a time window, perhaps with intralayer edges representing a mean interaction strength during that window.
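As a toy rendering of the multiplex idea just described, the following sketch (using networkx, with hypothetical contact data) stores one layer weighted by normal forces and a second layer weighted by tangential forces over the same set of particles:

```python
import networkx as nx

# Hypothetical contacts, each carrying a normal and a tangential force.
contacts = {(0, 1): {"normal": 1.3, "tangential": 0.2},
            (1, 2): {"normal": 0.4, "tangential": 0.1},
            (2, 0): {"normal": 0.9, "tangential": 0.4}}

# A simple stand-in for a two-layer multiplex network: one graph per layer,
# with the same particles (nodes) appearing in every layer.
multiplex = {layer: nx.Graph() for layer in ("normal", "tangential")}
for (i, j), forces in contacts.items():
    for layer, G in multiplex.items():
        G.add_edge(i, j, weight=forces[layer])

# Layer-resolved diagnostics, e.g., node strength in each layer.
for layer, G in multiplex.items():
    print(layer, dict(G.degree(weight="weight")))
```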
The study of multilayer networks is one of the most popular areas of network science, and we expect that such networks will be very illuminating for studies of particulate matter.

§.§.§ Annotated graphs.

The network-construction techniques that we have been discussing throughout this review represent data in the form of an adjacency matrix (for an ordinary graph) or an adjacency tensor (for a multilayer network) <cit.>. However, it may be desirable to encode not only relationships between grains, but also features of the grains and/or their interactions. One option is to use annotated graphs (one can also use multilayer networks) to encode inter-node relationships, individual-node properties, and individual-edge properties <cit.>. One can use annotated graphs — also sometimes called labeled graphs or tagged graphs <cit.> — to study properties of interaction patterns (such as force-weighted contacts) that may be driven by local particle properties (e.g., size, shape, spatial position in a material, membership in cycles or force chains, and so on). Available tools for annotated graphs include clustering techniques in the form of stochastic block models (SBMs) that combine information from both connectivity and annotations to infer densely-connected sets of nodes (e.g., including when there is sparse or missing data) <cit.>.

§.§.§ Beyond pairwise interactions.

It is also desirable to explicitly examine interactions between three or more particles, rather than restricting one's study to representations of pairwise interactions. We discussed this idea briefly in Sec. <ref> in the context of simplicial complexes <cit.>, and we note that one can also encode edges among three or more nodes by using hypergraphs <cit.>.

§.§.§ Physically-informed network calculations.

In addition to exploring different choices of how to construct particulate networks, it is also important to consider ways to generalize the tools of network science to incorporate physical constraints <cit.>. A natural way to begin is to build null models of spatially-embedded graphs that obey simple mechanical constraints like force balance <cit.>. One can also develop network diagnostics that incorporate physical geometry, spatial embedding, and latent spatial structure of networks <cit.>. See <cit.> for an example of a kinetic approach. It should also be useful to incorporate ideas of flow and other forms of dynamics into community detection <cit.>. Additionally, it is desirable to develop methods to more explicitly characterize the relationship between a network's structure and the geometry of a space in which it is embedded <cit.> (as well as latent geometry) and to use such techniques to better understand the plurality of packings that are consistent with a single force distribution via a force-network ensemble, which is the space of allowed force configurations for a fixed contact geometry <cit.>. We also point out that a spatial embedding can induce correlations between network diagnostics <cit.>, and it is therefore important to develop a better understanding of the extent to which this occurs in networks that arise from particulate systems.

§.§ Beyond granular materials

Although we have focused primarily on network analysis of the canonical type of granular systems, which consist of discrete, macroscopic particles that interact via contact forces, one can potentially also use network-based approaches to characterize granular materials with more complex interactions as well as soft materials more broadly <cit.>.
As reviewed recently in <cit.>, these materials include colloids, glasses, polymers, and gels. In these systems (and others), the particles (or other entities) can interact via various (and often complicated) means.[One can also examine particulate networks of hard materials that admit Hamiltonian descriptions <cit.>.] For example, system components can have attractive and/or repulsive long-range interactions <cit.>, can be cohesive <cit.>, and/or can interact with one another via chemical gradients <cit.>, electric charges <cit.>, or through other molecular or mechanical processes <cit.>. In each of these cases, interaction strengths can yield edge weights, either as the sole relationship studied between entities in a graph or as one of several inter-entity relationships in a multilayer network. One particularly interesting avenue for future work may be to study polymer and fiber networks <cit.>, which are important in biological systems. For example, in biopolymer assemblies, cross-linking can glue filaments together into large-scale web-like structures (e.g., as in the cytoskeleton of a cell) <cit.>. Such cross-linked actin filaments are critical for cellular function and integrity <cit.>, and it is thus important to understand the structural organization of these kinds of networks, their mechanical properties, and how force transmission is regulated in them <cit.>. Indeed, there has already been some work examining network representations of gels and polymers, and employing graph-theoretic analyses to quantify the structural properties of these systems (e.g., <cit.>). One can also use network analysis to study systems at smaller spatial scales. Because traditional network-based approaches are agnostic to the physical scale of a system, off-the-shelf calculations and algorithms are directly portable to microscale, nanoscale, and even smaller-scale systems <cit.>. However, despite the technical portability of common tools, the investigation of much smaller-scale systems should benefit from the extension of classical network-based tools in ways that incorporate additional underlying physics. For example, we have described extensions of network tools to assimilate ideas and constraints from classical physics (e.g., spatial embeddedness) <cit.>. For such investigations, we expect that ideas from the study of random geometric graphs (RGGs) and their extensions will be helpful <cit.>. One can also consider employing ideas that take into account principles from quantum physics <cit.> or other areas. The study of quantum networks is a new and exciting area of network science, and there are many opportunities for progress <cit.>.

§.§ Implications for material design

As is the case with mathematics more generally, network analysis gives a flexible approach for studying many systems, because it is agnostic to many details of their physical (or other) nature <cit.>. Such versatility supports the application of network-science techniques to both living and non-living materials to examine the architectures and dynamics of these systems and to gain insights into relationships between structure and function. The tools and approaches of network science also have the potential to inform the design of new materials. For example, it should be possible to use network theory (e.g., via the tuning of a system's network architecture) to provide guidance for how to engineer a material to exhibit specific mechanical, electrical, or other properties.
Material design has become increasingly popular with recent advances in the study and development of metamaterials <cit.>. Metamaterials can take advantage of precisely-defined component shapes, geometries, orientations, arrangements, and connectivity patterns (rather than specific material and physical characteristics of individual units) to produce tailored mechanical <cit.>, acoustic <cit.>, and electromagnetic <cit.> properties. The control of a single unit or component is relatively straightforward, but the question of how to link many components in ways that yield complex material properties is a very challenging one. Approaches that use ideas from network science have the potential to offer guidance for constructing patterns of material units that support desired bulk properties. There are likely many ways to use network-based approaches to inform the design of new materials. One reasonable possibility is to employ evolutionary and genetic computer algorithms <cit.> and other tools from algorithmic game theory <cit.>. For example, the combination of multi-objective functions and Pareto optimality <cit.> can offer a targeted path through the space of possible network architectures (i.e., through a network morphospace <cit.>). If exact simulations are not computationally tractable, one can use machine-learning techniques to offer fast estimates of material properties and behavior <cit.>. One can perhaps begin with a single material network structure that is physically realizable and then rewire the initial network architecture with a cost function that drives the system towards an arrangement that enhances a desired property. One can perform such a rewiring explicitly along Pareto-optimal fronts using a set of rewiring rules that preserve physical constraints and laws. This approach, which builds on prior applications to other types of spatially-embedded networks <cit.>, selects for physically-feasible network designs that purposely maximize specific properties. A network analysis of these “evolved” (and evolving) networks may help elucidate relationships between structural features of system architecture (e.g., clustering-coefficient values) and material properties (e.g., stability under load), providing a link between structure and function. Such an evolutionary-design approach can complement recent efforts to identify optimal shapes with which to construct a packing <cit.>, to design rules with which to pack those shapes <cit.>, and to construct “allosteric materials” with specific functionalities via evolution according to fitness functions <cit.>. One set of problems for which network-based tools may be useful concerns rigidity and mechanical responses of disordered material networks, which have been studied previously using a rigidity-percolation framework <cit.>. In terms of material design, a particularly interesting line of future work may be to use network analysis in conjunction with methods such as tuning-by-pruning, in which individual bonds are selectively removed from a disordered material network to engineer a specific property into a system <cit.>. For example, beginning with simulated, disordered spring networks (derived from jammed particle packings), <cit.> used tuning-by-pruning to design networks with different ratios of the shear modulus to the bulk modulus. Motivated by allosteric responses in proteins, <cit.> developed an approach that allows careful control of local mechanical responses in disordered elastic networks.
<cit.> studied three different spring networks and continuously tuned their mechanical rigidity (measured by parameters such as the distance above isostaticity) to examine the effect of such tuning on material failure. They observed that, for a fixed amount of disorder (which the authors measured with a scalar quantity, following the approach in <cit.>), the width of the failure zone in a stressed material increases as the rigidity decreases. Recently, <cit.> studied a model of amorphous networks that incorporates angle-bending forces, and they employed pruning-based methods to design auxetic metamaterials. In light of these findings, it is natural to ask which network quantities and methods may be related to the above types of global or local mechanical properties, and whether they can inform which bonds to remove (or rearrange <cit.>) to invoke particular types of functional responses.

In developing a network-based approach for designing and building new materials, it is desirable to capitalize on the ability of network analysis to quantify multiscale structures (with respect to space, time, and network architecture) in a wide variety of systems, regardless of the exact details of their composition. For example, network analysis has revealed mesoscale architectures that are often crucial for determining material properties in disordered media, and such heterogeneities also appear to be important for biological materials, including networks of collagen fibers <cit.>, tissues and tendons <cit.>, muscle fibers <cit.>, and axonal fibers <cit.>. Tools from network science should be useful (and, in principle, flexible) for designing nontrivial structural organizations that yield desired material functions. One can imagine using a network-theoretic framework to design localized, mesoscale, and/or system-level properties; to design and manipulate human-made and natural (including biological) materials; and to precisely control both static and dynamic material properties.
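As one example of such a mesoscale analysis, the sketch below partitions a force-weighted network into communities. The granular studies discussed in this review typically maximize modularity with physically motivated null models (e.g., via a generalized Louvain method); here, networkx's greedy modularity heuristic serves only as a rough, off-the-shelf stand-in, and the input graph is a random geometric graph with random edge weights rather than real contact data.

# Minimal sketch: mesoscale community structure in a force-weighted network.
# The input below is synthetic; replace it with a measured contact network.
import random

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

random.seed(0)
G = nx.random_geometric_graph(60, 0.25, seed=0)  # spatially embedded placeholder
for u, v in G.edges():
    G[u][v]["weight"] = random.random()          # stand-in for a contact force

communities = list(greedy_modularity_communities(G, weight="weight"))
print("number of communities:", len(communities))
print("community sizes:", sorted((len(c) for c in communities), reverse=True))

In a granular setting, the resulting node sets would be candidate mesoscale structures (e.g., force-chain-like clusters), whose sizes and geometries one could then track under compression, shear, or other perturbations.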
§ CONCLUSIONS

Network science is an interdisciplinary subject, drawing on methods from physics, mathematics, statistics, computer science, social science, and many other disciplines, that has been used successfully to help understand the structure and function of many complex systems. Much recent work on networks has yielded fascinating insights into granular materials, which consist of collections of discrete, macroscopic particles whose contact interactions give rise to many interesting behaviors and to intricate organization on multiple spatial and temporal scales. These insights have increased scientific understanding of the structure and dynamics of the heterogeneous material architecture that manifests in granular matter (as well as of the network-based approaches that quantify such architecture) and of the response of granular systems to external perturbations such as compression, shear, tapping, and tilting. In this paper, we have reviewed the increasingly fertile intersection of network science and granular materials. Future efforts should help provide a better understanding of the physics of particulate matter more generally, elucidate the roles of mesoscale interaction patterns in mechanical failure, inform the design of new materials with desired properties, and further scientific understanding of numerous important problems and applications in granular physics, soft-matter physics, and even biophysics.

§ ACKNOWLEDGEMENTS

We thank Alejandro J. Martínez, Ann E. Sizemore, Konstantin Mischaikow, Jen Schwarz, and an anonymous referee for helpful comments. We also thank our collaborators, students, and mentors, who have shaped our views on the subjects of this review. KD is grateful for support from the National Science Foundation (DMR-0644743, DMR-1206808) and the James S. McDonnell Foundation. LP is grateful to the National Science Foundation for a Graduate Research Fellowship. DSB is grateful to the Alfred P. Sloan Foundation, the John D. and Catherine T. MacArthur Foundation, the Paul G. Allen Foundation, and to the National Science Foundation (PHY-1554488). The content is solely the responsibility of the authors and does not necessarily represent the official views of any of the funding agencies.

§ REFERENCES

H M Jaeger, S R Nagel, and R P Behringer. Granular solids, liquids, and gases. Rev Mod Phys, 68(4):1259–1273, 1996.
J Duran. Sands, Powders, and Grains: An Introduction to the Physics of Granular Materials. Springer-Verlag, 1999.
A Mehta. Granular Physics. Cambridge University Press, 2007.
S V Franklin and M D Shattuck, editors. Handbook of Granular Materials. CRC Press, 2015.
B Andreotti, Y Forterre, and O Pouliquen. Granular Media: Between Solid and Fluid. Cambridge University Press, 2013.
S R Nagel. Experimental soft-matter science. Rev Mod Phys, 89(2):025002, 2017.
P Mort, J N Michaels, R P Behringer, C S Campbell, L Kondic, M Kheiripour Langroudi, M Shattuck, J Tang, G I Tardos, and C Wassgren. Dense granular flow — A collaborative study. Powder Technol, 284:571–584, 2015.
C-H Liu, S R Nagel, D A Schecter, S N Coppersmith, S Majumdar, O Narayan, and T A Witten. Force fluctuations in bead packs. Science, 269(5223):513–515, 1995.
D M Mueth, H M Jaeger, and S R Nagel. Force distribution in a granular medium. Phys Rev E, 57(3):3164–3169, 1998.
S N Coppersmith, C-H Liu, S Majumdar, O Narayan, and T A Witten. Model for force fluctuations in bead packs. Phys Rev E, 53(5):4673–4685, 1996.
P Claudin, J-P Bouchaud, M E Cates, and J P Wittmer. Models of stress fluctuations in granular media. Phys Rev E, 57(4):4441–4457, 1998.
M G Sexton, J E S Socolar, and D G Schaeffer. Force distribution in a scalar model for noncohesive granular material. Phys Rev E, 60(2):1999–2008, 1999.
J E S Socolar, D G Schaeffer, and P Claudin. Directed force chain networks and stress response in static granular materials. Eur Phys J E, 7(4):353–370, 2002.
J F Peters, M Muthuswamy, J Wibowo, and A Tordesillas. Characterization of force chains in granular material. Phys Rev E, 72(4):041307, 2005.
D Howell, R P Behringer, and C Veje. Stress fluctuations in a 2D granular Couette experiment: A continuous transition. Phys Rev Lett, 82(26):5241–5244, 1999.
T S Majmudar and R P Behringer. Contact force measurements and stress-induced anisotropy in granular materials. Nature, 435:1079–1082, 2005.
J Geng, D Howell, E Longhi, R P Behringer, G Reydellet, L Vanel, E Clément, and S Luding. Footprints in sand: The response of a granular material to local perturbations. Phys Rev Lett, 87(3):035506, 2001.
F Radjai, D E Wolf, M Jean, and J-J Moreau. Bimodal character of stress transmission in granular packings. Phys Rev Lett, 80(1):61–64, 1998.
M E Cates, J P Wittmer, J-P Bouchaud, and P Claudin. Jamming and static stress transmission in granular materials. Chaos, 9(3):511–522, 1999.
D S Bassett, E T Owens, K E Daniels, and M A Porter. Influence of network topology on sound propagation in granular materials. Phys Rev E, 86(4):041306, 2012.
P Richard, M Nicodemi, R Delannay, P Ribière, and D Bideau. Slow relaxation and compaction of granular systems. Nat Mater, 4(2):121–128, 2005.
E T Owens and K E Daniels. Sound propagation and force chains in granular materials. EPL (Europhysics Letters), 94(5):54005, 2011.
A Smart, P Umbanhowar, and J Ottino. Effects of self-organization on transport in granular matter: A network-based approach. EPL (Europhysics Letters), 79(2):24002, 2007.
A Gervois, M Ammi, T Travers, D Bideau, J-C Messager, and J-P Troadec. Importance of disorder in the conductivity of packings under compression. Physica A, 157(1):565–569, 1989.
G Combe, V Richefeu, M Stasiak, and A P F Atman. Experimental validation of a nonextensive scaling law in confined granular media. Phys Rev Lett, 115(23):238301, 2015.
R P Behringer, D Bi, B Chakraborty, A Clark, J Dijksman, J Ren, and J Zhang. Statistical properties of granular materials near jamming. Journal of Statistical Mechanics: Theory and Experiment, 2014(6):P06004, 2014.
D S Bassett, E T Owens, M A Porter, M L Manning, and K E Daniels. Extraction of force-chain network architecture in granular materials using community detection. Soft Matter, 11(14):2731–2744, 2015.
M Herrera, S McCarthy, S Slotterback, E Cephas, W Losert, and M Girvan. Path to fracture in granular flows: Dynamics of contact networks. Phys Rev E, 83(6):061303, 2011.
M E J Newman. Networks: An Introduction. Oxford University Press, 2010.
B Bollobás. Modern Graph Theory. Springer-Verlag, 1998.
S Fortunato and D Hric. Community detection in networks: A user guide. Phys Rep, 659:1–44, 2016.
M A Porter, J-P Onnela, and P J Mucha. Communities in networks. Not Amer Math Soc, 56(9):1082–1097, 1164–1166, 2009.
S Fortunato. Community detection in graphs. Phys Rep, 486(3–5):75–174, 2010.
P Csermely, A London, L-Y Wu, and B Uzzi. Structure and dynamics of core–periphery networks. Journal of Complex Networks, 1(2):93–123, 2013.
M E J Newman. Complex systems: A survey. Am J Phys, 79:800–810, 2011.
M Kivelä, A Arenas, M Barthelemy, J P Gleeson, Y Moreno, and M A Porter. Multilayer networks. Journal of Complex Networks, 2(3):203–271, 2014.
M Barthélemy. Spatial networks. Phys Rep, 499(1):1–101, 2011.
R Cruz Hidalgo, C U Grosse, F Kun, H W Reinhardt, and H J Herrmann. Evolution of percolating force chains in compressed granular media. Phys Rev Lett, 89(20):205501, 2002.
R Candelier, O Dauchot, and G Biroli. Building blocks of dynamical heterogeneities in dense granular media. Phys Rev Lett, 102(8):088001, 2009.
A Mehta, G C Barker, and J M Luck. Heterogeneities in granular dynamics. Proc Natl Acad Sci, 105(24):8244–8249, 2008.
A S Keys, A R Abate, S C Glotzer, and D J Durian. Measurement of growing dynamical length scales and prediction of the jamming transition in a granular material. Nat Phys, 3(4):260–264, 2007.
P J Digby. The effective elastic moduli of porous granular rocks. J Appl Mech, 48(4):803–808, 1981.
B Velický and C Caroli. Pressure dependence of the sound velocity in a two-dimensional lattice of Hertz–Mindlin balls: Mean-field description. Phys Rev E, 65(2):021307, 2002.
J D Goddard. Nonlinear elasticity and pressure-dependent wave speeds in granular media. Proc R Soc A, 430(1878):105–131, 1990.
H A Makse, N Gland, D L Johnson, and L M Schwartz. Why effective medium theory fails in granular materials. Phys Rev Lett, 83(24):5070–5073, 1999.
C Goldenberg and I Goldhirsch. Friction enhances elasticity in granular solids. Nature, 435:188–191, 2005.
A Smart and J M Ottino. Granular matter and networks: Three related examples. Soft Matter, 4(11):2125–2131, 2008.
P Holme and J Saramäki. Temporal networks. Phys Rep, 519(3):97–125, 2012.
H Sayama, I Pestov, J Schmidt, B J Bush, C Wong, J Yamanoi, and T Gross. Modeling complex systems with adaptive networks. Computers & Mathematics with Applications, 65(10):1645–1664, 2013.
M E J Newman, A-L Barabási, and D J Watts. The Structure and Dynamics of Networks. Princeton University Press, 2006.
C Giusti, R Ghrist, and D S Bassett. Two's company, three (or more) is a simplex: Algebraic-topological tools for understanding higher-order structure in neural data. J Comput Neurosci, 41(1):1–14, 2016.
M A Porter and J P Gleeson. Dynamical systems on networks: A tutorial. In Frontiers in Applied Dynamical Systems: Reviews and Tutorials, volume 4. Springer-Verlag, 2016.
D Liben-Nowell and J Kleinberg. Tracing information flow on a global scale using internet chain-letter data. Proc Natl Acad Sci, 105(12):4633–4638, 2008.
J Scott. Social Network Analysis. Sage Publications, 2012.
T R Hurd, J P Gleeson, and S Melnik. A framework for analyzing contagion in assortative banking networks. PLoS One, 12(2):e0170579, 2017.
O Sporns. Structure and function of complex brain networks. Dialogues Clin Neurosci, 15(3):247–262, 2013.
D S Bassett and O Sporns. Network neuroscience. Nat Neurosci, 20(3):353–364, 2017.
D M Walker and A Tordesillas. Topological evolution in dense granular materials: A complex networks perspective. Int J Solids Struct, 47(5):624–639, 2010.
A Barrat, M Barthélemy, R Pastor-Satorras, and A Vespignani. The architecture of complex weighted networks. Proc Natl Acad Sci, 101(11):3747–3752, 2004.
M E J Newman. Analysis of weighted networks. Phys Rev E, 70(5):056131, 2004.
S Alexander. Amorphous solids: Their structure, lattice dynamics and elasticity. Phys Rep, 296(2–4):65–236, 1998.
M Wyart. On the rigidity of amorphous solids. Annales de Physique, 30(3):1, 2005.
A J Liu and S R Nagel. The jamming transition and the marginally jammed solid. Annu Rev Condens Matter Phys, 1(1):347–369, 2010.
M van Hecke. Jamming of soft particles: Geometry, mechanics, scaling and isostaticity. J Phys Condens Matter, 22(3):033101, 2010.
N Masuda, M A Porter, and R Lambiotte. Random walks and diffusion on networks. Phys Rep, 716–717:1–58, 2017.
D Krioukov, F Papadopoulos, M Kitsak, A Vahdat, and M Boguñá. Hyperbolic geometry of complex networks. Phys Rev E, 82(3):036106, 2010.
S Skiena. The Algorithm Design Manual. Springer-Verlag, 2008.
D J Watts and S H Strogatz. Collective dynamics of 'small-world' networks. Nature, 393(6684):440–442, 1998.
V Latora and M Marchiori. Efficient behavior of small-world networks. Phys Rev Lett, 87(19):198701, 2001.
M Rubinov and O Sporns. Complex network measures of brain connectivity: Uses and interpretations. NeuroImage, 52(3):1059–1069, 2009.
V Latora and M Marchiori. Economic small-world behavior in weighted networks. Eur Phys J B, 32:249–263, 2003.
E Estrada and N Hatano. Communicability angle and the spatial efficiency of networks. SIAM Review, 58(4):692–715, 2016.
R Arévalo, I Zuriguel, and D Maza. Topology of the force network in the jamming transition of an isotropically compressed granular packing. Phys Rev E, 81(4):041302, 2010.
J L Gross and J Yellen. Graph Theory and Its Applications. CRC Press, 2005.
T Kavitha, C Liebchen, K Mehlhorn, D Michail, R Rizzi, T Ueckerdt, and K A Zweig. Cycle bases in graphs: Characterization, algorithms, complexity, and applications. Comp Sci Rev, 3(4):199–243, 2009.
C Griffin. Graph theory: Penn State Math 485 lecture notes. Webpage, 2017. URL <http://www.personal.psu.edu/cxg286/Math485.pdf>. (This manuscript includes contributions by S Shekhar.)
J D Horton. A polynomial-time algorithm to find the shortest cycle basis of a graph. SIAM Journal on Computing, 16(2):358–366, 1987.
K Mehlhorn and D Michail. Implementing minimum cycle basis algorithms. J Exp Algorithmics, 11, 2007.
D M Walker, A Tordesillas, and G Froyland. Mesoscale and macroscale kinetic energy fluxes from granular fabric evolution. Phys Rev E, 89(3):032205, 2014.
D M Walker, A Tordesillas, N Brodu, J A Dijksman, R P Behringer, and G Froyland. Self-assembly in a near-frictionless granular material: Conformational structures and transitions in uniaxial cyclic compression of hydrogel spheres. Soft Matter, 11:2157–2173, 2015.
A G Smart and J M Ottino. Evolving loop structure in gradually tilted two-dimensional granular packings. Phys Rev E, 77(4):041307, 2008.
R Arévalo, I Zuriguel, and D Maza. Topological properties of the contact network of granular materials. Int J Bifurc Chaos, 19(2):695–702, 2009.
R Arévalo, I Zuriguel, S A Trevijano, and D Maza. Third order loops of contacts in a granular force network. Int J Bifurc Chaos, 20(3):897–903, 2010.
A Tordesillas, D M Walker, and Q Lin. Force cycles and force chains. Phys Rev E, 81(1):011302, 2010.
N Rivier. Extended constraints, arches and soft modes in granular materials. J Non-Cryst Solids, 352(42–49):4505–4508, 2006.
M E J Newman. The structure and function of complex networks. SIAM Review, 45(2):167–256, 2003.
A Barrat and M Weigt. On the properties of small-world network models. Eur Phys J B, 13(3):547–560, 2000.
J Saramäki, M Kivelä, J-P Onnela, K Kaski, and J Kertész. Generalizations of the clustering coefficient to weighted complex networks. Phys Rev E, 75(2):027105, 2007.
J-P Onnela, J Saramäki, J Kertész, and K Kaski. Intensity and coherence of motifs in weighted complex networks. Phys Rev E, 71(6):065103, 2005.
B Zhang and S Horvath. A general framework for weighted gene co-expression network analysis. Stat Appl Genet Mol Biol, 4(1):17, 2005.
L C Freeman. A set of measures of centrality based on betweenness. Sociometry, 40(1):35–41, 1977.
M Girvan and M E J Newman. Community structure in social and biological networks. Proc Natl Acad Sci, 99(12):7821–7826, 2002.
E Estrada and J A Rodríguez-Velázquez. Subgraph centrality in complex networks. Phys Rev E, 71(5):056103, 2005.
E Estrada, N Hatano, and M Benzi. The physics of communicability in complex networks. Phys Rep, 514(3):89–119, 2012.
E Estrada and J A Rodríguez-Velázquez. Spectral measures of bipartivity in complex networks. Phys Rev E, 72(4):046105, 2005.
R Milo, S S Shen-Orr, S Itzkovitz, N Kashtan, D Chklovskii, and U Alon. Network motifs: Simple building blocks of complex networks. Science, 298(5594):824–827, 2002.
S S Shen-Orr, R Milo, S Mangan, and U Alon. Network motifs in the transcriptional regulation network of Escherichia coli. Nat Genet, 31(1):64–68, 2002.
R Milo, S Itzkovitz, N Kashtan, R Levitt, S Shen-Orr, I Ayzenshtat, M Sheffer, and U Alon. Superfamilies of evolved and designed networks. Science, 303(5663):1538–1542, 2004.
U Alon. Network motifs: Theory and experimental approaches. Nat Rev Genet, 8(6):450–461, 2007.
F Schreiber and H Schwobbermeyer. Frequency concepts and pattern detection for the analysis of motifs in networks. Transactions on Computational Systems Biology, III:89–104, 2005.
S Wernicke. Efficient detection of network motifs. IEEE/ACM Trans Comput Biol Bioinform, 3(4):347–359, 2006.
J A Grochow and M Kellis. Network motif discovery using sub-graph enumeration and symmetry-breaking. RECOMB, pages 92–106, 2007.
S Omidi, F Schreiber, and A Masoudi-Nejad. MODA: An efficient algorithm for network motif discovery in biological networks. Genes Genet Syst, 84(5):385–395, 2009.
Z R Kashani, H Ahrabian, E Elahi, A Nowzari-Dalini, E S Ansari, S Asadi, S Mohammadi, F Schreiber, and A Masoudi-Nejad. Kavosh: A new algorithm for finding network motifs. BMC Bioinformatics, 10:318, 2009.
P V Paulau, C Feenders, and B Blasius. Motif analysis in directed ordered networks and applications to food webs. Sci Rep, 5:11926, 2015.
O Sporns and R Kotter. Motifs in brain networks. PLoS Biol, 2(11):e369, 2004.
X Xu, J Zhang, and M Small. Superfamily phenomena and motifs of networks induced from time series. Proc Natl Acad Sci, 105(50):19601–19605, 2008.
D M Walker, A Tordesillas, M Small, R P Behringer, and C K Tse. A complex systems analysis of stick-slip dynamics of a laboratory fault. Chaos, 24(1):013132, 2014.
D M Walker, A Tordesillas, J Zhang, R P Behringer, E Andò, G Viggiani, A Druckrey, and K Alshibli. Structural templates of disordered granular media. Int J Solids Struct, 54:20–30, 2015.
A Tordesillas, D M Walker, G Froyland, J Zhang, and R P Behringer. Transition dynamics and magic-number-like behavior of frictional granular clusters. Phys Rev E, 86(1):011306, 2012.
T P Peixoto. Bayesian stochastic blockmodeling. arXiv:1705.10225 [stat.ML], 2017.
C Giusti, L Papadopoulos, E T Owens, K E Daniels, and D S Bassett. Topological and geometric measurements of force-chain structure. Phys Rev E, 94(3):032909, 2016.
L Papadopoulos, J G Puckett, K E Daniels, and D S Bassett. Evolution of network architecture in a granular material under compression. Phys Rev E, 94(3):032908, 2016.
D M Walker and A Tordesillas. Taxonomy of granular rheology from grain property networks. Phys Rev E, 85(1):011304, 2012.
A Tordesillas, D M Walker, E Andò, and G Viggiani. Revisiting localized deformation in sand with complex systems. Proc Math Phys Eng Sci, 469(2152), 2013.
D M Walker and A Tordesillas. Examining overlapping community structures within grain property networks. In 2014 IEEE International Symposium on Circuits and Systems (ISCAS), pages 1275–1278, 2014.
D M Walker, A Tordesillas, S Pucilowski, Q Lin, A L Rechenmacher, and S Abedi. Analysis of grain-scale measurements of sand using kinematical complex networks. Int J Bifurc Chaos, 22(12):1230042, 2012.
M E J Newman and M Girvan. Finding and evaluating community structure in networks. Phys Rev E, 69(2):026113, 2004.
M E J Newman. Finding community structure in networks using the eigenvectors of matrices. Phys Rev E, 74(3):036104, 2006.
M Rosvall and C T Bergstrom. Maps of random walks on complex networks reveal community structure. Proc Natl Acad Sci, 105(4):1118–1123, 2008.
A Clauset. Finding local community structure in networks. Phys Rev E, 72(2):026132, 2005.
L G S Jeub, P Balachandran, M A Porter, P J Mucha, and W M Mahoney. Think locally, act locally: The detection of small, medium-sized, and large communities in large networks. Phys Rev E, 91(1):012821, 2015.
Y Y Ahn, J P Bagrow, and S Lehmann. Link communities reveal multiscale complexity in networks. Nature, 466(7307):761–764, 2010.
B H Good, Y A de Montjoye, and A Clauset. Performance of modularity maximization in practical contexts. Phys Rev E, 81(4):046106, 2010.
S Fortunato and M Barthélemy. Resolution limit in community detection. Proc Natl Acad Sci, 104(1):36–41, 2007.
D S Bassett, M A Porter, N F Wymbs, S T Grafton, J M Carlson, and P J Mucha. Robust detection of dynamic community structure in networks. Chaos, 23(1):013142, 2013.
M E J Newman. Modularity and community structure in networks. Proc Natl Acad Sci, 103(23):8577–8582, 2006.
M Bazzi, M A Porter, S Williams, M McDonald, D J Fenn, and S D Howison. Community detection in temporal multilayer networks, with an application to correlation networks. Multiscale Model Simul, 14(1):1–41, 2016.
M Sarzynska, E A Leicht, G Chowell, and M A Porter. Null models for community detection in spatially embedded, temporal networks. Journal of Complex Networks, 4:363–406, 2016.
U Brandes, D Delling, M Gaertler, R Görke, M Hoefer, Z Nikoloski, and D Wagner. On modularity clustering. IEEE Trans on Knowl Data Eng, 20(2):172–188, 2008.
V D Blondel, J L Guillaume, R Lambiotte, and E Lefebvre. Fast unfolding of community hierarchies in large networks. Journal of Statistical Mechanics: Theory and Experiment, 2008(10):P10008, 2008.
L G S Jeub, M Bazzi, I S Jutla, and P J Mucha. A generalized Louvain method for community detection implemented in MATLAB, 2011–2016. URL <https://github.com/GenLouvain/GenLouvain>.
A Lancichinetti and S Fortunato. Consensus clustering in complex networks. Sci Rep, 2:336, 2012.
L G S Jeub, O Sporns, and S Fortunato. Multiresolution consensus clustering in networks. arXiv:1710.02249 [cs.SI], 2017.
S Boccaletti, G Bianconi, R Criado, C I Del Genio, J Gómez-Gardeñes, M Romance, I Sendiña-Nadal, Z Wang, and M Zanin. The structure and dynamics of multilayer networks. Phys Rep, 544(1):1–122, 2014.
P J Mucha, T Richardson, K Macon, M A Porter, and J-P Onnela. Community structure in time-dependent, multiscale, and multiplex networks. Science, 328(5980):876–878, 2010.
M De Domenico, A Solé-Ribalta, E Cozzo, M Kivelä, Y Moreno, M A Porter, S Gomez, and A Arenas. Mathematical formulation of multilayer networks. Phys Rev X, 3(4):041022, 2013.
T G Kolda and B W Bader. Tensor decompositions and applications. SIAM Rev, 51(3):455–500, 2009.
S J Cranmer, E J Menninga, and P J Mucha. Kantian fractionalization predicts the conflict propensity of the international system. Proc Natl Acad Sci, 112(38):11812–11816, 2015.
V Danchev and M A Porter. Neither global nor local: Heterogeneous connectivity in spatial network structures of world migration. Social Networks, in press; <https://doi.org/10.1016/j.socnet.2017.06.003>, 2017.
D S Bassett, N F Wymbs, M A Porter, P J Mucha, J M Carlson, and S T Grafton. Dynamic reconfiguration of human brain networks during learning. Proc Natl Acad Sci, 108(18):7641–7646, 2011.
D S Bassett, M Yang, N F Wymbs, and S T Grafton. Learning-induced autonomy of sensorimotor systems. Nat Neurosci, 18(5):744–751, 2015.
U Braun, A Schafer, H Walter, S Erk, N Romanczuk-Seiferth, L Haddad, J I Schweiger, O Grimm, A Heinz, H Tost, A Meyer-Lindenberg, and D S Bassett. Dynamic reconfiguration of frontal brain networks during executive cognition in humans. Proc Natl Acad Sci, 112(37):11678–11683, 2015.
P Blinder, P S Tsai, J P Kaufhold, P M Knutsen, H Suhl, and D Kleinfeld. The cortical angiome: An interconnected vascular network with noncolumnar patterns of blood flow. Nat Neurosci, 16(7):889–897, 2013.
E Katifori, G J Szöllősi, and M O Magnasco. Damage and fluctuations induce loops in optimal transport networks. Phys Rev Lett, 104(4):048704, 2010.
S H Lee, M D Fricker, and M A Porter. Mesoscale analyses of fungal networks as an approach for quantifying phenotypic traits. Journal of Complex Networks, 5(1):145–159, 2017.
D P Bebber, J Hynes, P R Darrah, L Boddy, and M D Fricker. Biological solutions to transport network design. Proc R Soc Lond B Biol Sci, 274(1623):2307–2315, 2007.
J R Banavar, F Colaiori, A Flammini, A Maritan, and A Rinaldo. Topology of the fittest transportation network. Phys Rev Lett, 84(20):4745–4748, 2000.
M T Gastner and M E J Newman. Optimal design of spatial distribution networks. Phys Rev E, 74(1):016117, 2006.
M Kurant and P Thiran. Extraction and analysis of traffic and topologies of transportation networks. Phys Rev E, 74(3):036114, 2006.
D P Bertsekas. Network Optimization: Continuous and Discrete Models. Athena Scientific, 1998.
R K Ahuja, T L Magnanti, and J B Orlin. Network Flows: Theory, Algorithms, and Applications. Prentice-Hall, Inc., 1993.
T Kaczynski, K Mischaikow, and M Mrozek. Computational Homology. Springer-Verlag, 2004.
H Kesten. What is ... percolation? Not Am Math Soc, 53(5):572–573, 2006.
D Stauffer and A Aharony. Introduction to Percolation Theory. CRC Press, 1994.
A A Saberi. Recent advances in percolation theory and its applications. Phys Rep, 578:1–32, 2015.
S Broadbent and J Hammersley. Percolation processes I. Crystals and mazes. Proc Camb Philos Soc, 53:629–641, 1957.
R Albert and A-L Barabási. Statistical mechanics of complex networks. Rev Mod Phys, 74(1):47–98, 2002.
P Erdős and A Rényi. On random graphs I. Publicationes Mathematicae Debrecen, 6:290–297, 1959.
P Erdős and A Rényi. On the evolution of random graphs. Publications of the Mathematical Institute of the Hungarian Academy of Sciences, 5:17–61, 1960.
D Stauffer. Scaling theory of percolation clusters. Phys Rep, 54(1):1–74, 1979.
S Slotterback, M Mailman, K Ronaszegi, M van Hecke, M Girvan, and W Losert. Onset of irreversibility in cyclic shear of granular packings. Phys Rev E, 85(2):021309, 2012.
L Kondic, A Goullet, C S O'Hern, M Kramár, K Mischaikow, and R P Behringer. Topology of force networks in compressed granular media. EPL (Europhysics Letters), 97(5):54001, 2012.
M Kramár, A Goullet, L Kondic, and K Mischaikow. Persistence of force networks in compressed granular media. Phys Rev E, 87(4):042207, 2013.
M Kramár, A Goullet, L Kondic, and K Mischaikow. Quantifying force networks in particulate systems. Physica D, 283:37–55, 2014.
M Kramár, A Goullet, L Kondic, and K Mischaikow. Evolution of force networks in dense particulate media. Phys Rev E, 90(5):052203, 2014.
S Ardanza-Trevijano, I Zuriguel, R Arévalo, and D Maza. Topological analysis of tapped granular media using persistent homology. Phys Rev E, 89(5):052212, 2014.
L Kondic, M Kramár, L A Pugnaloni, C M Carlevaro, and K Mischaikow. Structure of force networks in tapped particulate systems of disks and pentagons. II. Persistence analysis. Phys Rev E, 93(6):062903, 2016.
L A Pugnaloni, C M Carlevaro, M Kramár, K Mischaikow, and L Kondic. Structure of force networks in tapped particulate systems of disks and pentagons. I. Clusters and loops. Phys Rev E, 93(6):062902, 2016.
S Feng. Percolation properties of granular elastic networks in two dimensions. Phys Rev B, 32(1):510–513, 1985.
C Moukarzel and P M Duxbury. Stressed backbone and elasticity of random central-force systems. Phys Rev Lett, 75(22):4055–4058, 1995.
D Jacobs and M F Thorpe. Generic rigidity percolation: The pebble game. Phys Rev Lett, 75(22):4051–4054, 1995.
E Aharonov and D Sparks. Rigidity phase transition in granular packings. Phys Rev E, 60(6):6890–6896, 1999.
G Lois, J Blawzdziewicz, and C S O'Hern. Jamming transition and new percolation universality classes in particulate systems with attraction. Phys Rev Lett, 100(2):028001, 2008.
T Shen, C S O'Hern, and M D Shattuck. Contact percolation transition in athermal particulate systems. Phys Rev E, 85(1):011308, 2012.
L Kovalcinova, A Goullet, and L Kondic. Percolation and jamming transitions in particulate systems with and without cohesion. Phys Rev E, 92(3):032204, 2015.
S Henkes, D A Quint, Y Fily, and J M Schwarz. Rigid cluster decomposition reveals criticality in frictional jamming. Phys Rev Lett, 116(2):028301, 2016.
M F Thorpe. Rigidity Percolation. Springer-Verlag, 1985.
M F Thorpe and P M Duxbury, editors. Rigidity Theory and Applications. Springer-Verlag, 1999.
L Kovalcinova, A Goullet, and L Kondic. Scaling properties of force networks for compressed particulate systems. Phys Rev E, 93(4):042903, 2016.
R Pastor-Satorras and M-C Miguel. Percolation analysis of force networks in anisotropic granular matter. Journal of Statistical Mechanics: Theory and Experiment, 2012(2):P02008, 2012.
S N Pathak, V Esposito, A Coniglio, and M P Ciamarra. Force percolation transition of jammed granular systems. Phys Rev E, 96(4):042901, 2017.
H Edelsbrunner. Computational Topology: An Introduction. American Mathematical Society, 2010.
G Carlsson. Topology and data. Bull Am Math Soc, 46(2):255–308, 2009.
R Ghrist. Elementary Applied Topology. Createspace, 2014. Available at <https://www.math.upenn.edu/~ghrist/notes.html>.
P Dlotko, M Juda, M Mrozek, and R Ghrist. Distributed computation of coverage in sensor networks by homological methods. Appl Algebr Eng Comm, 23:29–58, 2012.
D Taylor, F Klimm, H A Harrington, M Kramár, K Mischaikow, M A Porter, and P J Mucha. Topological data analysis of contagion maps for examining spreading processes on networks. Nat Commun, 6:7723, 2015.
A Sizemore, C Giusti, and D S Bassett. Classification of weighted networks through mesoscale homological features. Journal of Complex Networks, 5(2):245, 2017.
A Sizemore, C Giusti, A E Kahn, R F Betzel, and D S Bassett. Cliques and cavities in the human connectome. Journal of Computational Neuroscience, <https://doi.org/10.1007/s10827-017-0672-6>, 2017.
N Otter, M A Porter, U Tullmann, P Grindrod, and H A Harrington. A roadmap for the computation of persistent homology. EPJ Data Science, 6(1):17, 2017.
A Patania, F Vaccarino, and G Petri. Topological analysis of data. EPJ Data Science, 6(1):7, 2017.
B J Stolz, H A Harrington, and M A Porter. Persistent homology of time-dependent functional networks constructed from coupled time series. Chaos, 27(4):047410, 2017.
D Kozlov. Combinatorial Algebraic Topology, volume 21. Springer-Verlag, 2007.
V Nanda and R Sazdanović. Simplicial Models and Topological Inference in Biological Systems, pages 109–141. Springer-Verlag, 2014.
G Petri, M Scolamiero, I Donato, and F Vaccarino. Topological strata of weighted complex networks. PLoS One, 8(6):1–8, 2013.
P Dantu. Contribution à l'étude mécanique et géométrique des milieux pulvérulents. In Proceedings of the Fourth International Conference on Soil Mechanics and Foundation Engineering, London, pages 144–148, 1957.
A Drescher and G de Josselin de Jong. Photoelastic verification of a mechanical model for flow of a granular material. Journal of the Mechanics and Physics of Solids, 20(5):337–340, 1972.
S Luding. Stress distribution in static two-dimensional granular model media in the absence of friction. Phys Rev E, 55(4):4720, 1997.
L E Silbert, G S Grest, and J W Landry. Statistics of the contact network in frictional and frictionless granular packings. Phys Rev E, 66(6):061303, 2002.
A Tordesillas. Force chain buckling, unjamming transitions and shear banding in dense granular assemblies. Philos Mag, 87(32):4987–5016, 2007.
T S Majmudar, M Sperl, S Luding, and R P Behringer. Jamming transition in granular systems. Phys Rev Lett, 98(5):058001, 2007.
J H Snoeijer, T J H Vlugt, M van Hecke, and W van Saarloos. Force network ensemble: A new approach to static granular matter. Phys Rev Lett, 92(5):54302, 2004.
J H Snoeijer, T J H Vlugt, W G Ellenbroek, M van Hecke, and J M J van Leeuwen. Ensemble theory for force networks in hyperstatic granular matter. Phys Rev E, 70(6):61306, 2004.
B P Tighe, J H Snoeijer, T J H Vlugt, and M van Hecke. The force network ensemble for granular packings. Soft Matter, 6(13):2908–2917, 2010.
J E Kollmer and K E Daniels. An experimental investigation of the force network ensemble. In Powders and Grains 2017, volume 140, page 02024, 2017.
A J Liu, S R Nagel, W van Saarloos, and M Wyart. The jamming scenario — An introduction and outlook. Oxford University Press, 2011.
S Henkes, M van Hecke, and W van Saarloos. Critical jamming of frictional grains in the generalized isostaticity picture. EPL (Europhysics Letters), 90(1):14003, 2010.
K Shundyak, M van Hecke, and W van Saarloos. Force mobilization and generalized isostaticity in jammed packings of frictional grains. Phys Rev E, 75(1):010301, 2007.
M P Stumpf and M A Porter. Critical truths about power laws. Science, 335(6069):665–666, 2012.
A Tordesillas, P O'Sullivan, D M Walker, and Paramitha. Evolution of functional connectivity in contact and force chain networks: Feature vectors, k-cores and minimal cycles. Comptes Rendus Mécanique, 338(10):556–569, 2010.
P Duxbury, D Jacobs, M Thorpe, and C Moukarzel. Floppy modes and the free energy: Rigidity and connectivity percolation on Bethe lattices. Phys Rev E, 59(2):2084–2092, 1999.
J C Maxwell. On the calculation of the equilibrium and stiffness of frames. Philosophical Magazine Series 4, 27(182):294–299, 1864.
G Laman. On graphs and rigidity of plane skeletal structures. J Eng Math, 4(4):331–340, 1970.
L Asimow and B Roth. The rigidity of graphs. Trans Am Math Soc, 245:279–289, 1978.
H Crapo. Structural rigidity. Structural Topology, 1:26–45, 1979.
E Guyon, S Roux, A Hansen, D Bideau, J-P Troadec, and H Crapo. Non-local and non-linear problems in the mechanics of disordered systems: Application to granular media and rigidity problems. Rep Prog Phys, 53(4):373, 1990.
C F Moukarzel. Isostatic phase transition and instability in stiff granular materials. Phys Rev Lett, 81(8):1634–1637, 1998.
A Tordesillas, Q Lin, J Zhang, R P Behringer, and J Shi. Structural stability and jamming of self-organized cluster conformations in dense granular materials. J Mech Phys Solids, 59(2):265–296, 2011.
A Tordesillas, S Pucilowski, D M Walker, J Peters, and M Hopkins. A complex network analysis of granular fabric evolution in three-dimensions. Dynam Cont Dis Ser B, 19(4–5):417–495, 2012.
D M Walker, A Tordesillas, J Ren, J A Dijksman, and R P Behringer. Uncovering temporal transitions and self-organization during slow aging of dense granular media in the absence of shear bands. EPL (Europhysics Letters), 107(1):18005, 2014.
M Jeng and J M Schwarz. On the study of jamming percolation. J Stat Phys, 131(4):575–595, 2008.
M Jeng and J M Schwarz. Force-balance percolation. Phys Rev E, 81(1):011134, 2010.
L Cao and J M Schwarz. Correlated percolation and tricriticality. Phys Rev E, 86(6):061131, 2012.
J H Lopez, L Cao, and J M Schwarz. Jamming graphs: A local approach to global mechanical rigidity. Phys Rev E, 88(6):062130, 2013.
S Heroy, D Taylor, F B Shi, M G Forest, and P J Mucha. Rigid graph compression: Motif-based rigidity analysis for disordered fiber networks. arXiv:1711.05790 [cond-mat.dis-nn], 2017.
M Oda and H Kazama. Microstructure of shear bands and its relation to the mechanisms of dilatancy and failure of dense granular soils. Géotechnique, 48(4):465–481, 1998.
A Tordesillas, J Zhang, and R Behringer. Buckling force chains in dense granular assemblies: Physical and numerical experiments. Geomech Geoeng, 4(1):3–16, 2009.
K Bagi. On the concept of jammed configurations from a structural mechanics perspective. Granular Matter, 9(1):109–134, 2007.
A Tordesillas and M Muthuswamy. On the modeling of confined buckling of force chains. J Mech Phys Solids, 57(4):706–727, 2009.
M E Cates, J P Wittmer, J-P Bouchaud, and P Claudin. Jamming, force chains, and fragile matter. Phys Rev Lett, 81(9):1841–1844, 1998.
M Muthuswamy and A Tordesillas. How do interparticle contact friction, packing density and degree of polydispersity affect force propagation in particulate assemblies? Journal of Statistical Mechanics: Theory and Experiment, 2006(9):P09003, 2006.
W Kob and J L Barrat. Aging effects in a Lennard-Jones glass. Phys Rev Lett, 78(24):4581–4584, 1997.
A Kabla and G Debregeas. Contact dynamics in a gently vibrated granular pile. Phys Rev Lett, 92(3):35501, 2004.
P J Steinhardt, D R Nelson, and M Ronchetti. Bond-orientational order in liquids and glasses. Phys Rev B, 28(2):784–805, 1983.
R Arévalo, L A Pugnaloni, I Zuriguel, and D Maza. Contact network topology in tapped granular media. Phys Rev E, 87(2):022203, 2013.
E R Nowak, J B Knight, E Ben-Naim, H M Jaeger, and S R Nagel. Density fluctuations in vibrated granular materials. Phys Rev E, 57(2):1971–1982, 1998.
L A Pugnaloni, I Sánchez, P A Gago, J Damas, I Zuriguel, and D Maza. Towards a relevant set of state variables to describe static granular packings. Phys Rev E, 82(5):050301, 2010.
L A Pugnaloni, M Mizrahi, C M Carlevaro, and F Vericat. Nonmonotonic reversible branch in four model granular beds subjected to vertical vibration. Phys Rev E, 78(5):051305, 2008.
P A Gago, N E Bueno, and L A Pugnaloni. High intensity tapping regime in a frustrated lattice gas model of granular compaction. Granular Matter, 11(6):365–369, 2009.
C M Carlevaro and L A Pugnaloni. Steady state of tapped granular polygons. Journal of Statistical Mechanics: Theory and Experiment, 2011(1):P01007, 2011.
R Arévalo, L A Pugnaloni, D Maza, and I Zuriguel. Tapped granular packings described as complex networks. Philosophical Magazine, 93(31–33):4078–4089, 2013.
S Itzkovitz and U Alon. Subgraphs and network motifs in geometric networks. Phys Rev E, 71(2):026117, 2005.
O Shoval and U Alon. SnapShot: Network motifs. Cell, 143(2):326–326.e1, 2010.
N Brodu, J A Dijksman, and R P Behringer. Spanning the scales of granular materials through microscopic force imaging. Nat Commun, 6:6361, 2015.
J A Dijksman, N Brodu, and R P Behringer. Refractive index matched scanning and detection of soft particles. Rev Sci Instrum, 88(5):051807, 2017.
H A Sepiani and A Ghazavi. A thermo-micro-mechanical modeling for smart shape memory alloy woven composite under in-plane biaxial deformation. International Journal of Mechanics and Materials in Design, 5(2):111, 2009.
B P Tighe and T J H Vlugt. Stress fluctuations in granular force networks. Journal of Statistical Mechanics: Theory and Experiment, 2011(4):P04002, 2011.
K E Daniels, J E Kollmer, and J G Puckett. Photoelastic force measurements in granular materials. Rev Sci Instrum, 88(5):051808, 2017.
R C Hurley, S A Hall, J E Andrade, and J Wright. Quantifying interparticle forces and heterogeneity in 3D granular materials. Phys Rev Lett, 117(9):098005, 2016.
S Mukhopadhyay and J Peixinho. Packings of deformable spheres. Phys Rev E, 84(1):011302, 2011.
M Saadatfar, A P Sheppard, T J Senden, and A J Kabla. Mapping forces in a 3D elastic assembly of grains. J Mech Phys Solids, 60(1):55–66, 2012.
T Pöschel and T Schwager. Computational Granular Dynamics: Models and Algorithms. Springer-Verlag, 2005.
S Weis and M Schröter. Analyzing X-ray tomographies of granular packings. Rev Sci Instrum, 88(5):051809, 2017.
A Tordesillas and M Muthuswamy. A thermomicromechanical approach to multiscale continuum modeling of dense granular materials. Acta Geotechnica, 3(3):225–240, 2008.
Y Huang and K E Daniels. Friction and pressure-dependence of force chain communities in granular materials. Granular Matter, 18(4):85, 2016.
R Navakas, A Džiugys, and B Peters. A community-detection based approach to identification of inhomogeneities in granular matter. Physica A, 407:312–331, 2014.
D M Walker, A Tordesillas, I Einav, and M Small. Complex networks in confined comminution. Phys Rev E, 84(2):021301, 2011.
F Radjai, S Roux, and J J Moreau. Contact forces in a granular packing. Chaos, 9(4):544–550, 1999.
A A Peña, H J Herrmann, and P G Lind. Force chains in sheared granular media of irregular particles. AIP Conference Proceedings, 1145(1):321–324, 2009.
S Ostojic, T J H Vlugt, and B Nienhuis. Universal anisotropy in force networks under shear. Phys Rev E, 75(3):030301, 2007.
L Kondic, M Kramár, L Kovalčinová, and K Mischaikow. Evolution of force networks in dense granular matter close to jamming. EPJ Web Conf, 140:15014, 2017.
T Tadanaga, A H Clark, T Majmudar, and L Kondic. Granular response to impact: Topology of the force networks. arXiv:1709.06957 [cond-mat.soft], 2017.
M X Lim and R P Behringer. Topology of force networks in granular media under impact. arXiv:1709.01884 [cond-mat.soft], 2017.
A Tordesillas, S T Tobin, M Cil, K Alshibli, and R P Behringer. Network flow model of force transmission in unbonded and bonded granular media. Phys Rev E, 91(6):062204, 2015.
A Tordesillas, S Pucilowski, S Tobin, M R Kuhn, E Andò, G Viggiani, A Druckrey, and K Alshibli. Shear bands as bottlenecks in force transmission. EPL (Europhysics Letters), 110(5):58005, 2015.
A Tordesillas, A Cramer, and D M Walker. Minimum cut and shear bands. AIP Conference Proceedings, 1542(1):507–510, 2013.
Q Lin and A Tordesillas. Constrained optimisation in granular network flows: Games with a loaded dice. AIP Conference Proceedings, 1542(1):547–550, 2013.
Q Lin and A Tordesillas. Towards an optimization theory for deforming dense granular materials: Minimum cost maximum flow solutions. J Ind Manag Optim, 10(1):337–362, 2014.
[Walker and Tordesillas(2013)]Walker:2013b D M Walker and A Tordesillas. Understanding multi-scale structural evolution in granular systems through gmems. AIP Conference Proceedings, 15420 (1):0 145–148, 2013. [Zhang and Small(2006)]Zhang:2006aa J Zhang and M Small. Complex network from pseudoperiodic time series: Topology versus dynamics. Phys Rev Lett, 960 (23):0 238701, 2006. [Yang and Yang(2008)]Yang:2008a Y Yang and H Yang. Complex network-based time series analysis. Physica A, 3870 (5–6):0 1381–1386, 2008. [Lacasa et al.(2008)Lacasa, Luque, Ballesteros, Luque, and Nuño]Lacasa:2008a L Lacasa, B Luque, F Ballesteros, J Luque, and J C Nuño. From time series to complex networks: The visibility graph. Proc Natl Acad Sci, 1050 (13):0 4972–4975, 2008. [Gao and Jin(2009)]Gao:2009a Z Gao and N Jin. Complex network from time series based on phase space reconstruction. Chaos, 190 (3):0 033137, 2009. [Marwan et al.(2009)Marwan, Donges, Zou, Donner, and Kurths]Marwan:2009a N Marwan, J F Donges, Y Zou, R V Donner, and J Kurths. Complex network approach for recurrence analysis of time series. Phys Lett A, 3730 (46):0 4246–4254, 2009. [MacKay(2003)]MacKay:2003 D J C MacKay. Information Theory, Inference and Learning Algorithms. Cambridge University Press, 2003. [Rechenmacher(2006)]Rechenmacher:2006a A L Rechenmacher. Grain-scale processes governing shear band initiation and evolution in sands. J Mech Phys Solids, 540 (1):0 22–45, 2006. [Rechenmacher et al.(2010)Rechenmacher, Abedi, and Chupin]Rechenmacher:2010a A L Rechenmacher, S Abedi, and O Chupin. Evolution of force chains in shear bands in sands. Géotechnique, 600 (5):0 343–351, 2010. [Andò et al.(2012)Andò, Hall, Viggiani, Desrues, and Bésuelle]Ando:2012a E Andò, S A Hall, G Viggiani, J Desrues, and P Bésuelle. Grain-scale experimental investigation of localised deformation in sand: A discrete particle tracking approach. Acta Geotechnica, 70 (1):0 1–13, 2012. [Amon et al.(2017)Amon, Born, Daniels, Dijksman, Huang, Parker, Schröter, Stannarius, and Wierschem]amon_focus_2017 A Amon, P Born, K E Daniels, J A Dijksman, K Huang, D Parker, M Schröter, R Stannarius, and A Wierschem. Preface: Focus on imaging methods in granular physics. Rev Sci Instrum, 880 (5):0 051701, 2017. [Stannarius(2017)]stannarius:17 R Stannarius. Magnetic resonance imaging of granular materials. Rev Sci Instrum, 880 (5):0 051806, 2017. [Porter et al.(2015)Porter, Kevrekidis, and Daraio]PT2015 M A Porter, P G Kevrekidis, and C Daraio. Granular crystals: Nonlinear dynamics meets materials engineering. Physics Today, 680 (11):0 44, 2015. [Cundall and Strack(1979)]Cundall1979 P A Cundall and O D L Strack. Discrete Numerical-model For Granular Assemblies. Geotechnique, 290 (1):0 47–65, 1979. [Papanikolaou et al.(2013)Papanikolaou, O'Hern, and Shattuck]Papanikolaou2013 S Papanikolaou, C S O'Hern, and M D Shattuck. Isostaticity at frictional jamming. Phys Rev Lett, 1100 (19):0 198002, 2013. [Somfai et al.(2005)Somfai, Roux, Snoeijer, van Hecke, and van Saarloos]Somfai2005 E Somfai, J N Roux, J H Snoeijer, M van Hecke, and W van Saarloos. Elastic wave propagation in confined granular systems. Phys Rev E, 720 (2):0 21301, 2005. [Rosvall et al.(2014)Rosvall, Esquivel, Lancichinetti, West, and Lambiotte]Rosvall2014NatComm M Rosvall, A V Esquivel, A Lancichinetti, J D West, and R Lambiotte. Memory in network flows and its effects on spreading dynamics and community detection. Nat Commun, 5:0 4630, 2014. [Holme(2015)]Holme2015EurPhysJB P Holme. 
Modern temporal network theory: A colloquium. Eur Phys J B, 880 (9):0 234, 2015. [Butts(2009)]butts2009 C T Butts. Revisiting the foundations of network analysis. Science, 3250 (5939):0 414–6, 2009. [Gravish et al.(2012)Gravish, Franklin, Hu, and Goldman]gravish2012entangled N Gravish, S V Franklin, D L Hu, and D I Goldman. Entangled granular media. Phys Rev Lett, 1080 (20):0 208001, 2012. [Murphy et al.(2016)Murphy, Reiser, Choksy, Singer, and Jaeger]murphy2015freestanding K A Murphy, N Reiser, D Choksy, C E Singer, and H M Jaeger. Freestanding loadbearing structures with Z-shaped particles. Granular Matter, 180 (2):0 26, 2016. [Hidalgo et al.(2009)Hidalgo, Zuriguel, Maza, and Pagonabarraga]Hidalgo:2009aa R Cruz Hidalgo, I Zuriguel, D Maza, and I Pagonabarraga. Role of particle shape on the stress propagation in granular packings. Phys. Rev. Lett., 1030 (11):0 118001, 2009. [Trepanier and Franklin(2010)]trepanier2010column M Trepanier and S V Franklin. Column collapse of granular rods. Phys Rev E, 820 (1):0 011308, 2010. [Schreck et al.(2010)Schreck, Xu, and O'Hern]Schreck:2010a C F Schreck, N Xu, and C S O'Hern. A comparison of jamming behavior in systems composed of dimer- and ellipse-shaped particles. Soft Matter, 60 (13):0 2960–2969, 2010. [Athanassiadis et al.(2014)Athanassiadis, Miskin, Kaplan, Rodenberg, Lee, Merritt, Brown, Amend, Lipson, and Jaeger]Athanassiadis:2014a A G. Athanassiadis, M Z. Miskin, P Kaplan, N Rodenberg, S H Lee, J Merritt, E Brown, J Amend, H Lipson, and H M Jaeger. Particle shape effects on the stress response of granular packings. Soft Matter, 100 (1):0 48–59, 2014. [Harrington and Durian(2017)]durian2017 M Harrington and D J Durian. Anisotropic particles strengthen granular pillars under compression. arXiv, arXiv:1709.09511 [cond-mat.soft], 2017. [Azéma et al.(2013)Azéma, Radjai, and Dubois]Azema:2013aa E Azéma, F Radjai, and F Dubois. Packings of irregular polyhedral particles: Strength, structure, and effects of angularity. Phys Rev E, 870 (6):0 062203, 2013. [Gómez et al.(2009)Gómez, Jensen, and Arenas]Gomez:2009a S Gómez, P Jensen, and A Arenas. Analysis of community structure in networks of correlated data. Phys Rev E, 800 (1):0 016114, 2009. [Traag and Bruggeman(2009)]Traag:2009a V A Traag and J Bruggeman. Community detection in networks with positive and negative links. Phys Rev E, 800 (3):0 036115, 2009. [Zhang et al.(2016)Zhang, Bassett, and Winkelstein]Zhang:2015 S Zhang, D S Bassett, and B A Winkelstein. Stretch-induced network reconfiguration of collagen fibers in the human facet capsular ligament. J R Soc Interface, 130 (114):0 20150883, 2016. [Puckett and Daniels(2013)]Puckett2013 J G Puckett and K E Daniels. Equilibrating temperaturelike variables in jammed granular subsystems. Phys Rev Lett, 1100 (5):0 058001, 2013. [Shaebani et al.(2012)Shaebani, Madadi, Luding, and Wolf]shaebani2012influence M R Shaebani, M Madadi, S Luding, and D E Wolf. Influence of polydispersity on micromechanics of granular materials. Phys Rev E, 850 (1):0 011301, 2012. [Kumar et al.(2016)Kumar, Magnanimo, Ramaioli, and Luding]Kumar:2016a N Kumar, V Magnanimo, M Ramaioli, and S Luding. Tuning the bulk properties of bidisperse granular mixtures by small amount of fines. Powder Technol, 293:0 94–112, 2016. [Slanina(2017)]slanina2017 F Slanina. Localization in random bipartite graphs: Numerical and empirical study. Phys Rev E, 950 (5):0 052149, 2017. [Harary(1972)]harary1972 F Harary. Graph Theory. Addison-Wesley, 1972. 
[Shi et al.(2013)Shi, Wang, Forest, and Mucha]Shi:2013aa F Shi, S Wang, M G Forest, and P J Mucha. Percolation-induced exponential scaling in the large current tails of random resistor networks. Multiscale Model Sim, 110 (4):0 1298–1310, 2013. [Shi et al.(2014)Shi, Wang, Forest, Mucha, and Zhou]Shi:2014aa F Shi, S Wang, M G Forest, P J Mucha, and R Zhou. Network-based assessments of percolation-induced current distributions in sheared rod macromolecular dispersions. Multiscale Model Sim, 120 (1):0 249–264, 2014. [Abhilash et al.(2014)Abhilash, Baker, Trappmann, Chen, and Shenoy]abhilash2014remodeling A S Abhilash, B M Baker, B Trappmann, C S Chen, and V B Shenoy. Remodeling of fibrous extracellular matrices by contractile cells: Predictions from discrete fiber network simulations. Biophys J, 1070 (8):0 1829–1840, 2014. [Purohit et al.(2011)Purohit, Litvinov, Brown, Discher, and Weisel]purohit2011protein P K Purohit, R I Litvinov, A E Brown, D E Discher, and J W Weisel. Protein unfolding accounts for the unusual mechanical behavior of fibrin networks. Acta Biomater, 70 (6):0 2374–2783, 2011. [Bullmore and Sporns(2009)]Bullmore2009 E Bullmore and O Sporns. Complex brain networks: Graph theoretical analysis of structural and functional systems. Nat Rev Neurosci, 100 (3):0 186–198, 2009. [Blunt(2001)]Blunt:2001a M J Blunt. Flow in porous media — pore-network models and multiphase flow. Curr Opin Colloid Interface Sci, 60 (3):0 197–207, 2001. [Al-Raoush et al.(2003)Al-Raoush, Thompson, and Willson]Al-Raoush:2003a R Al-Raoush, K Thompson, and C S Willson. Comparison of network generation techniques for unconsolidated porous media. Soil Sci Soc Am J, 670 (6):0 1687–1700, 2003. [Vo et al.(2013)Vo, Walker, and Tordesillas]Vo:2013 K Vo, D M Walker, and A Tordesillas. Transport pathways within percolating pore space networks of granular materials. AIP Conf Proc, 15420 (1):0 551–554, 2013. [Walker et al.(2013)Walker, Vo, and Tordesillas]Walker:2013a D M Walker, K Vo, and A Tordesillas. On reynolds' dilatancy and shear band evolution: A new perspective. Int J Bifurc Chaos, 230 (09):0 1330034, 2013. [van der Linden et al.(2016)van der Linden, Narsilio, and Tordesillas]Linden:2016a J H van der Linden, G A Narsilio, and A Tordesillas. Machine learning framework for analysis of transport through complex networks in porous, granular media: A focus on permeability. Phys Rev E, 940 (2):0 022904, 2016. [Russell et al.(2016)Russell, Walker, and Tordesillas]Russell:2016a S Russell, D M Walker, and A Tordesillas. A characterization of the coupled evolution of grain fabric and pore space using complex networks: Pore connectivity and optimized flows in the presence of shear bands. J Mech Phys Solids, 88:0 227–251, 2016. [Jimenez-Martinez and Negre(2017)]Martinez:2017a J Jimenez-Martinez and C F A Negre. Eigenvector centrality for geometric and topological characterization of porous media. Phys Rev E, 960 (1):0 013310, 2017. [Laubie et al.(2017)Laubie, Radjai, Pellenq, and Ulm]porous2017 H Laubie, F Radjai, R Pellenq, and F J Ulm. Stress transmission and failure in disordered porous media. Phys Rev Lett, 1190 (7):0 075501, 2017. [Newman and Clauset(2016)]newman2016structure M E J Newman and A Clauset. Structure and inference in annotated networks. Nat Commun, 7:0 11863, 2016. [Hric et al.(2016)Hric, Peixoto, and Fortunato]tiago2016 D Hric, T P Peixoto, and S Fortunato. Network structure, metadata, and the prediction of missing nodes and annotations. Phys Rev X, 60 (3):0 031038, 2016. 
[Palla et al.(2008)Palla, Farkas, Pollner, Derenyi, and Vicsek]palla2008fundamental G Palla, I J Farkas, P Pollner, I Derenyi, and T Vicsek. Fundamental statistical features and self-similar properties of tagged networks. New J Phys, 100 (12):0 123026, 2008. [Edelsbrunner and Harer(2010)]edels2010 H Edelsbrunner and J Harer. Computational Topology: An Introduction. American Mathematical Society, 2010. [Ramola and Chakraborty(2017)]Ramola:2017a K Ramola and B Chakraborty. Stress response of granular systems. J Stat Phys, 1690 (1):0 1–17, 2017. [Taylor-King et al.(2017)Taylor-King, Basanta, Chapman, and Porter]jkt2017 J P Taylor-King, D Basanta, S J Chapman, and M A Porter. Mean-field approach to evolving spatial networks, with an application to osteocyte network formation. Phys Rev E, 960 (1):0 012301, 2017. [Beguerisse-Diaz et al.(2014)Beguerisse-Diaz, Garduno-Hernandez, Vangelov, Yaliraki, and Barahona]beguerisse2014interest M Beguerisse-Diaz, G Garduno-Hernandez, B Vangelov, S N Yaliraki, and M Barahona. Interest communities and flow roles in directed networks: The Twitter network of the UK riots. J R Soc Interface, 110 (101):0 20140940, 2014. [Bassett et al.(2010)Bassett, Greenfield, Meyer-Lindenberg, Weinberger, Moore, and Bullmore]Bassett2010 D S Bassett, D L Greenfield, A Meyer-Lindenberg, D R Weinberger, S Moore, and E Bullmore. Efficient physical embedding of topologically complex information processing networks in brains and computer circuits. PLoS Comput Biol, 60 (4):0 e1000748, 2010. [Modes et al.(2016)Modes, Magnasco, and Katifori]Modes:2016a C D Modes, M O Magnasco, and E Katifori. Extracting hidden hierarchies in 3D distribution networks. Phys Rev X, 60 (3):0 031009, 2016. [Tighe et al.(2008)Tighe, van Eerd, and Vlugt]Tighe2008 B P Tighe, A R T van Eerd, and T J H Vlugt. Entropy maximization in the force network ensemble for granular solids. Phys Rev Lett, 1000 (23):0 238001, 2008. [Ronhovde et al.(2012)Ronhovde, Chakrabarty, Sahu, Sahu, Kelton, Mauro, and Nussinov]ronhovde2011detection P Ronhovde, S Chakrabarty, M Sahu, K K Sahu, K F Kelton, N Mauro, and Z Nussinov. Detection of hidden structures for arbitrary scales in complex physical systems. Sci Rep, 2:0 329, 2012. [Agarwala and Shenoy(2017)]amorphous2017 A Agarwala and V B Shenoy. Topological insulators in amorphous systems. Phys Rev Lett, 1180 (23):0 236402, 2017. [Muller and Luding(2009)]muller2009homogeneous M K Muller and S Luding. Homogeneous cooling with repulsive and attractive long‚Äêrange interactions. AIP Conf Proc, 11450 (1):0 697–700, 2009. [Mitarai and Nori(2006)]Mitarai:2006a N Mitarai and F Nori. Wet granular materials. Adv Phys, 550 (1-2):0 1–45, 2006. [Wrobel and Sundararaghavan(2014)]wrobel2014directed M R Wrobel and H G Sundararaghavan. Directed migration in neural tissue engineering. Tissue Eng Part B Rev, 200 (2):0 93–105, 2014. [Huttenlocher and Poznansky(2008)]huttenlocher2008reverse A Huttenlocher and M C Poznansky. Reverse leukocyte migration can be attractive or repulsive. Trends in Cell Biol, 180 (6):0 298–306, 2008. [Hartveit and Veruki(2012)]hartveit2012electrical E Hartveit and M L Veruki. Electrical synapses between aii amacrine cells in the retina: Function and modulation. Brain Res, 1487:0 160–172, 2012. [Nualart-Marti et al.(2013)Nualart-Marti, Solsona, and Fields]nualart2013biochim A Nualart-Marti, C Solsona, and R D Fields. Gap junction communication in myelinating glia. Biochim Biophys Acta, 18280 (1):0 69–78, 2013. 
[Pahtz et al.(2010)Pahtz, Herrmann, and Shinbrot]Pahtz:2010a T Pahtz, H J Herrmann, and T Shinbrot. Why do particle clouds generate electric charges? Nat Phys, 60 (5):0 364–368, 2010. [Ladoux et al.(2015)Ladoux, Nelson, Yan, and Mege]ladoux2015mechanotransduction B Ladoux, W J Nelson, J Yan, and R M Mege. The mechanotransduction machinery at work at adherens junctions. Integr Biol (Camb), 70 (10):0 1109–1119, 2015. [Bausch and Kroy(2006)]Bausch:2006aa A R Bausch and K Kroy. A bottom-up approach to cell mechanics. Nat Phys, 20 (4):0 231–238, 2006. [Broedersz and MacKintosh(2014)]Broedersz:2014aa C P Broedersz and F C MacKintosh. Modeling semiflexible polymer networks. Rev Mod Phys, 860 (3):0 995–1036, 2014. [Lieleg et al.(2009)Lieleg, Schmoller, Claessens, and Bausch]Lieleg:2009a O Lieleg, K M Schmoller, M M Claessens, and A R Bausch. Cytoskeletal polymer networks: Viscoelastic properties are determined by the microscopic interaction potential of cross-links. Biophys J, 960 (11):0 4725–4732, 2009. [Fletcher and Mullins(2010)]Fletcher:2010aa D A Fletcher and R D Mullins. Cell mechanics and the cytoskeleton. Nature, 4630 (7280):0 485–492, 2010. [Mizuno et al.(2007)Mizuno, Tardin, Schmidt, and MacKintosh]Mizuno2007 D Mizuno, C Tardin, C F Schmidt, and F C MacKintosh. Nonequilibrium mechanics of active cytoskeletal networks. Science, 3150 (5810):0 370–373, 2007. [Gardel et al.(2008)Gardel, Kasza, Brangwynne, Liu, and Weitz]Gardel:2008aa M L Gardel, K E Kasza, C P Brangwynne, J Liu, and D A Weitz. Mechanical response of cytoskeletal networks. Methods in Cell Biol, 89:0 487–519, 2008. [Majumdar et al.(2017)Majumdar, Foucard, Levine, and Gardel]actin2017 S Majumdar, L C Foucard, A J Levine, and M L Gardel. Encoding mechano-memories in actin networks. arXiv, arXiv:1706.05336 [cond-mat.soft], 2017. [Billen et al.(2009)Billen, Wilson, Rabinovitch, and Baljon]Billen2009 J Billen, M Wilson, A Rabinovitch, and A R C Baljon. Topological changes at the gel transition of a reversible polymeric network. EPL (Europhysics Letters), 870 (6):0 68003, 2009. [Kim et al.(2014)Kim, Litvinov, Weisel, and Alber]Kim2014 O V Kim, R I Litvinov, J W Weisel, and M S Alber. Structural basis for the nonlinear mechanics of fibrin networks under compression. Biomaterials, 350 (25):0 6739–6749, 2014. [Gavrilov et al.(2015)Gavrilov, Komarov, and Khalatur]Gavrilov2015 A A Gavrilov, P V Komarov, and P G Khalatur. Thermal properties and topology of epoxy networks: A multiscale simulation methodology. Macromolecules, 480 (1):0 206–212, 2015. [Liang et al.(2016)Liang, Jones, Chen, Sun, and Jiao]Liang:2016 L Liang, C Jones, S Chen, B Sun, and Y Jiao. Heterogeneous force network in 3D cellularized collagen networks. Phys Biol, 130 (6):0 066001, 2016. [Venkatesan et al.(2017)Venkatesan, Vivek-Ananth, Sreejith, Mangalapandi, Hassanali, and Samal]samal2017 S Venkatesan, R P Vivek-Ananth, R P Sreejith, P Mangalapandi, A A Hassanali, and A Samal. Network approach towards understanding the crazing in glassy amorphous polymers. arXiv, arXiv:1710.01996 [cond-mat.soft], 2017. [Bouzid and Del Gado(2017)]Bouzid2017 M Bouzid and E Del Gado. Network topology in soft gels: Hardening and softening materials network topology in soft gels: Hardening and softening materials network topology in soft gels: Hardening and softening materials. Langmuir, 2017. URL <http://dx.doi.org/10.1021/acs.langmuir.7b02944>. [Ahnert et al.(2017)Ahnert, Grant, and Pickard]Ahnert:2017aa S E Ahnert, W P Grant, and C J Pickard. 
Revealing and exploiting hierarchical material structure through complex atomic networks. npj Computational Materials, 30 (1):0 35, 2017. [Setford(2014)]setford2014 J Setford. Models of granular networks in two and three dimensions. Undergraduate Thesis, Department of Physics, University of Oxford (available at <http://www.math.ucla.edu/ mason/research/setford-final.pdf>), 2014. [Mülken and Blumen(2011)]Mulken:2011aa O Mülken and A Blumen. Continuous-time quantum walks: Models for coherent transport on complex networks. Phys Rep, 5020 (2–3):0 37–87, 2011. [Bianconi(2015)]Bianconi:2015a G Bianconi. Interdisciplinary and physics challenges of network theory. EPL (Europhysics Letters), 1110 (5):0 56001, 2015. [Biamonte et al.(2017)Biamonte, Faccin, and De Domenico]Biamonte:2017a J Biamonte, M Faccin, and M De Domenico. Complex networks: From classical to quantum. arXiv, arXiv:1702.08459 [quant-ph], 2017. [Boccaletti et al.(2006)Boccaletti, Latora, Moreno, Chavez, and Hwang]bocca2006 S Boccaletti, V Latora, Y Moreno, M Chavez, and D-U Hwang. Complex networks: Structure and dynamics. Phys Rep, 4240 (4):0 175–308, 2006. [Liu and Zhang(2011)]Liu:2011 Y Liu and X Zhang. Metamaterials: A new frontier of science and technology. Chem Soc Rev, 40:0 2494–2507, 2011. [Turpin et al.(2014)Turpin, Bossard, Morgan, Werner, and Werner]Turpin:2014 J P Turpin, J A Bossard, K L Morgan, D H Werner, and P L Werner. Reconfigurable and tunable metamaterials: A review of the theory and applications. International Journal of Antennas and Propagation, 20140 (429837), 2014. [Lee et al.(2012)Lee, Singer, and Thomas]Lee2012 J H Lee, J P Singer, and E L Thomas. Micro-/nanostructured mechanical metamaterials. Adv Mater, 240 (36):0 4782–4810, 2012. [Greaves et al.(2011)Greaves, Greer, Lakes, and Rouxel]Greaves:2011a G N Greaves, A L Greer, R S Lakes, and T Rouxel. Poisson's ratio and modern materials. Nat Mater, 100 (11):0 823–837, 2011. [Rocklin et al.(2017)Rocklin, Zhou, Sun, and Mao]Rocklin:2015 D Z Rocklin, S Zhou, K Sun, and X Mao. Transformable topological mechanical metamaterials. Nat Commun, 8:0 14201, 2017. [Fang et al.(2006)Fang, Xi, Xu, Ambati, Srituravanich, Sun, and Zhang]Fang:2006 N Fang, D Xi, J Xu, M Ambati, W Srituravanich, C Sun, and X Zhang. Ultrasonic metamaterials with negative modulus. Nat Mater, 50 (6):0 452–456, 2006. [Nicolaou and Motter(2012)]Nicolaou:2012aa Z G Nicolaou and A E Motter. Mechanical metamaterials with negative compressibility transitions. Nat Mater, 110 (7):0 608–613, 2012. [Simovski et al.(2012)Simovski, Belov, Atrashchenko, and Kivshar]Simovski2012 C R Simovski, P A Belov, A V Atrashchenko, and Y S Kivshar. Wire metamaterials: Physics and applications. Adv Mater, 240 (31):0 4229–4248, 2012. [Smith et al.(2004)Smith, Pendry, and Wiltshire]Smith:2004aa D. R. Smith, J. B. Pendry, and M. C. K. Wiltshire. Metamaterials and negative refractive index. Science, 3050 (5685):0 788–792, 2004. [Eiben and Smith(2015)]eiben2015from A E Eiben and J Smith. From evolutionary computation to the evolution of things. Nature, 5210 (7553):0 476–482, 2015. [Díaz-Manríquez et al.(2016)Díaz-Manríquez, Toscano, Barron-Zambrano, and Tello-Leal]diaz2016review A Díaz-Manríquez, G Toscano, J H Barron-Zambrano, and E Tello-Leal. A review of surrogate assisted multiobjective evolutionary algorithms. Comput Intell Neurosci, 20160 (9420460), 2016. [Papadimitriou(2014)]papadimitriou2014algorithms C Papadimitriou. Algorithms, complexity, and the sciences. Proc Natl Acad Sci, 1110 (45):0 15881–15887, 2014. 
[Goldberg(1989)]Goldberg:1989 D E Goldberg. Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley Longman Publishing Co., Inc., 1989. [McGhee(1999)]McGhee:1997 G R McGhee. Theoretical Morphology: The Concept and its Applications. Columbia University Press, 1999. [Valera et al.(2017)Valera, Guo, Kelly, Matz, Cantu, Percus, Hyman, Srinivasan, and Viswanathan]valera2017machine M Valera, Z Guo, P Kelly, S Matz, A Cantu, A G Percus, J D Hyman, G Srinivasan, and H S Viswanathan. Machine learning for graph-based representations of three-dimensional discrete fracture networks. arXiv, arXiv:1705.09866 [physics.geo-ph], 2017. [Avena-Koenigsberger et al.(2014a)Avena-Koenigsberger, Goñi, Solé, and Sporns]Avena-Koenigsberger:2014 A Avena-Koenigsberger, J Goñi, R Solé, and O Sporns. Network morphospace. J R Soc Interface, 120 (103), 2014a. [Avena-Koenigsberger et al.(2014b)Avena-Koenigsberger, Goñi, Betzel, van den Heuvel, Griffa, Hagmann, Thiran, and Sporns]Avena-Koenigsberger:2013 A Avena-Koenigsberger, J Goñi, R F Betzel, M P van den Heuvel, A Griffa, P Hagmann, J-P Thiran, and O Sporns. Using pareto optimality to explore the topology and dynamics of the human connectome. Philos Trans R Soc Lond B Biol Sci, 3690 (1653), 2014b. [Goñi et al.(2013)Goñi, Avena-Koenigsberger, de Mendizabal, van den Heuvel, Betzel, and Sporns]Goni:2013 J Goñi, A Avena-Koenigsberger, NV de Mendizabal, M P van den Heuvel, R F Betzel, and O Sporns. Exploring the morphospace of communication efficiency in complex networks. PLoS One, 80 (3):0 e58070, 2013. [Jaeger and de Pablo(2016)]jaeger2016evolutionary H M Jaeger and J J de Pablo. Perspective: Evolutionary design of granular media and block copolymer patterns. APL Materials, 40 (5):0 053209, 2016. [Miskin and Jaeger(2013)]miskin2013adapting M Z Miskin and H M Jaeger. Adapting granular materials through artificial evolution. Nat Mater, 12:0 326–331, 2013. [Miskin and Jaeger(2014)]miskin2014evolving M Z Miskin and H M Jaeger. Evolving design rules for the inverse granular packing problem. Soft Matter, 10:0 3708–3715, 2014. [Roth and Jaeger(2016)]roth2016optimizing L K Roth and H M Jaeger. Optimizing packing fraction in granular media composed of overlapping spheres. Soft Matter, 12:0 1107–1115, 2016. [Yan et al.(2017)Yan, Ravasio, Brito, and Wyart]Yan:2017a L Yan, R Ravasio, C Brito, and M Wyart. Architecture and coevolution of allosteric materials. Proc Natl Acad Sci, 1140 (10):0 2526–2531, 2017. [Ellenbroek et al.(2015)Ellenbroek, Hagh, Kumar, Thorpe, and van Hecke]Ellenbroek:2015aa W G Ellenbroek, V F Hagh, A Kumar, M F Thorpe, and M van Hecke. Rigidity loss in disordered systems: Three scenarios. Phys Rev Lett, 1140 (13):0 135501, 2015. [Goodrich et al.(2015)Goodrich, Liu, and Nagel]Goodrich:2015a C P Goodrich, A J Liu, and S R Nagel. The principle of independent bond-level response: Tuning by pruning to exploit disorder for global behavior. Phys Rev Lett, 1140 (22):0 225501, 2015. [Rocks et al.(2017)Rocks, Pashine, Bischofberger, Goodrich, Liu, and Nagel]Rocks:2017a J W Rocks, N Pashine, I Bischofberger, C P Goodrich, A J Liu, and S R Nagel. Designing allostery-inspired response in mechanical networks. Proc Natl Acad Sci, 1140 (10):0 2520–2525, 2017. [Driscoll et al.(2016)Driscoll, Chen, Beuman, Ulrich, Nagel, and Vitelli]Driscoll2016 M M Driscoll, B G-G Chen, T H Beuman, S Ulrich, S R Nagel, and V Vitelli. The role of rigidity in controlling material failure. Proc Natl Acad Sci U S A, 1130 (39):0 10813–10817, 2016. 
[Shekhawat et al.(2013)Shekhawat, Zapperi, and Sethna]shekhawat2013damage A Shekhawat, S Zapperi, and J P Sethna. From damage percolation to crack nucleation through finite size criticality. Phys Rev Lett, 1100 (18):0 185505, 2013. [Reid et al.(2017)Reid, Pashine, Jaeger, Liu, Nagel, and de Pablo]ReidAuxeticMetamaterials:2017 D R Reid, N Pashine, H M Jaeger, A J Liu, S R Nagel, and J J de Pablo. Auxetic metamaterials from disordered networks. arXiv, arXiv:1710.02493 [cond-mat.soft], 2017. [Quinn and Winkelstein(2011)]Quinn:2011 K P Quinn and B A Winkelstein. Preconditioning is correlated with altered collagen fiber alignment in ligament. J Biomech Eng, 1330 (6):0 064506–064506, 2011. [Zhao et al.(2014)Zhao, Chen, and Reich]Zhao:2014 R Zhao, C S Chen, and D H Reich. Force-driven evolution of mesoscale structure in engineered 3D microtissues and the modulation of tissue stiffening. Biomaterials, 350 (19):0 5056–5064, 2014. [Han et al.(2013)Han, Heo, Driscoll, Smith, Mauck, and Elliott]Han:2013 W M Han, S-J Heo, T P Driscoll, L J Smith, R L Mauck, and D M Elliott. Macro- to microscale strain transfer in fibrous tissues is heterogeneous and tissue-specific. Biophys J, 1050 (3):0 807–817, 2013. [Pong et al.(2011)Pong, Adams, Bray, Feinberg, Sheehy, Werdich, and Parker]Pong:2011 T Pong, W J Adams, M-A Bray, A W Feinberg, S P Sheehy, A A Werdich, and K K Parker. Hierarchical architecture influences calcium dynamics in engineered cardiac muscle. Exp Biol Med, 2360 (3):0 366–373, 2011. [Sporns(2014)]Sporns:2014aa O Sporns. Towards network substrates of brain disorders. Brain, 1370 (8):0 2117–2118, 2014.
http://arxiv.org/abs/1708.08080v2
{ "authors": [ "Lia Papadopoulos", "Mason A. Porter", "Karen E. Daniels", "Danielle S. Bassett" ], "categories": [ "cond-mat.soft", "cond-mat.dis-nn", "math.AT", "nlin.AO", "physics.data-an" ], "primary_category": "cond-mat.soft", "published": "20170827114420", "title": "Network Analysis of Particles and Grains" }
Kinematic structures found with Gaia DR1/TGAS and RAVE in the Solar neighbourhood
Iryna Kushniruk, Thiebaut Schirmer, Thomas Bensby
===================================

With the much enlarged stellar sample of 55 831 stars and the much increased precision in distances and proper motions provided by Gaia DR1 (TGAS), we have shown with the help of a wavelet analysis that the velocity distribution of stars in the Solar neighbourhood contains more kinematic structures than previously known. We detect 19 kinematic structures on scales between 3 and 16 km s^-1 at the 3σ confidence level. Among them we identified well-known groups (such as Hercules, Sirius, Coma Berenices, Pleiades, and Wolf 630) and confirmed recently detected groups (such as Antoja12 and Bobylev16). In addition, we report a new kinematic structure at (U,V)≈(37, 8) km s^-1. Another three new groups are tentatively detected, but require confirmation.

§ INTRODUCTION

Recent studies have shown that the velocity distribution of stars in the Solar neighbourhood is rich with kinematic structures, i.e., stars that for various reasons share similar space velocities (U,V,W) <cit.>. The list of works on kinematic groups could be extended, and all of them show that the velocity distribution in the Solar neighbourhood is inhomogeneous and has a complex, branch-like structure. The question of how the stellar streams formed remains open. In this work we focus on the velocity distribution of stars in the U-V plane for an enlarged stellar sample based on the astrometric data provided by Gaia DR1.

§ INPUT DATA AND METHODOLOGY

The astrometric data were taken from Gaia DR1 (TGAS) <cit.> and complemented with radial velocities from RAVE DR5 <cit.>. To obtain a better precision in the positions of the structures, we required the U and V velocity uncertainties to be less than 4 km s^-1. This limit gives a sample of 55 831 stars, to which we applied a wavelet analysis with the "à trous" algorithm. To filter the output data we used an auto-convolution histogram method. We then ran 2 000 Monte Carlo simulations to verify that the detected structures are real and not artefacts of the velocity uncertainties.

§ MAIN RESULTS

Figure <ref> shows the structures detected at the 3σ confidence level (99.86%, meaning that a structure is not an artefact) in the U-V plane for the scale J=3 (structure sizes in the range 3-16 km s^-1). Previously detected structures found in the literature (<cit.>) are marked with blue crosses. Classical moving groups like Sirius (1-3), Coma Berenices (4-6), Hyades (7), Pleiades (8) and Hercules (10-11), and some smaller structures like Wolf 630 (9), Dehnen98 (9), γ Leo (12) are easily recognized. They all have a comparatively high detection rate (>50% of repeats in the Monte Carlo simulations) and a large number of stars (>100). We confirm the two structures from <cit.> (structures 14 and 15) and one structure from <cit.> (structure 16). A new structure (number 13) was detected with 74% significance. Groups 18-19 have low detection rates (<15%) and might be insignificant. For more details see <cit.>.

Together with the final RAVE DR5 data, we confirm results obtained before the Gaia era that the velocity distribution of stars in the Solar neighbourhood is inhomogeneous, and we complete the list of known structures with a few new groups.

T.B. was funded by the "The New Milky Way" project grant from the Knut and Alice Wallenberg Foundation. We thank Prof. F.
Murtagh for making the MR software packages available to us and for valuable and helpful comments.

[Antoja et al.(2008)] Antoja, T., Figueras, F., Fernández, D., & Torra, J., 2008, A&A, 490, 135–150
[Antoja et al.(2012)] Antoja, T., Helmi, A., Bienayme, O., Bland-Hawthorn, J., Famaey, B., et al., 2012, MNRAS, 426, L1–L5
[Bobylev & Bajkova(2016)] Bobylev, V., & Bajkova, A., 2016, Astronomy Letters, 42, 90–99
[Dehnen(1998)] Dehnen, W., 1998, AJ, 115, 2384–2396
[Eggen(1996)] Eggen, O. J., 1996, AJ, 112, 1595
[Famaey et al.(2005)] Famaey, B., Jorissen, A., Luri, X., Mayor, M., Udry, S., et al., 2005, A&A, 430, 165–186
[Kunder et al.(2017)] Kunder, A., Kordopatis, G., Steinmetz, M., Zwitter, T., McMillan, P. J., et al., 2017, AJ, 153, 75
[Kushniruk et al.(2017)] Kushniruk, I., Schirmer, T., & Bensby, T., 2017, submitted
[Michalik et al.(2015)] Michalik, D., Lindegren, L., & Hobbs, D., 2015, A&A, 574, A115
[Skuljan et al.(1999)] Skuljan, J., Hearnshaw, J. B., & Cottrell, P. L., 1999, MNRAS, 308, 731–740
[Zhao et al.(2009)] Zhao, J., Zhao, G., & Chen, Y., 2009, ApJ, 692, L113–L117
Bouncing cosmological solutions from f(R,T) gravity
Hamid Shabani (Physics Department, Faculty of Sciences, University of Sistan and Baluchestan, Zahedan, Iran; [email protected])
Amir Hadi Ziaie (Department of Physics, Kahnooj Branch, Islamic Azad University, Kerman, Iran; [email protected])
====================================================

In this work we study classical bouncing solutions in the context of f(R,T)=R+h(T) gravity in a flat FLRW background, using a perfect fluid as the only matter content. Our investigation is based on introducing an effective fluid through defining an effective energy density and pressure; we call this reformulation the effective picture. These definitions have already been introduced to study the energy conditions in f(R,T) gravity. We examine various models to which different effective equations of state, corresponding to different h(T) functions, can be attributed. We also discuss how one can link an assumed f(R,T) model in the effective picture to theories with a generalized equation of state (EoS). We obtain cosmological scenarios exhibiting a nonsingular bounce, before and after which the Universe lives within a de Sitter phase. We then proceed to find general solutions for the matter bounce and investigate their properties. The properties of the bouncing solutions in the effective picture of f(R,T) gravity are as follows: for a specific form of the f(R,T) function, these solutions are free of future singularities. Moreover, a stability analysis of the nonsingular solutions through matter density perturbations reveals that, except for two of the models, the parameters of scalar-type perturbations show only a slight transient fluctuation around the bounce point and damp to zero or a finite value at late times; hence these bouncing solutions are stable against scalar-type perturbations. It is possible that all energy conditions be respected by the real perfect fluid; however, the null and the strong energy conditions can be violated by the effective fluid near the bounce event. These solutions always correspond to a maximum in the real matter energy density and a vanishing minimum in the effective density. The effective pressure varies between negative values and may show either a minimum or a maximum.

§ INTRODUCTION

Today, the standard cosmological model (SCM), or Big-Bang cosmology, has become the most widely accepted model encompassing our knowledge of the Universe as a whole. For this reason it is also called the concordance model <cit.>. This model, which allows one to track the cosmological evolution of the Universe very well, has matured over the last century, consolidating its theoretical foundations with increasingly accurate observations. One can enumerate a number of successes of the SCM at the classical level. For example, it accounts for the expansion of the Universe (the Hubble law), the black-body nature of the cosmic microwave background (CMB) can be understood within its framework, and it predicts the light-element abundances produced during primordial nucleosynthesis. It also provides a framework for studying cosmic structure formation <cit.>. However, though the SCM works very well in fitting many observations, it has a number of deficiencies and weaknesses, for instance problems rooted in cosmological relics such as magnetic monopoles <cit.>, gravitons <cit.>, moduli <cit.> and the baryon asymmetry <cit.>.
Despite the self-consistency and remarkable success of the SCM in describing the evolution of the Universe back to only one hundredth of a second, a number of unanswered questions remain regarding the initial state of the Universe, such as the flatness and horizon problems <cit.>. Moreover, there are unresolved problems related to the origin and nature of dark matter (DM) <cit.>. Notwithstanding the excellent agreement with observational data, there still exist challenging open problems associated with the late-time evolution of the Universe, namely the nature of dark energy (DE) and the cosmological constant problem <cit.>. Though the inflation mechanism has been introduced to treat some of the mentioned issues, such as the horizon, flatness and magnetic monopole problems in the early Universe <cit.>, the SCM suffers from a more fundamental issue, i.e., the initial cosmological singularity, whose existence has been predicted by the pioneering works of Hawking, Penrose and Geroch in the 1960s, known as the singularity theorems <cit.>, and their later extensions by Tipler in 1978 <cit.> and by Borde, Vilenkin and Guth in the 1990s <cit.> (see also <cit.> for a comprehensive study). According to these theorems, a cosmological singularity is unavoidable if spacetime dynamics is described by General Relativity (GR) and if the matter content of the Universe obeys certain energy conditions. A singular state is an extreme situation with infinite values of physical quantities, such as temperature, energy density and spacetime curvature, from which the Universe started its evolution a finite time in the past. The existence of such an uncontrollable initial state is irritating, since a singularity can naturally be considered as a source of lawlessness <cit.>. A potential solution to the issue of the cosmological singularity is provided by non-singular bouncing cosmologies <cit.>. Beside the great interest in solutions that do not display singular behavior, there are further motivations to seek non-singular cosmological models. The first reason for removing the initial singularity is rooted in the initial value problem, since a consistent gravitational theory requires a well-posed Cauchy problem <cit.>. However, owing to the fact that the gravitational field diverges at a spacetime singularity, we cannot have a well-formulated Cauchy problem, as we cannot set initial values on a singular spatial hypersurface given by t=const. Another related issue is that the existence of a singularity is inconsistent with the entropy bound S/E=(2π R)/(cħ), where S, E, R, ħ and c are the entropy, proper energy, largest linear dimension, Planck's constant and the velocity of light, respectively <cit.>. During the past decades, models describing bouncing behavior have been designed and studied as an approach to resolving the problem of the initial singularity. These models suggest that the Universe existed even before the Big Bang and underwent an accelerated contraction phase towards a non-vanishing minimum radius. The transition from a preceding cosmic contraction regime to the current accelerating expansion phase (as already predicted in the SCM) is the so-called Big Bounce. From this perspective, the idea that the expansion phase is preceded by a contraction phase paves a new way towards modeling the early Universe and thus may provide a suitable setting to obviate some of the problems of the SCM without the need for an inflationary scenario.
Although an acceptable model should be capable of explaining the issues that have been treated by the inflationary mechanism (e.g., most inflationary scenarios produce the scale-invariant spectrum of cosmological perturbations <cit.>), problems of the SCM may find solutions in the contracting regime before the bounce occurs. The horizon problem, for example, is immediately resolved if the far separated regions of the present Universe were in causal connection during the previous contraction phase. Similarly, the homogeneity, flatness, and isotropy of the Universe may also be addressed by having a smoothing mechanism in the contraction phase, see e.g., <cit.> for more details. Moreover, though fine-tuning is required to keep the contracting regime stable, the nonsingular bounce succeeds in sustaining a nearly scale-invariant power spectrum <cit.>. For several years, great effort has been devoted to the study of bouncing cosmologies within different frameworks. The resulting cosmological models are obtained either at a classical level or through quantum modifications. Most of the efforts in quantum gravity are devoted to revealing the nature of the initial singularity of the Universe and to better understanding the origin of matter, non-gravitational fields, and the very nature of spacetime. Non-singular bouncing solutions generically appear in loop quantum cosmology (LQC) <cit.>, where the variables and quantization techniques of loop quantum gravity are employed to investigate the effects of quantum gravity in cosmological spacetimes <cit.>. The large amount of recent work done within LQC shows that when the curvature of spacetime reaches the Planck scale, the Big-Bang singularity is replaced by a quantum Big Bounce with finite density and spacetime curvature <cit.>. Another approach, based on the de Broglie-Bohm quantum theory, utilizes the wave function of the Universe in order to determine a quantum trajectory of the Universe through a bounce <cit.>. In the framework of LQC the semi-classical Friedmann equations receive corrections as <cit.>

H^2_LQC = (8π G/3) ρ (1 - ρ/ρ_max),
Ḣ_LQC = -4π G (ρ+p) (1 - 2ρ/ρ_max),

where ρ_max ≈ 0.41 ρ_Pl, with ρ_Pl = c^5/(ħ G^2) the Planck density <cit.>. Note that the relative magnitude of ρ and ρ_max enables one to distinguish the classical and quantum regimes. A short qualitative inspection of the above equations reveals the general feature of the bouncing behavior: initially, the Universe is in a contracting phase in which the matter density and curvature are very low compared to the Planck scale. The quantum evolution follows the classical trajectory at low densities and curvatures, but as the Universe contracts further the maximum density is reached and the Universe undergoes a quantum bounce at matter density ρ=ρ_max, where H_LQC=0 and Ḣ_LQC=4π G(ρ+p). The quantum regime then joins onto the classical trajectory that expands to the future. Therefore, the quantum gravity effects create a non-singular transition from contraction to expansion, and the Big-Bang singularity is replaced by a quantum bounce. Furthermore, we see that near the bounce (ρ>ρ_max/2), for all matter fields which satisfy the weak energy condition (WEC), we have Ḣ_LQC>0. These two results, H=0 and Ḣ>0 at the bounce, constitute the general conditions for the existence of a bouncing solution.
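As a quick illustrative check of this qualitative picture (our own sketch, not part of the original analysis), one can integrate the effective LQC equations numerically for a barotropic fluid p=wρ; the parameter values below are arbitrary illustrative choices, in units 8πG = ρ_max = 1:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Effective LQC equations in units 8*pi*G = 1, rho_max = 1, for p = w*rho.
w, rho_max = 0.0, 1.0

def rhs(t, y):
    H, rho = y
    dH = -0.5 * (1 + w) * rho * (1 - 2 * rho / rho_max)  # LQC-corrected Raychaudhuri
    drho = -3 * H * (1 + w) * rho                        # continuity equation
    return [dH, drho]

rho0 = 1e-3
H0 = -np.sqrt(rho0 / 3 * (1 - rho0 / rho_max))  # contracting branch of the Friedmann constraint
sol = solve_ivp(rhs, (0, 300), [H0, rho0], rtol=1e-10, atol=1e-12)

i_b = np.argmax(sol.y[1])                    # index where the density peaks
print("rho at bounce ≈", sol.y[1][i_b])      # -> close to rho_max
print("H at bounce   ≈", sol.y[0][i_b])      # -> close to 0
print("H after bounce > 0:", sol.y[0][-1] > 0)
```

Starting on the contracting branch, the density grows to ρ_max, H crosses zero there with Ḣ>0, and the solution continues on the expanding branch, which is exactly the pair of bounce conditions stated above.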
Moreover, nonsingular bouncing scenarios have also been reported in nonlocal gravity models <cit.>, where an effective Friedmann equation slightly different from equation (<ref>) has been proposed, exhibiting a bouncing-accelerating behavior. See also <cit.> for a probe of the issue of singularity avoidance in nonlocal gravitational theories. Another type of theory, the so-called non-singular matter bounce scenario, is a cosmological model with an initial state of matter-dominated contraction and a non-singular bounce <cit.>. Such a model provides an alternative to inflationary cosmology for generating the observed spectrum of cosmological fluctuations <cit.>. In these theories some matter fields are introduced in such a way that the WEC is violated, in order to make Ḣ>0 at the bounce. From equation (<ref>) it is obvious that discarding the correction term leads to negative values of the time derivative of the Hubble parameter for all fluids that respect the WEC. Therefore, in order to obtain a bouncing cosmology it is necessary either to go beyond the GR framework, or else to introduce new forms of matter which violate the key energy conditions, i.e., the null energy condition (NEC) and the WEC. For a successful bounce, it can be shown that within the context of the SCM the NEC, and thus the WEC, are violated for a period of time around the bouncing point. In the context of matter bounce scenarios, many studies have been performed using quintom matter <cit.>, Lee-Wick matter <cit.>, a ghost condensate field <cit.>, Galileon fields <cit.> and a phantom field <cit.>. Cosmological bouncing models have also been constructed via various approaches to modified gravity such as f(R) gravity <cit.>, teleparallel f(T) gravity <cit.>, braneworld models <cit.>, Einstein-Cartan theory <cit.>, Horava-Lifshitz gravity <cit.>, nonlocal gravity <cit.> and others <cit.>. There are also other cosmological models, such as the Ekpyrotic model <cit.> and string cosmology <cit.>, which are alternatives to both inflation and matter bounce scenarios. In the present work we study the existence of bouncing solutions in the context of f(R,T) gravity. These types of theories were first introduced in <cit.>, and later their different aspects have been carefully studied and analyzed in <cit.>. In the current work we use an effective approach that we introduced previously in <cit.>. In this method one defines an effective fluid endowed with an effective energy density ρ_(eff) and an effective pressure p_(eff), thus allowing one to reformulate the f(R,T) gravity field equations. In the class of minimally coupled models, i.e., f(R,T)=R+h(T), one usually presumes an h(T) function and solves the resulting field equations. Instead, using the effective fluid description we obtain the h(T) function which corresponds to an EoS defined as p_(eff)=𝒴(ρ_(eff)). We therefore observe that the effective fluid picture may at least be imagined as a mathematical translation of the gravitational interactions between the actual matter fields and the curvature into an overall behavior attributed to a mysterious fluid with p_(eff)(T) and ρ_(eff)(T). In the current article we discuss three different classes of models specified by three different effective EoSs, or equivalently three different h(T) functions. We shall see that in these cases we obtain

ρ_(eff)(T) = β_1 T + β_2 T^γ + ρ̃_(eff),
p_(eff)(T) = λ_1 T + λ_2 T^γ + p̃_(eff),

where β_i, λ_i, γ, ρ̃_(eff) and p̃_(eff) are constants.
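Since ρ_(eff) and p_(eff) are both linear in T and T^γ, these two powers can be isolated by linear combinations, which is the step behind the closed-form EoS displayed next. A minimal symbolic sketch of this step (our own; the symbol names are not from the paper):

```python
import sympy as sp

T = sp.symbols('T', positive=True)
gamma = sp.symbols('gamma', positive=True)
b1, b2, l1, l2, rt, pt = sp.symbols('beta_1 beta_2 lambda_1 lambda_2 rho_t p_t')

rho = b1 * T + b2 * T**gamma + rt    # rho_eff(T)
p   = l1 * T + l2 * T**gamma + pt    # p_eff(T)

# Linear combinations isolating T**gamma and T, respectively:
Tg = (l1 * (rho - rt) - b1 * (p - pt)) / (b2 * l1 - b1 * l2)
T1 = (l2 * (rho - rt) - b2 * (p - pt)) / (b1 * l2 - b2 * l1)

print(sp.simplify(Tg - T**gamma))  # -> 0
print(sp.simplify(T1 - T))         # -> 0; hence Tg = T1**gamma gives the EoS below
```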
Eliminating the trace between the effective density and pressure leaves us with the EoS

[λ_1 (ρ - ρ̃) - β_1 (p - p̃)]/(β_2 λ_1 - β_1 λ_2) = {[λ_2 (ρ - ρ̃) - β_2 (p - p̃)]/(β_1 λ_2 - β_2 λ_1)}^γ,

where the subscript eff has been dropped. In view of relation (<ref>), we may conclude that in f(R,T)=R+h(T) gravity the interactions of a perfect fluid with the spacetime curvature can be mapped effectively onto the behavior of an exotic fluid obeying the equation of state (<ref>). It is quite interesting that a reduced form of (<ref>) has been introduced in <cit.> and further studied in <cit.>. Such a complicated EoS was introduced to study the cosmological implications of a model with a mixture of two different fluids, i.e., an effective quintessence and an effective phantom. Additionally, one can find other related works investigating exotic fluids which follow various subfamilies of (<ref>). These theories may be called modified equation of state (MEoS) models. This branch of research presumes a mysterious fluid (or fluids) specified by an unusual EoS, with the hope of dealing with some unanswered questions in the cosmological realm. For example, some relevant works in the literature can be addressed as follows: in <cit.> the author employed an EoS of the form p=-ρ+γρ^λ in order to obtain power-law and exponential inflationary solutions. The case with λ=1/2 was analyzed in <cit.> with a focus on the future expansion of the Universe. Emergent Universe models have been studied in <cit.> by taking into account an exotic component with p=Aρ-Bρ^(1/2), and in <cit.> with A=-1. Different cosmological aspects of DE with the simpler EoS p_DE=α(ρ_DE-ρ_0) have been investigated in <cit.>, and the study of cosmological bouncing solutions can be found in <cit.>. Therefore, recasting the f(R,T) field equations into the effective picture may provide a bridge to the cosmological models supported by a MEoS. Via this connection the problem of an exotic fluid turns into the problem of a usual fluid with exotic gravitational interactions. However, contrary to the former, in the latter case we start with a predetermined Lagrangian, i.e., the f(R,T) gravitational Lagrangian. The importance of the effective picture becomes clearer when one considers the energy conditions in f(R,T) gravity. As discussed in <cit.>, in f(R,T) gravity the energy conditions are obtained for the effective pressure and effective energy density. Therefore, it is reasonable to define a fluid as a source with effective pressure and energy density. As we shall see, the bouncing solutions in f(R,T) gravity (using only one perfect fluid in a flat FLRW background) in the framework of our effective fluid approach exhibit nonsingular properties, such that at the finite bounce time t_b none of the cosmological quantities diverges. More exactly, as t→t_b we observe that the scale factor decreases to a non-vanishing minimum value, i.e., a→a_b, H|_{t→t_b}→0, ρ|_{t→t_b}→ρ_b, ρ_(eff)|_{t→t_b}→0 and p_(eff)|_{t→t_b}→p_(eff)b. Therefore, none of the future singularities appears. Also, in all cases we have 𝒲|_{t→t_b}→-∞, where 𝒲 is the effective EoS parameter and the subscript b stands for the value of a quantity at the time at which the bounce occurs. We then observe that if we want to describe nonsingular bouncing solutions in f(R,T)=R+h(T) gravity using a minimally coupled scalar field, a phantom field should be employed. These solutions show a violation of the NEC, in addition to the strong energy condition (SEC).
Such a behavior is predicted in GR for a perfect fluid in an FLRW metric with k=-1,0 <cit.>. The current research is organized as follows. In Sec. <ref> we briefly present the effective fluid picture. Sec. <ref> is devoted to the bouncing solutions with asymptotic de Sitter behavior before and after the bounce. We first analyze models with constant effective pressure in subsection <ref>; these are called models of type A. We then proceed to investigate the corresponding bouncing solutions, the energy conditions, the scalar field representation and finally the stability of this type of solution. In subsection <ref>, models which correspond to two different EoSs of the form p_(eff)=𝒴(ρ_(eff)) are discussed. These models are named B, C, D and E. An example of the matter bounce solution is considered in Sec. <ref>, which is labeled as model E. The connection of the A-E models with MEoS theories is presented through the effective picture. Section <ref> is devoted to the study of scalar-type cosmological perturbations. In Sec. <ref> we give a brief review of singular models in the context of f(R,T) gravity and obtain a class of solutions exhibiting singular behavior. Finally, in Sec. <ref> we summarize our conclusions.

§ REFORMULATION OF F(R,T) FIELD EQUATIONS IN TERMS OF A CONSERVED EFFECTIVE FLUID

In the present section we review the field equations of f(R,T) gravity theories and rewrite them in terms of a conserved effective fluid. This reformulation allows us to better understand the properties of bouncing solutions as well as to classify them. In f(R,T) gravity one usually chooses a mathematically suitable f(R,T) Lagrangian, then obtains the corresponding field equations, and finally tries to solve them. In <cit.> we introduced a novel point of view for dealing with the field equations of f(R,T) gravity. This approach is based on reconsidering the field equations in terms of a conserved effective fluid. One of the benefits of this method is that the form of the f(R,T) function is obtained from a physically justified condition on the effective fluid. Therefore, instead of mathematical arbitrariness in selecting different f(R,T) functions, we have physically meaningful Lagrangian forms. Hence, in this paper we make use of those f(R,T) functions that we obtained in our previous work <cit.>. The action integral in f(R,T) gravity theories is given by <cit.>

S = ∫√(-g) d^4x [1/(2κ^2) f(R,T) + L^(m)],

where R, T≡g^{μν}T_{μν} and L^(m) denote the Ricci scalar, the trace of the energy-momentum tensor (EMT) and the Lagrangian of matter, respectively. The determinant of the metric is denoted by g, κ^2≡8πG is the gravitational coupling constant, and we have set c=1. We assume that the matter Lagrangian L^(m) depends only on the metric components; therefore the following form for the EMT can be defined:

T_μν ≡ -(2/√(-g)) δ[√(-g) L^(m)]/δg^{μν}.

By varying action (<ref>) with respect to the metric components g^{αβ} we obtain the following field equation <cit.>

F(R,T)R_μν - (1/2)f(R,T)g_μν + (g_μν□ - ∇_μ∇_ν)F(R,T) = (κ^2 - ℱ(R,T))T_μν - ℱ(R,T)Θ_μν,

where we have defined

Θ_μν ≡ g^{αβ} δT_{αβ}/δg^{μν} = -2T_μν + g_μν L^(m),

and

ℱ(R,T) ≡ ∂f(R,T)/∂T,    F(R,T) ≡ ∂f(R,T)/∂R.

The differential equations governing the dynamical evolution of the Universe can be obtained from equation (<ref>) by choosing a suitable line element.
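As a small bridging step (our own, anticipating the choice L^(m)=p made for a perfect fluid below), note how the Θ_μν term recombines with the matter term on the right-hand side of (<ref>):

(κ^2 - ℱ)T_μν - ℱΘ_μν = (κ^2 - ℱ)T_μν - ℱ(-2T_μν + p g_μν) = (κ^2 + ℱ)T_μν - ℱ p g_μν.

This is the origin of the (κ^2+ℱ) factors in the modified Friedmann and Raychaudhuri equations that follow.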
For cosmological applications one can use the spatially flat FLRW metric

ds^2 = -dt^2 + a^2(t)(dr^2 + r^2 dΩ^2),

where a(t) is the scale factor of the Universe and dΩ^2 is the standard line element on a unit two-sphere. Applying metric (<ref>) to field equation (<ref>), together with using the definition of Θ_μν for a perfect fluid, leads to

3H^2 F(R,T) + (1/2)(f(R,T) - F(R,T)R) + 3Ḟ(R,T)H = (κ^2 + ℱ(R,T))ρ + ℱ(R,T)p,

as the modified Friedmann equation, and

2F(R,T)Ḣ + F̈(R,T) - Ḟ(R,T)H = -(κ^2 + ℱ(R,T))(ρ + p),

as the modified Raychaudhuri equation, where H indicates the Hubble parameter. Note that we have used L^(m)=p for a perfect fluid in expression (<ref>). Applying the Bianchi identity to the field equation (<ref>) gives the following covariant equation:

(κ^2 + ℱ)∇^μ T_μν + (1/2)ℱ∇_ν T + T_μν∇^μℱ - ∇_ν(pℱ) = 0,

where we have dropped the argument of ℱ(R,T) for brevity. Equation (<ref>) tells us that the conservation of the EMT is not generally respected in f(R,T) gravity theories. There is only a narrow class of solutions for which the conservation of energy is still preserved <cit.>. In this work we consider a specific class of models for which the Lagrangian is written as a minimal coupling between the Ricci curvature scalar and a function of the trace of the EMT, i.e.,

f(R,T) = R + ακ^2 h(T).

Also, we study bouncing solutions when the Universe contains a single perfect fluid with a barotropic EoS given by p=wρ. In the next section we seek non-singular cosmological solutions in f(R,T) gravity with a barotropic perfect fluid. As we shall see, a useful approach for choosing the functionality of h(T) is to rewrite equations (<ref>) and (<ref>) in terms of an effective fluid that respects the conservation of the EMT. For the Lagrangian (<ref>), equations (<ref>) and (<ref>) simplify to

3H^2 = κ^2 {[1 + (1+w)α h'] T/(3w-1) - α h/2},

and

2Ḣ = -κ^2 (w+1)/(3w-1) (1 + α h') T,

and from equation (<ref>) we obtain

[1 + (α/2)(3-w)h' + α(1+w)T h'']Ṫ + 3H(1+w)(1 + α h')T = 0,

where a prime denotes a derivative with respect to the argument. As a matter of fact, one can directly solve equations (<ref>)-(<ref>) to obtain the scale factor or the Hubble parameter once the h(T) function is determined. Alternatively, one can define the pressure and energy density profiles of an effective fluid, along with imposing the energy conditions, in order to obtain the functionality of h(T). We proceed in this way and rewrite equation (<ref>) as

3H^2 = κ^2 ρ_(eff)(T),

where

ρ_(eff)(T) ≡ [1 + (1+w)α h'] T/(3w-1) - α h/2.

With this definition the acceleration of the expansion of the Universe is obtained as

ä/a = -(κ^2/6)(ρ_(eff)(T) + 3p_(eff)(T)),

where we have defined the effective pressure as

p_(eff)(T) ≡ w T/(3w-1) + α h/2.

Therefore, the original field equations of f(R,T) gravity can be recast into the usual Friedmann form with an effective fluid. This fluid is characterized by an effective energy density and an effective pressure, which in turn are determined in terms of the trace of the EMT. Therefore, once a property or relation for the energy density and pressure components is established, a first order differential equation for the function h(T) is reached. Solving this differential equation leads to an h(T) function that conveys the specified property. Hence, in this way we can obtain a minimal f(R,T) model based on conditions on the energy density and pressure profiles of the effective fluid, instead of choosing the functionality of f(R,T) on ad hoc mathematical grounds.
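In fact, the trace equation above is nothing but the conservation law of the effective fluid just defined, a point made explicitly at the start of the next section. This can be confirmed symbolically; a minimal sketch of ours, writing ρ̇_(eff) = (dρ_(eff)/dT)Ṫ with Ṫ kept as an independent symbol:

```python
import sympy as sp

T, Td, w, alpha, H = sp.symbols('T Tdot w alpha H')
h = sp.Function('h')

hp  = sp.diff(h(T), T)       # h'(T)
hpp = sp.diff(h(T), T, 2)    # h''(T)

rho_eff = (1 + (1 + w) * alpha * hp) * T / (3 * w - 1) - alpha * h(T) / 2
p_eff   = w * T / (3 * w - 1) + alpha * h(T) / 2

# Effective conservation law, with d(rho_eff)/dt = (d rho_eff/dT) * Tdot:
cons = sp.diff(rho_eff, T) * Td + 3 * H * (rho_eff + p_eff)

# Trace (non-conservation) equation of the minimal model:
trace_eq = ((1 + alpha * (3 - w) * hp / 2 + alpha * (1 + w) * T * hpp) * Td
            + 3 * H * (1 + w) * (1 + alpha * hp) * T)

print(sp.simplify(cons - trace_eq / (3 * w - 1)))   # -> 0
```

The output 0 shows that ρ̇_(eff) + 3H(ρ_(eff) + p_(eff)) is proportional to the left-hand side of the trace equation, so the effective fluid is conserved exactly when that equation holds.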
Indeed, equation (<ref>) turns into the usual conservation equation in terms of ρ_(eff) and p_(eff), i.e.,

ρ̇_(eff) + 3H(ρ_(eff) + p_(eff)) = 0,

where the arguments are dropped for the sake of simplicity. Consequently, an effective EoS parameter can be defined for this effective fluid as

𝒲 ≡ p_(eff)/ρ_(eff) = -1 + 2(1+w)(1+α h')T / {2[1+(1+w)α h']T - (3w-1)α h}.

§ ASYMPTOTIC DE SITTER BOUNCING SOLUTIONS IN F(R,T) GRAVITY

In this section we study different bouncing cosmological solutions of f(R,T) gravity. We extract those solutions which correspond to certain properties of the effective fluid. Such an approach may help us to understand how these solutions can emerge in f(R,T) gravity. Furthermore, more bouncing solutions could be obtained; however, to show that f(R,T) gravity theories are capable of describing a non-singular pre-Big-Bang era, we restrict ourselves to only a few examples. We set κ^2=1 in the rest of the work.

§.§ Type A models: solutions which correspond to a constant effective pressure, p_(eff)(T)=𝒫

Let us begin with bouncing solutions which are obtained by assuming an effective fluid with constant pressure. We show that this type of model leads to a de Sitter era at late times <cit.>. From definition (<ref>), for a constant effective pressure we obtain

h_A(T) = (2/α)[𝒫 + w T/(1-3w)].

Substituting the above function into the modified conservation equation (<ref>), together with using T=(3w-1)ρ_A for a perfect fluid, we get

ρ_A = ρ_0 a_A^{-3},

where ρ_0 is an integration constant. Substituting solutions (<ref>) and (<ref>) back into the Friedmann equation (<ref>) leaves us with the following differential equation for the scale factor:

3(ȧ_A(t)/a_A(t))^2 - ρ_0(w^2-1)/[(3w-1) a_A(t)^3] + 𝒫 = 0.

For 𝒫=0, a particular solution of the above equation is a_A(t) ∝ t^{2/3}. Moreover, in the case in which the middle term is absent, the solution represents a de Sitter phase for 𝒫<0. From the Friedmann equation (<ref>) we observe that the effective pressure 𝒫 plays the role of a cosmological constant. Friedmann equations similar to equation (<ref>) have been found in the literature. For example, in <cit.> the authors worked on a perfect fluid, dubbed DE with a linear EoS, in a flat FLRW background. They used p_DE=α(ρ_DE-ρ_0)≡p_α+p_Λ, where α and ρ_0 are constants. The DE pressure is thus decomposed into a dynamical part, corresponding to p_α, and a constant part, corresponding to p_Λ. These assumptions lead to an equation that reduces to (<ref>) when only p_Λ is considered. In the upcoming subsections, when we study model C, we find that in the context of f(R,T) gravity one can obtain the same results as those given in <cit.>. Equation (<ref>) admits three different general solutions, of which we will consider only one. A general solution of equation (<ref>) is obtained as

a_A(t) = (1/𝔄)[cosh(√(-𝒫/3) t) - sinh(√(-𝒫/3) t)] × {𝔅 + (w^2-1)𝒫ρ_0/[2(3w-1)] [sinh(√(-3𝒫) t) + cosh(√(-3𝒫) t) - 1]}^{2/3},

where

𝔄 = {𝒫^2 [2𝔅 - (w^2-1)𝒫ρ_0/(3w-1)]}^{1/3},
𝔅 = a_0^3 𝒫^2 - √(a_0^3 𝒫^3 [a_0^3 𝒫 + ρ_0(1-w^2)/(3w-1)]).

Note that once we set the integration constant in equation (<ref>) such that a_A(0)=0, we get 𝔅=0; furthermore, if we consider the case w=0, we obtain 𝔄=𝒫ρ_0^{1/3}. Thus solution (<ref>) reduces to the familiar form a(t)=(-ρ_0/𝒫)^{1/3} sinh^{2/3}((√(-3𝒫)/2) t). However, the integration constant in solution (<ref>) is fixed such that we have a(t=0)=a_0.
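As a numerical illustration (our own sketch; the parameter values are arbitrary, and κ^2=1 as above), one can integrate the effective-picture equations of the type-A model directly and watch the scale factor bounce at a non-vanishing minimum:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Type-A model: 3 H^2 = (w^2 - 1) rho0 / ((3 w - 1) a^3) - P, with P < 0 and 1/3 < w < 1.
w, P, rho0 = 0.6, -1.0, 1.0

def H2(a):
    return ((w**2 - 1) * rho0 / ((3 * w - 1) * a**3) - P) / 3

a_b = ((w**2 - 1) * rho0 / ((3 * w - 1) * P)) ** (1 / 3)  # bounce radius: H^2(a_b) = 0

def rhs(t, y):
    a, H = y
    # Effective Raychaudhuri equation: Hdot = -(rho_eff + p_eff)/2
    Hdot = -0.5 * (w**2 - 1) * rho0 / ((3 * w - 1) * a**3)
    return [a * H, Hdot]

# Start in the contracting phase above the bounce radius and integrate through it.
a0 = 2 * a_b
sol = solve_ivp(rhs, (0, 10), [a0, -np.sqrt(H2(a0))], rtol=1e-10, atol=1e-12)

i_b = np.argmin(sol.y[0])
print("a_min / a_b =", sol.y[0][i_b] / a_b)  # -> ~1: non-vanishing minimum of the scale factor
print("H at bounce =", sol.y[1][i_b])        # -> ~0
```

Since (w^2-1)/(3w-1) < 0 for 1/3 < w < 1, the right-hand side of the Raychaudhuri equation is always positive here, so H grows monotonically from negative to positive values through the bounce.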
The most important feature of solution (<ref>) is the appearance of the hyperbolic cosine function, which allows us to have non-singular behavior for 𝒫<0. This solution describes the de-Sitter expansion of a Universe which is initially dominated by dust <cit.>. In Figure <ref> the thick black curve shows the scale factor solution (<ref>). This solution behaves exponentially in the far past and far future of the bounce and reaches a nonzero minimum value at the bounce time, given by a_(min)A=𝔄^-1√(2(1-3w)𝔅/[(1-w^2)𝒫ρ_0]-1)[2(3w-1)𝔅^2/(2(3w-1)𝔅-(w^2-1)𝒫ρ_0)-(w^2-1)𝒫ρ_0/(3w-1)]^2/3. The Hubble parameter is obtained as H_A(t)=√(-𝒫/3)[2(1-3w)𝔅+(w^2-1)𝒫ρ_0(sinh(√(-3𝒫)t)+cosh(√(-3𝒫)t)+1)]/[2(3w-1)𝔅+(w^2-1)𝒫ρ_0(sinh(√(-3𝒫)t)+cosh(√(-3𝒫)t)-1)]. Figure <ref> shows that the Hubble parameter tends to constant values before and after the bounce and vanishes at the bounce. On the other hand, the Hubble parameter and its time derivatives diverge at an imaginary time t_s=iπ/√(-3𝒫)-t_b, where t_b is the (real) time at which the bounce occurs. Therefore, we observe that the Hubble parameter behaves in a well-defined way (without any future singularity) and respects the bounce conditions. We can also obtain the effective energy density and the effective EoS parameter. To this end, we substitute (<ref>) into equations (<ref>) and (<ref>), respectively. Thus we obtain ρ_( eff)A(a)=(w^2-1)ρ_0/[(3w-1)a_A^3]-𝒫, 𝒲_A=(1-3w)𝒫a_A^3/[(3w-1)𝒫a_A^3-(w^2-1)ρ_0]. As can be seen from Figure <ref>, the effective density decreases from a constant value and tends to zero near the bounce. From equation (<ref>) we see that the vanishing of the Hubble parameter at the bounce demands that the effective density become zero; for the same reason, the effective EoS diverges at the bounce. Such behaviors are common to all bouncing models that we shall present in the framework of minimal f( R, T) gravity. In contrast, the matter energy density itself increases from small values to a maximum value near the bounce. Based on the exchange of energy between the gravitational field and the matter constituents (the mechanism of which is explained in <cit.>), one may explain the bouncing behavior as follows: the interaction of the real fluid with curvature leads to a transfer of energy from the gravitational field to matter before the bounce, where the spacetime curvature is dominant in comparison to the matter energy density. Such a transmutation, which is triggered far in the past of the bounce, gives rise to an increase in the energy density as the bounce event is approached. At the bounce time the energy density of matter reaches a maximum value, after which the process of transmutation is reversed until the density falls back to zero (the post-bounce regime). Note that the effective energy density remains constant in the de-Sitter era. However, additional physics is needed to explain the process of matter production from the curvature sector, which disturbs the stability of the de-Sitter era and triggers the bounce. The information in Figure <ref> helps us discuss the energy conditions. In GR the well-known energy conditions are the NEC, WEC, SEC and the dominant energy condition (DEC).
In a modified gravity theory with defined effective energy density and pressure, these conditions can be written as <cit.> WEC⇔ρ_( eff)≥0, ρ_(eff)+p_(eff)≥0; NEC⇔ρ_(eff)+p_(eff)≥0; SEC⇔ρ_(eff)+3p_(eff)≥0, ρ_(eff)+p_(eff)≥0; DEC⇔ρ_(eff)≥0, ρ_(eff)± p_(eff)≥0. For model A, we obtain the following results: ρ_(eff)+p_(eff)=(w^2-1)ρ_0/[(3w-1)a_A^3], ρ_(eff)+3p_(eff)=(w^2-1)ρ_0/[(3w-1)a_A^3]+2𝒫. It is obvious that the fulfillment of the NEC, and thus the WEC, requires (w^2-1)/(3w-1)≥0, which gives -1≤ w≤1/3. From (<ref>) we see that the effective energy density tends to -𝒫 at late times and vanishes at the time of the bounce. Therefore, we obtain 3𝒫≤ρ_(eff)+3p_(eff)≤2𝒫 (recall that 𝒫<0). Thus, since only negative values are valid for the effective pressure 𝒫, the SEC is always violated. However, the validity of the NEC depends upon the value of w. We plot the diagrams for w=0.6 in Figure <ref>. This figure shows that the NEC and SEC are violated in this case. Our studies show that the bouncing behavior is achieved from solution (<ref>) for w>1/3. Note that, as we have mentioned before, solution (<ref>) is only one of the three possible solutions of equation (<ref>). Investigating the other solutions may validate the cases with w<1/3 from the energy-conditions point of view.

One may be tempted to reinterpret the source of matter as that of a scalar field. Such a representation is also used in similar works <cit.>. In the case of constant effective pressure, the particular solution (<ref>) (which is valid for 1/3<w<1) corresponds to 𝒲_A<-1, as can be seen from relation (<ref>); see also the long-dashed blue curve in Figure <ref>. Therefore, if we want to translate the mutual interaction of the perfect fluid and curvature as the behavior of an effective scalar field, we should employ a phantom scalar field. We can thus define ℒ_Ph=-1/2ϕ_(eff);μϕ_(eff)^;μ-V(ϕ_(eff)), ρ_(eff)=-1/2ϕ̇_(eff)^2+V(ϕ_(eff)), p_(eff)=-1/2ϕ̇_(eff)^2-V(ϕ_(eff)), where the subscript Ph denotes the phantom field and a semicolon indicates covariant differentiation. We then get ϕ̇_(eff)A^2=-(ρ_(eff)+p_(eff))=(1-w^2)ρ_0/[(3w-1)a_A^3], V(ϕ_(eff)A)=1/2(ρ_(eff)-p_(eff))=(w^2-1)ρ_0/[2(3w-1)a_A^3]-𝒫. A straightforward calculation reveals that ϕ_(eff)A=-4𝔄/𝒫√((1-3w)(w-1)𝔄/(3ℭ))arctan[(𝔅(3w-1)(tanh(√(-3𝒫/16)t)-1)+(w^2-1)𝒫ρ_0)/√(-(w+1)𝒫ρ_0ℭ)], where we have defined ℭ=(w-1)[2(1-3w)𝔅+(w^2-1)𝒫ρ_0]. We can also obtain ϕ_(eff)A in terms of the scale factor by solving solution (<ref>) for the time t, which gives t=1/√(-3𝒫)log[(2a_A^3𝔄^3(1-3w)^2(1-√(ℭ(w+1)𝒫ρ_0/(a_A^3𝔄^3(1-3w)^2)+1))+(w+1)𝒫ρ_0ℭ)/((w^2-1)^2𝒫^2ρ_0^2)]. We have plotted ϕ_(eff)A and V_(eff)A in Figure <ref> for the same parameters as in Figure <ref>.

Another important issue that needs to be treated is the stability of the solutions of equation (<ref>). Substituting solution (<ref>) in (<ref>) and also (<ref>) in (<ref>), and taking H and ρ as dynamical variables, we arrive at the following dynamical system: Ḣ=β/2ρ, ρ̇=-3Hρ, 3H^2=-βρ-𝒫, where β=(1-w^2)/(3w-1) and we have dropped the subscript A. Note that the validity of solution (<ref>) requires β>0. The system (<ref>)-(<ref>) has two critical points far from the bounce, P^(±)=(±√(-𝒫/3),0), which correspond to the eigenvalues λ^(±)=(∓√(-3𝒫),0). We see that the stability properties of the solutions are independent of w.
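A minimal numerical sketch of this system (our own illustration with scipy; the parameter values in the comments are chosen purely for demonstration) shows a trajectory that starts in the contracting branch, passes through the bounce at ρ=-𝒫/β, and settles onto the expanding de-Sitter state:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Model A reduced system (kappa^2 = 1): Hdot = (beta/2) rho, rhodot = -3 H rho,
# subject to the constraint 3 H^2 = -beta rho - P.  Illustrative values (our choice):
P = -1.0                       # constant effective pressure, must be negative
w = 0.6                        # barotropic EoS parameter, w > 1/3
beta = (1 - w**2) / (3*w - 1)

def rhs(t, y):
    H, rho = y
    return [0.5*beta*rho, -3.0*H*rho]

# start in the contracting branch (H < 0) with a small matter density
rho0 = 0.1
H0 = -np.sqrt((-beta*rho0 - P) / 3.0)
sol = solve_ivp(rhs, (0.0, 20.0), [H0, rho0], rtol=1e-10, atol=1e-12)

H, rho = sol.y
print("H runs from %.3f to %.3f" % (H[0], H[-1]))    # -> approaches +sqrt(-P/3)
print("max rho = %.3f (bounce at rho = -P/beta = %.3f)" % (rho.max(), -P/beta))
```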
Due to the appearance of the zero eigenvalues, one cannot immediately decide on the stability properties of these fixed points; however, by inspecting equations (<ref>) and (<ref>) it is possible to figure out their nature. For the fixed point P^(-), equation (<ref>) becomes ρ̇^(-)=√(-3𝒫)ρ^(-). Therefore, in the vicinity of P^(-) within the phase space, indicating the values of ρ and H on the vertical and horizontal axes, respectively, we have Ḣ>0 & ρ̇>0 for all points with ρ>0, and Ḣ<0 & ρ̇<0 for points with ρ<0. These show that when ρ→ρ+δρ for t→ t+δ t, the solution at P^(-) does not stay stationary, and hence it is a repulsive fixed point. On the other hand, for P^(+) we have ρ̇^(+)=-√(-3𝒫)ρ^(+). Therefore, in this case we have Ḣ>0 & ρ̇<0 for all points with ρ>0, and Ḣ<0 & ρ̇>0 for points with ρ<0 in the vicinity of P^(+); hence it is a stable fixed point. The bounce corresponds to the point P^(b)=(0,-𝒫/β), for which we have Ḣ=-𝒫/2>0. Therefore, at this point the tangent vectors on the phase-space trajectories point to the right. We plot a typical trajectory in the phase space in Figure <ref>.

§.§ Solutions which correspond to a general effective EoS, p_(eff)=𝒴(ρ_(eff))

This class of models can be constructed by imposing a particular condition on the effective profiles. This approach can be viewed as a sort of classification of f( R, T) gravity models based on the properties of the effective quantities. Generally, one can obtain a class of h( T) functions for a given property which is specified by an effective EoS. In the following sections we consider two subclasses based on conditions on the effective densities. We find that each class of h( T) solutions that exhibits bouncing behavior corresponds to an effective EoS which has already been introduced or obtained for an exotic fluid in the literature <cit.>.

§.§.§ Type B Models: Solutions which follow the relation dρ_(eff)/d T=[n/((1+w) T)](ρ_(eff)+p_(eff))

Applying this condition, together with the definitions of the effective energy density and pressure, we arrive at a differential equation for h( T) which can be solved as follows <cit.>: h_B( T)=2Γ_B(w+1)/(2n+3w-1) T^(2n+3w-1)/(2(w+1))-2(n-1)/[α(2n+w-3)] T+Λ_B, where Γ_B and Λ_B are integration constants and n is an arbitrary constant. Substituting the relation dρ_eff/d T=[n/((1+w) T)](ρ_eff+p_eff) into equation (<ref>) gives ρ_B=ρ_0a^-3(w+1)/n. Next we proceed to find a non-singular bouncing solution by solving the modified Friedmann equation (<ref>) for solutions (<ref>) and (<ref>). We first try to obtain solutions of the form a_ B(t)=ℛ(cosh[(t-t_0)/ℛ]-𝒮), where ℛ and 𝒮 are constants. This type of solution has been discussed in <cit.> under the assumption of an MEoS and has the following form of the Friedmann equation: 3(a'_B(t)/a_B(t))^2-3(𝒮^2-1)/a_B(t)^2-6𝒮/[ℛ a_B(t)]-3/ℛ^2=0. Applying (<ref>) and (<ref>) in equation (<ref>) gives the Friedmann equation for arbitrary constants w and n. We can check that there are only two cases which correspond to equation (<ref>), and thus to the scale factor a_B as the solution: w=-1/5, n=12/5 and w=-1/5, n=6/5. However, the latter leads to physics similar to the former.
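As a quick symbolic check (our own sympy sketch, using R and S for ℛ and 𝒮), the hyperbolic ansatz indeed satisfies this form of the Friedmann equation identically in t:

```python
import sympy as sp

t = sp.symbols('t')
R, S = sp.symbols('R S', positive=True)

# Type B ansatz: a(t) = R * (cosh(t/R) - S)
a = R*(sp.cosh(t/R) - S)
H = sp.diff(a, t)/a

# Friedmann equation for model B, with its three fluid-like terms
expr = 3*H**2 - 3*(S**2 - 1)/a**2 - 6*S/(R*a) - 3/R**2
print(sp.simplify(expr))   # -> 0
```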
The physical quantities constructed out of the bouncing solution for w=-1/5, n=12/5 are given as follows: a_B(t)=ℛ[cosh(t/ℛ)-𝒮], ℛ=√(-6/(αΛ_B)), 𝒮=-(3/10)ℛρ_0, H_B(t)=sinh(t/ℛ)/{ℛ[cosh(t/ℛ)-𝒮]}, ρ_(eff)B=3(a_B+ℛ(𝒮-1))(a_B+ℛ(𝒮+1))/(a_B^2ℛ^2), p_(eff)B=(ℛ[ℛ-𝒮(4a_B+𝒮ℛ)]-3a_B^2)/(a_B^2ℛ^2), 𝒲_B=-1/3[a_B/(a_B+ℛ(𝒮-1))+a_B/(a_B+ℛ(𝒮+1))+1]. The behavior of the above quantities is depicted in Figure <ref>. In order that the ansatz a_ B(t) satisfies equation (<ref>), we must have Γ_B=1/(32α^2)(27/Λ_B+50α/ρ_0^2). Solution (<ref>) shows that the Universe shrinks from an infinite size to a minimum radius equal to ℛ(1-𝒮). The size of the Universe at the time of the bounce is controlled by the constant Λ_B as well as the coupling constant α. As can be seen in equation (<ref>) and the subsequent solutions, these two constants appear only as a product. This means that if either of them becomes zero, the bouncing solution disappears. Also, expression (<ref>) together with solution (<ref>) indicates that the bouncing solution disappears in model B when α=0. The expressions for the effective energy density and pressure, i.e. (<ref>) and (<ref>), can be rewritten as sums of three densities and pressures, given by ρ_(eff)B=3/ℛ^2+6𝒮/(a_Bℛ)+3(𝒮^2-1)/a_B^2=ρ_1+ρ_2+ρ_3, p_(eff)B=-3/ℛ^2-4𝒮/(a_Bℛ)+(1-𝒮^2)/a_B^2=p_1+p_2+p_3. From these expressions, we could suppose that our effective fluid consists of a combination of three perfect fluids with EoSs w_i=p_i/ρ_i. In this view, there is a DE component which corresponds to a cosmological constant with w_1=-1, a quintessence with w_2=-2/3, and a fluid which drives an expanding Universe with zero acceleration, with w_3=-1/3. Such a description has been presented in <cit.>. In the framework of f( R, T) gravity this decomposition may be translated as follows: the effects of the unusual interaction of matter (here a perfect fluid with w=-1/5) with curvature can produce the same behavior as GR with three different fluids. Eliminating the scale factor from solutions (<ref>) and (<ref>) leads to an effective EoS of the form p_(eff)B=-ρ_(eff)B/3+2𝒮/[3ℛ^2(𝒮^2-1)][3ℛ^2(𝒮^2-1)ρ_(eff)B+9]^1/2+2/[ℛ^2(𝒮^2-1)]. In cosmological applications, a general type of EoS, p=βρ+γ f(ρ), is ascribed to some exotic or dark fluid which can determine the evolution of the Universe. Different choices for the function f(ρ) are studied in the literature. Such an equation has already been discussed in <cit.>, where the authors considered the cosmological consequences of an EoS of the form p=-ρ-ρ^(q+1). Substituting expressions (<ref>) and (<ref>) in the NEC and SEC conditions (<ref>) and (<ref>) leads to the following conditions: NEC_(eff)B=2(𝒮^2-1)/a_B^2+2𝒮/(a_Bℛ)≥0, SEC_(eff)B=-6(1/ℛ^2+𝒮/(a_Bℛ))≥0. However, expressions (<ref>) and (<ref>) are never satisfied; far from the bounce, where a→∞, the second term of (<ref>), which always has a negative sign, dominates, and in the limit a→ a_b=ℛ(1-𝒮) the expression for the NEC becomes -2/[ℛ^2(1-𝒮)], which is also negative because 𝒮<0. The same line of reasoning can be used to prove the violation of the SEC. Thus, in model B the NEC and SEC are never satisfied. Note that the NEC violation is significant only near the bounce, because expression (<ref>) tends to zero as the scale factor grows large. Considering the above discussion of the energy conditions, we find that, again, a phantom scalar field can be used to model the behavior of the bouncing solution.
In the case of model B we have ϕ̇_(eff)B^2=-2(a_B𝒮+(𝒮^2-1)ℛ)/(a_B^2ℛ), V_(eff)B=2(𝒮^2-1)/a_B^2+5𝒮/(a_Bℛ)+3/ℛ^2. Our studies show that the behavior of the above solutions is similar to that of model A; a typical example is demonstrated in Figure <ref>. By choosing H_B and ρ_B as dynamical variables one can rewrite the Friedmann equation as follows: Ḣ=(1-𝒮^2)/ρ_0^2ρ^2+3/10ρ, ρ̇=-Hρ, H^2=(𝒮^2-1)/ρ_0^2ρ^2-3/5ρ+1/ℛ^2, where we have dropped the subscript B. This system admits two critical points, P_B^(±)=(±1/ℛ,0), far from the bounce. In these situations we have ρ→0; hence the dynamics of equation (<ref>) is determined by the second term. Also, near the bounce point, specified by H_b=0, ρ_b=3ρ_0^2/[10ℛ(ℛ+1)], we always have Ḣ>0. Therefore, the stability properties of the system are similar to those of model A.

The Friedmann equation (<ref>) for the general function (<ref>) takes the form 3H^2=2α n(w+1)Γ_B/[(3w-1)(2n+3w-1)] T^(n-2)/(w+1)+3/2+n(w-1)/[(3w-1)(2n+w-3)] T-αΛ_B/2. Seeking a general solution demands substituting solution (<ref>) into (<ref>) (using the fact that T=(3w-1)ρ) and solving the resulting differential equation to find the scale factor. However, the resulting equation cannot be solved analytically for arbitrary values of w and n. Nevertheless, for particular values of these parameters a non-singular solution can be obtained, as given in expressions (<ref>)-(<ref>), and we are still able to find more general solutions. As a third type of solution, named C, we work on a bouncing solution for which the governing differential equation is given by 3(a'_C(t)/a_C(t))^2-αΓ_C nρ_0^(n+1)/2/(n+1)a_C(t)^-3(n+1)/n+αΛ_C/2=0, where we have set w=1. Choosing a relation between Γ_C and ρ_0, a_0, n and Λ_C as follows, Γ_C=(n+1)Λ_C(√(2ρ_0)a_0^-3/n)^-(n+1)/n(Q+1), equation (<ref>) leads to the following solution for the scale factor: a_C(t)=a_0[(Q+1)cosh(√(3αΛ_C/2)(n+1)/(2n)t±cosh^-1(Q)/2)]^2n/(3n+3), where Q is an arbitrary constant, and for the Hubble parameter we have H_C(t)=-√(αΛ_C/6)tanh[1/4(√(6αΛ_C)(n+1)/n t±2cosh^-1(Q))]. By substituting (<ref>) for w=1 into the definitions (<ref>), (<ref>) and (<ref>), along with using relation (<ref>), we get the effective quantities as follows[We note that, as we are concerned with the solutions that respect the conservation equation (<ref>), expression (<ref>) is used for this class of solutions.]: ρ_(eff)C(a)=αΛ_C[1/2-(a/a_0)^-3(n+1)/n/(Q+1)], p_(eff)C(a)=-αΛ_C[(a/a_0)^-3(n+1)/n/(n(Q+1))+1/2], 𝒲_C(a)=[2+n(Q+1)(a/a_0)^3(n+1)/n]/{n[2-(Q+1)(a/a_0)^3(n+1)/n]}. We have plotted the cosmological parameters (<ref>)-(<ref>) in Figure <ref>. Unlike model B, here we see that matter creation at the time of the bounce leads to a decrease in the effective pressure. Eliminating the scale factor between expressions (<ref>) and (<ref>) gives the effective EoS, which can be viewed as the characteristic equation of model C. We therefore get p_(eff)C=1/n(ρ_(eff)C-αΛ_C/2(1+n)). Some of the cosmological properties of model C have been investigated in <cit.>. The authors considered a model of DE for which an EoS of the form p_DE=γ(ρ_DE-θ)[In the original paper the authors used α and ρ_0 instead of γ and θ, respectively. We changed this notation in the present work to prevent ambiguity.] is assumed for a perfect fluid. We therefore observe that if we apply ρ_(eff)→3ρ_DE in equation (<ref>), we obtain the same Friedmann equation as the one given in <cit.>.
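The characteristic EoS of model C follows directly from the two effective profiles; the sympy sketch below (our own check, with x standing for (a/a_0)^(-3(n+1)/n)) verifies the relation:

```python
import sympy as sp

x, n, Q, alpha, Lam = sp.symbols('x n Q alpha Lambda', positive=True)
# x stands for (a/a0)**(-3*(n+1)/n)

rho_eff = alpha*Lam*(sp.Rational(1, 2) - x/(Q + 1))
p_eff = -alpha*Lam*(x/(n*(Q + 1)) + sp.Rational(1, 2))

# Type C characteristic EoS: p_eff = (rho_eff - alpha*Lambda*(1+n)/2) / n
print(sp.simplify(p_eff - (rho_eff - alpha*Lam*(1 + n)/2)/n))   # -> 0
```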
Also, by redefining the parameters as n→1/γ and α(1+n)Λ_C/2→θ in (<ref>), we obtain the corresponding solution for the scale factor. These considerations show that the problem of a dark fluid with an unusual EoS (which may not clearly correspond to a definite Lagrangian) can be explained in the framework of f( R, T) gravity.

§.§.§ Type D models: Solutions which are consistent with the relation dρ_(eff)/d T=m

Applying this condition to the definition of the effective energy density, i.e., definition (<ref>), leads to a second-order differential equation for the h( T) function; the solution then reads h_D( T)=2Γ_D(w+1)/(3w-1) T^(3w-1)/(2(w+1))+2[m(1-3w)+1]/[α(w-3)] T+Λ_D, where m is an arbitrary constant and Γ_D, Λ_D are integration constants. Substituting (<ref>) into the conservation equation (<ref>), we obtain a first-order differential equation for the matter energy density in terms of the scale factor. However, since this equation cannot be solved for an exact general solution for arbitrary values of w and m, we proceed with particular cases. Note that further investigation may give other exact solutions, or numerical simulations can be utilized to study them. At present, we work on the particular case w=1. The conservation equation (<ref>) then yields ρ_D(a)=(ζ-αΓ_Da_D^3)^2/(8a_D^6m^2), where we have again used T=(3w-1)ρ. In the limit a→∞, solution (<ref>) tends to α^2Γ_D^2/(8m^2). As can be seen, in model D the matter energy density evolves from a non-vanishing initial value far from the bounce event. This property cannot be seen in the previous models. To obtain the solution for the scale factor, we substitute (<ref>) in the Friedmann equation (<ref>) for w=1, which gives a_D(t)=[Γ_Dζ/ω-Δ/ωcosh(√(3αω/(4m))t)-Υsinh(√(3αω/(4m))t)]^1/3, where ζ=αΓ_D±2m√(2ρ_0), ω=αΓ_D^2-2mΛ_D, Υ=√(a_0^6+ζ(ζ-2α a_0^3Γ_D)/(αω)), Δ=Γ_Dζ-ω a_0^3. From solution (<ref>), we can obtain the time at which the bounce occurs, as well as the radius of the Universe at the moment of the bounce: t^(b)_D=(3αω/m)^-1/2log[(Δ-ωΥ)/(Δ+ωΥ)], and a^(b)_D=[(Γ_Dζ+√(Δ^2-ω^2Υ^2))/ω]^1/3. Differentiating solution (<ref>) with respect to time gives the Hubble parameter as follows: H_D=√(αω/(12m))[ωΥcosh(√(3αω/(4m))t)+Δsinh(√(3αω/(4m))t)]/[Δcosh(√(3αω/(4m))t)+ωΥsinh(√(3αω/(4m))t)-Γ_Dζ], and from the definitions of the effective quantities we find ρ_(eff)D=(ζ-αΓ_Da_D^3)^2/(4ma_D^6)-αΛ_D/2, p_(eff)D=1/4[2αΓ_D√((ζ-αΓ_Da_D^3)^2/(m^2a_D^6))+(ζ-αΓ_Da_D^3)^2/(ma_D^6)+2αΛ_D], 𝒲_D=[α(2Λ_Dm-αΓ_D^2)a_D^6+ζ^2]/[(ζ-a_D^3αΓ_D)^2-2mαΛ_Da_D^6]. By inspection of expressions (<ref>) and (<ref>), we observe that model D corresponds to the following EoS: p_(eff)D=ρ_(eff)D-αΓ_D/√(m)√(ρ_(eff)D+αΛ_D/2)+αΛ_D. In view of what we discussed in the paragraph right after equation (<ref>), model D corresponds to a generalized EoS with β=1. This may be interesting, since most bouncing models have been obtained for β=-1. Note that the behavior of the obtained cosmological quantities for the two values of ζ given in (<ref>) is similar to that of model C. By choosing the model parameters so that the term including sinh in (<ref>) disappears, we arrive at a different model. In this case, the behavior of the cosmological quantities is the same as in the case with Υ≠0; however, the evolution of the matter energy density and the effective pressure is different. We typically plot these quantities for both situations in Figure <ref>. The thick curves belong to the solution (<ref>) and the thin ones show the solution with Υ=0.
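The defining condition of this class can again be verified symbolically; the sketch below (our own sympy check for w=1, where the h(T) ansatz is the w=1 reduction of the general solution above) confirms that dρ_(eff)/dT is the constant m:

```python
import sympy as sp

T, alpha, m, Gamma, Lam = sp.symbols('T alpha m Gamma Lambda', positive=True)

# Type D ansatz for w = 1 (stiff fluid): h(T) = 2 Gamma sqrt(T) + (2m-1) T / alpha + Lambda
h = 2*Gamma*sp.sqrt(T) + (2*m - 1)*T/alpha + Lam
hp = sp.diff(h, T)

# effective energy density for w = 1: rho_eff = [1 + 2 alpha h'] T / 2 - alpha h / 2
rho_eff = (1 + 2*alpha*hp)*T/2 - alpha*h/2
print(sp.simplify(sp.diff(rho_eff, T)))   # -> m, i.e. d(rho_eff)/dT is constant
```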
The energy-condition considerations show that models of type D lead to the violation of the NEC near the bounce event. Also, the phase space and the scalar-field representation of this model are the same as those of model A.

§ MATTER BOUNCE SOLUTIONS IN F( R, T) GRAVITY

In this section, we deal with the well-known matter bounce scenario, which can be established through the models that obey (<ref>) and (<ref>). We specify this class of solutions as type E models. A branch of the matter bounce scenarios has been discussed with the characteristic scale factor a_E(t)=(𝐐 t^2+𝐙)^𝐌, where 𝐐, 𝐙 and 𝐌 are positive constants. Note that 𝐐=uρ_max with u=2/3, 3/4, 4/3, together with 𝐙=1 and 𝐌=1/3 <cit.> or 𝐌=1/4 <cit.>, has been used in the literature. The scale factor (<ref>) gives the following expression for the Hubble parameter: H_E(t)=2𝐌𝐐t/(𝐐t^2+𝐙). Note that in model E the Hubble parameter and its time derivatives never diverge, since all of them are proportional to negative powers of 𝐐t^2+𝐙. By eliminating the time parameter between (<ref>) and (<ref>), we get the Hubble parameter in terms of the scale factor, and thus the Friedmann equation can be obtained as 3H_E^2(a)=12𝐐𝐌^2[a_E^-1/𝐌-𝐙a_E^-2/𝐌]. On the other hand, substituting (<ref>) into (<ref>) together with using (<ref>), the Friedmann equation in terms of the scale factor reads 3H^2(a)=2αΓ_E(w+1)nρ_0^(n-2)/(w+1)+3/2(3w-1)^(n-2)/(w+1)+1/2/(2n+3w-1)a^-3(2n+3w-1)/(2n)+(w-1)nρ_0/(2n+w-3)a^-3(w+1)/n, where we have used the subscript E for the integration constant Γ_E and we have set Λ_E=0. Comparing equations (<ref>) and (<ref>) we find two different types of solutions, only one of which can be accepted. A consistent solution is valid for n=12𝐌/(6𝐌-1), w=4/(6𝐌-1)-1, 𝐙=16Γ_Eαρ_0(2-3𝐌)/[12𝐌(3𝐌-2)+3], 𝐐=3(1-2𝐌)ρ_0/[4𝐌(6𝐌-1)]. Eliminating ρ_0 between 𝐙 and 𝐐 leads to 𝐐=9(2𝐌-1)^2𝐙/[64αΓ_E𝐌(3𝐌-2)]. Valid solutions, for which {𝐐,𝐙,𝐌}>0 holds, are given by 1/2<𝐌<2/3, 1/3<w<1, αΓ_E<0, and 𝐌>2/3, -1<w<1/3, αΓ_E>0. As can be seen, in the context of f( R, T) gravity a matter bounce solution can be found for every value of w (except for w=1/3, 1). The value w=-1 can be accessed for large values of 𝐌 (see relations (<ref>)). From (<ref>) and (<ref>) we see that for models of type E we have ρ_E=ρ_0a_E^-1/𝐌. The effective quantities are then obtained as ρ_(eff)E=12𝐌^2𝐐/𝐙a_E^-2/𝐌(a_E^1/𝐌-1)=12𝐌^2𝐐^2t^2/[𝐙(𝐐t^2+1)^2], p_(eff)E=-4𝐌𝐐/𝐙a_E^-2/𝐌((3𝐌-1)a_E^1/𝐌-3𝐌+2)=-4𝐌𝐐[(3𝐌-1)𝐐t^2+1]/[𝐙(𝐐t^2+1)^2], 𝒲_E=-[(3𝐌-1)a_E^1/𝐌-3𝐌+2]/[3𝐌(a_E^1/𝐌-1)]=-1+1/(3𝐌)(1-1/(𝐐t^2)). Model E corresponds to an effective EoS p_(eff)E^±=(2/(3𝐌)-1)ρ_(eff)E±2𝐐/𝐙√(𝐌^2-𝐙/(3𝐐)ρ_(eff)E)-2𝐌𝐐/𝐙. Evaluating the effective pressure (<ref>) in the limiting times t→0 and t→±∞ indicates that the effective EoS follows the expression p_(eff)E^+ far from the bounce and obeys p_(eff)E^- near the bounce: in the limit t→0, pressure (<ref>) gives p_(eff)E=-4𝐌𝐐/𝐙, which is consistent with p_(eff)E^-, and in the limit t→±∞ we have p_(eff)E=0, which can be explained only by p_(eff)E^+. Figure <ref> shows the behavior of the different quantities. As seen in the left panel, the scale factor decreases until reaching a minimum non-zero value at t=t_ b, where the Hubble parameter vanishes. The Universe experiences four phases during its evolution from pre-bounce to post-bounce. Before the bounce occurs, the Universe is in an accelerated contracting regime until the first inflection point (t_ 1inf<t_ b) is reached, at which the acceleration vanishes.
At this point the Hubble parameter attains its largest magnitude in the negative direction and, correspondingly, the effective energy density reaches a peak value. The Universe then enters a decelerating contracting regime, so that its velocity decreases in magnitude. The collapse of the Universe halts at the bounce time, after which the Universe goes into an accelerating expanding phase where both ä_ E>0 and H_ E>0. Once the second inflection point (t_ 2inf>t_ b) is reached, the Universe enters a decelerating expanding regime, so that the speed of expansion decreases at later times. Let us now check the behavior of NEC_(eff) and SEC_(eff), which for model E take the following forms: NEC_(eff)E=ρ_(eff)E+p_(eff)E=4𝐌𝐐(𝐐t^2-1)/[𝐙(𝐐t^2+1)^2], SEC_(eff)E=ρ_(eff)E+3p_(eff)E=12𝐌𝐐[(1-2𝐌)𝐐t^2-1]/[𝐙(𝐐t^2+1)^2]. It is obvious that in expression (<ref>) the sign of 𝐐t^2-1 determines the validity of the NEC. We therefore find that near the bounce the NEC is violated within the range -1/√(𝐐)<t<1/√(𝐐), and outside this range it is preserved. The SEC is violated within a larger time interval, i.e., in -1/√(𝐐(1-2𝐌))<t<1/√(𝐐(1-2𝐌)) for 0<𝐌<1/2, and is always violated for 𝐌>1/2. In Figure <ref> we have plotted the expressions for the NEC and SEC for two different values of 𝐌, i.e., for 𝐌=0.3 and 𝐌=0.7. We see again that, in the background of f( R, T) gravity, a bouncing behavior corresponds to violating the NEC. We may then conclude that the scalar-field representation of type E models can be constructed with a phantom field within the time interval where the NEC is violated (notice from solution (<ref>) that for the same intervals in which the NEC is violated we have 𝒲_E<-1). In this case, the kinetic energy of the scalar field is negative. Thus, for a phantom scalar field in model E we obtain ϕ_(eff)E=-8/3√(𝐌/𝐙)[arcsin(√(𝐐)t)-√(2)arctan(√(2𝐐)t/√(1-𝐐t^2))], V_(eff)E=2𝐌𝐐[(6𝐌-1)𝐐t^2+1]/[𝐙(𝐐t^2+1)^2]. In Figure <ref> we have presented typical diagrams for the effective phantom field and its corresponding potential. To see the stability properties of model E, we rewrite the field equations as follows: 2Ḣ_ E=3(2𝐌-1)/(6𝐌-1)ρ(1-2ρ/ρ_0), ρ̇_ E=-(H/𝐌)ρ, 3H_ E^2=-9𝐌(2𝐌-1)/(6𝐌-1)ρ(1-ρ/ρ_0). The analysis of these equations is similar to that of the previous models. Firstly, note that only for 1/6<𝐌<1/2 does equation (<ref>) lead to the standard Friedmann equation when the correction term is absent. The system (<ref>) and (<ref>) has two fixed points with coordinates P_E^(±)=(±0,0), which correspond to the limit ρ→0. The bounce event occurs at ρ_ E=ρ_0, at which we have Ḣ_ E=-3(2𝐌-1)ρ_0/[2(6𝐌-1)]>0. The stability of the system can be analyzed in the same way as in the previous sections. A simple study shows that the stability properties are analogous to the other models: the evolution of the Universe begins from an unstable state, then passes through an unstable bounce phase, and finally reaches a stable state.
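The effective profiles of model E can be cross-checked symbolically; the sketch below (our own sympy verification, setting 𝐙=1 and κ^2=1 as in the expressions above) recovers ρ_(eff)E and p_(eff)E directly from the scale factor:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
Q, M = sp.symbols('Q M', positive=True)

# matter-bounce scale factor with Z = 1: a(t) = (Q t^2 + 1)^M
a = (Q*t**2 + 1)**M
H = sp.diff(a, t)/a

rho_eff = 3*H**2                          # Friedmann: 3 H^2 = rho_eff (kappa = 1)
p_eff = -2*sp.diff(H, t) - 3*H**2         # Raychaudhuri: 2 Hdot = -(rho_eff + p_eff)

print(sp.simplify(rho_eff - 12*M**2*Q**2*t**2/(Q*t**2 + 1)**2))           # -> 0
print(sp.simplify(p_eff + 4*M*Q*((3*M - 1)*Q*t**2 + 1)/(Q*t**2 + 1)**2))  # -> 0
```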
Different bouncing solutions and their main properties (the subscripts A,…,E and (eff) are dropped for brevity):

Model A: a(t)=1/𝔄[cosh(√(-𝒫/3)t)-sinh(√(-𝒫/3)t)]×{𝔅+(w^2-1)𝒫ρ_0/[2(3w-1)][sinh(√(-3𝒫)t)+cosh(√(-3𝒫)t)-1]}^2/3;  p=𝒫=const.;  h( T)=2/α(𝒫+w/(1-3w) T);  ρ=ρ_0a^-3;  w>1/3.

Model B: a(t)=ℛ[cosh(t/ℛ)-𝒮];  p=-ρ/3+2𝒮√(3ℛ^2(𝒮^2-1)ρ+9)/[3ℛ^2(𝒮^2-1)]+2/[ℛ^2(𝒮^2-1)];  h( T)=Γ T^2/2-7 T/(4α)+Λ;  ρ=ρ_0a^-1;  w=-1/5.

Model C: a(t)=a_0[(Q+1)cosh(√(3αΛ/2)(n+1)/(2n)t±cosh^-1(Q)/2)]^2n/(3(n+1));  p=1/n[ρ-αΛ/2(1+n)];  h( T)=2Γ T^(n+1)/2/(n+1)- T/α+Λ;  ρ=ρ_0a^-6/n;  w=1.

Model D: a(t)=[Γζ/ω-Δ/ωcosh(√(3αω/(4m))t)-Υsinh(√(3αω/(4m))t)]^1/3;  p=ρ-αΓ/√(m)√(ρ+αΛ/2)+αΛ;  h( T)=2Γ√( T)+(2m-1) T/α+Λ;  ρ=(ζ-αΓ a^3)^2/(8a^6m^2);  w=1.

Model E: a(t)=(𝐐t^2+𝐙)^𝐌;  p=(2/(3𝐌)-1)ρ±2𝐐/𝐙√(𝐌^2-𝐙ρ/(3𝐐))-2𝐌𝐐/𝐙;  h( T)=2Γ(w+1)/(2n+3w-1) T^(2n+3w-1)/(2(w+1))-2(n-1)/[α(2n+w-3)] T+Λ;  ρ=ρ_0a^-3(w+1)/n;  w arbitrary.

§ STABILITY OF THE BOUNCING MODELS

In this section we examine the soundness of the bouncing models introduced in the previous sections by considering the possibility of serious instabilities arising. We therefore examine the evolution of scalar-type perturbations in the discussed models within the metric formalism. Since f(R,T) gravity introduces an unusual coupling of matter to the curvature part of its action, the evolution of matter density perturbations can be problematic (especially as the effect of the bounce event on the evolution of matter perturbations is not obvious). In order to study such perturbations, we consider the matter density perturbations in f( R, T)= R+ακ^2 h( T) models for a flat FLRW metric in the longitudinal gauge, ds^2=-(1+2Φ)dt^2+a(t)^2(1-2Ψ)δ_ijdx^idx^j, where the metric scalar perturbations Φ and Ψ are, in general, functions of the four coordinates (t,x,y,z). In the current work we obtain the necessary equations for models including a barotropic perfect fluid with equation of state p=wρ and a general h( T) function. In this respect, the authors of <cit.> have already considered the matter perturbations in a narrow class of f( R, T) models[Paper <cit.> considered models in which the conservation of the EMT is respected. These models admit h( T)=√( T) <cit.>.] for a pressureless perfect fluid. The perturbations of the EMT in the longitudinal gauge are given by <cit.> δ T^t_ t=-δρ, δ T^i_ t=1/a(1+w)ρ v_,i, δ T^t_ i=-a(1+w)ρ v_,i, δ T^i_ j=wδ_ijδρ, where v is a covariant velocity perturbation <cit.>. Using the background equations (<ref>) and (<ref>) we obtain the following equations for the scalar perturbations in Fourier space: 2k^2/a^2Ψ+6H(HΦ+Ψ̇)=δΣ_ t^t+1/2ℱδ T, as the ADM energy constraint (the 𝒢_ t^t component of the field equation, if we rewrite (<ref>) as 𝒢_ μ^ν=Σ_ μ^ν); HΦ+Ψ̇=-1/3∫δΣ_ i^tdx^i, as the ADM momentum constraint (the 𝒢_ i^t component). Moreover, we have Φ-Ψ=0, as the ADM propagation equation (the 𝒢_ i^j-1/3δ_ i^j𝒢_ l^l component); 6[Ψ̈+2HΨ̇+HΦ̇+2(H^2+Ḣ)Φ]-2k^2/a^2Φ=δΣ_ i^i-δΣ_ t^t+ℱδ T, as the perturbed version of the Raychaudhuri equation (the 𝒢_ i^i-𝒢_ t^t component); δ R=-(δΣ+2ℱδ T), as the trace equation (𝒢_ μ^μ=Σ_ μ^μ); δ̇+η Hδ+ξ(k^2/a^2v-3Ψ̇)=0, as the time component of the perturbed EMT conservation; and finally v̇+3Hσ v-Φ+λδ=0, as the spatial component of the perturbed EMT conservation[For more details and the terminology employed, see <cit.>.]. Note that equations (<ref>)-(<ref>) are the most general equations describing scalar perturbations in minimal f( R, T) gravity under the condition F=1 when a barotropic perfect fluid is included. These equations are not independent, so it is possible to obtain one equation from another; for example, (<ref>) follows from multiplying (<ref>) by 2, then adding it to (<ref>) and using (<ref>).
In the above equations and relations we have used the following definitions for the source terms, which appear in the right-hand sides of field equation (<ref>), its trace, and in equation (<ref>) when it is written as ∇_βT_ α^β=Σ_α, respectively: Σ_ μ^ν=(κ^2+ℱ) T_ μ^ν-wρℱ g_ μ^ν, Σ=(κ^2+ℱ) T-4wρℱ, Σ_ α=1/(κ^2+ℱ)[w∇_α(ρℱ)-1/2ℱ∇_ α T-∇_βℱ T_ α^β]. The gauge-invariant density contrast in the longitudinal gauge is defined as δ=δ̅ρ/ρ+3Hv, and the perturbed Ricci scalar for the FLRW metric can be obtained as δ̅ R=-2[3Ψ̈+12HΨ̇+3HΦ̇+6ḢΦ+12H^2Φ-k^2/a^2(Φ-2Ψ)]. Also, using the definitions in (<ref>)-(<ref>), the coefficients in (<ref>) and (<ref>) can be obtained as follows: η=-3(1+w)(3w-1)(N_η/D_η)Hρ, σ=3N_σ/D_σ, ξ=(1+w)[κ^2+1/2(3-w)ℱ+(1+w)(3w-1)ρℱ'/(κ^2+ℱ)]^-1, λ=1/(w+1)[1/2(1-w)ℱ/(κ^2+ℱ)-w], where N_η=-(1+w)ℱℱ'+(1+w)(3w-1)ρℱ'^2-κ^2(w+3)/2ℱ'-(1+w)(3w-1)(κ^2+ℱ)ρℱ”, D_η=[κ^2+1/2(3-w)ℱ+(1+w)(3w-1)ρℱ']^2, and N_σ=1/2(1-w)ℱ-w(κ^2+ℱ)+(3w^2+2w-2)ρℱ', D_σ=κ^2+1/2(3-w)ℱ+(1+w)(3w-1)ρℱ', and a prime denotes the derivative with respect to the trace. Note that wherever needed we have used ρ̇, which can be obtained from the EMT conservation equation (<ref>). Now, using equations (<ref>) and (<ref>), we arrive at the evolution equation for the matter perturbation: δ̈+𝒟_1δ̇+𝒟_2δ+ξ[-3Φ̈-3(3σ+2)HΦ̇+k^2/a^2Φ]=0. From equations (<ref>) and (<ref>) we obtain a dynamical equation for the perturbed potential Φ: 2[3Φ̈+15HΦ̇+(6Ḣ+12H^2+k^2/a^2)Φ]+θδ=0, where we have used solution (<ref>) and defined the following coefficients: 𝒟_1=[(2+3σ)H+η-ξ̇/ξ], 𝒟_2=(η̇-(ξ̇/ξ)η+3ση H-λξ k^2/a^2+2η H), θ=[κ^2+(3-5w)ℱ-(1+w)(3w-1)ℱ'ρ]ρ. Hence, we have the two differential equations (<ref>) and (<ref>), along with relation (<ref>), to be solved for δ and Φ. Obviously, the coefficients of these equations are complicated functions of the model quantities H, a, ρ, T, ℱ, ℱ', ℱ”, and thus it may not be possible to obtain exact solutions. However, one can resort to numerical methods or, in the case of stiff equations, obtain approximate solutions. We have plotted the evolution of δ and Φ in Figure <ref> for models A, B and C. Numerical simulations show that the behavior of the perturbations is typically similar in models A, B and C for the same initial values. We have sketched two sets of plots in Figure <ref>. These models show a zero value for the fluctuations when the initial values δ(0)=0, δ'(0)=1, Φ(0)=0 and Φ'(0)=1 are assumed. In this case, the fluctuations tend to zero (left panel) and to constant values (right panel) in the regime of large times; see the black curves in Figure <ref>. As another case, the fluctuations increase from zero to a maximum finite value in the period of the bounce if the initial data are set as δ(0)=1, δ'(0)=0, Φ(0)=1 and Φ'(0)=0; see the gray curves. As can be seen from the evolution of scalar perturbations across the bounce, apart from small temporary fluctuations, no instability occurs during the bounce, nor does one occur in the limit of large times for these models. Unfortunately, the system of differential equations (<ref>) and (<ref>) becomes stiff for models D and E, so it is not possible to plot reasonable diagrams for δ and Φ. In this case, we proceed to obtain approximate solutions.
For models E and D (in the case in which Υ=0), equations (<ref>) and (<ref>) at times near the bounce take the following forms: δ̈+1/2(2𝒟_2^0+ξ^0θ^0)δ+2(3H'^0+k^2/(a^0)^2)ξ^0Φ=0, 6Φ̈+θ^0δ+(2k^2/(a^0)^2+12H'^0)Φ=0, for which the solutions up to second order can be found as δ=δ_i+δ'_it+[-1/4(2𝒟_2^0+ξ^0θ^0)δ_i-(3H'^0+k^2/(a^0)^2)ξ^0Φ_i]t^2, Φ=Φ_i+Φ'_it+[-1/12θ^0δ_i-H'^0Φ_i-k^2/(6(a^0)^2)Φ_i]t^2, where the superscript “0" denotes the value of a quantity in the limit t→0 and the subscript i denotes the initial values required for the integrations. As we see, the solutions are stable in the period of the bounce. Note that all coefficients except those shown in equations (<ref>) and (<ref>) vanish in the limit t→0. In the limit of large times, we have numerically plotted the evolution of the matter contrast δ and the potential Φ in Figure <ref>. As can be seen from (<ref>) and (<ref>), no instability occurs in the period of the bounce in models D and E; however, far away from the bounce point, both δ and Φ increase dramatically in model D, see Figure <ref>. For model E, in the limit of large times, t→∞, we get δ̈-3ξ^∞Φ̈=0, Φ̈=0, with the solutions δ=δ_i+δ'_it, Φ=Φ_i+Φ'_it. Therefore, depending on the initial values, the perturbations in model E can grow before and after the bounce. We thus conclude that, though the scalar-type perturbations in models D and E behave regularly at the bounce point, these solutions are asymptotically unstable. We close this section by noting that the scalar perturbations in the non-singular minimal class of f(R,T) gravity are not confronted by instabilities at the bounce; however, they may grow with time or tend to constant values far away from the bounce.

§ SINGULAR SOLUTIONS

In this section we seek possible solutions that exhibit singular behavior, especially the big-bang singularity. Some singular solutions have already been studied in the literature, for which we give only a short discussion. Presuming EMT conservation (i.e., supposing T= T_0 a^-3(1+w)), equation (<ref>) can be solved to give the following solution: h( T)=2Q_1(w+1)/(3w+1) T^3/2-1/(w+1)+Q_2, where Q_1 and Q_2 are constants. In <cit.> the authors have analytically shown that for the case of a pressureless perfect fluid, i.e., for w=0, one finds a_SIN^I(t)=(3/256)^1/3[√(Ω_0^(p))H_0 t(8√(3)-3β t)]^2/3, which shows a singular behavior. This singular solution is the only one which respects EMT conservation. Relaxing such a constraining condition, one can obtain other kinds of solutions. Note that function (<ref>) is the only form that respects EMT conservation; that is, to obtain another solution one should assume some suitable form for h(T). For example, in <cit.>, for f( R, T)= R+ακ^2 T the authors obtained ρ(a)=ρ_0 a^6(α-1)/(2-3α), a_SIN^II(t)=(3/2)^(2-3α)/(6(1-α))[3(1-α)√(Ω_0)H_0t/√(2-3α)]^(2-3α)/(3(1-α)), for pressureless matter. In this case a singular solution can be found for a valid range of values of the model constant α. Also, the following solution has been obtained for models of the form f( R, T)= R+ακ^2 T^-1/2: ρ(a)=2^-2/3(α+2ρ_0^3/2/a^9/2-α)^2/3, a_SIN^III(t)=(27/256)^1/9[α+2(3H_0^2Ω_0)^3/2]^2/9t^2/3. As a new class of (big-bang) singular models, we examine models which admit the following solution: a_SIN^IV(t)=a_0(t/t_0)^ℓ, where ℓ is a real positive number and t_0 is the time at which the scale factor attains its present value a_0. Any solution has to satisfy two of the three equations (<ref>)-(<ref>) for a presumed scale factor function.
In this case, the two unknown functions T(a) and h(T) should be obtained by solving the resulting equations. To proceed further we make use of the following ansatz: h(T)=C_1T^μ+C_2T, where the constants C_i and μ are fixed according to the following considerations. Substituting (<ref>) and (<ref>) in (<ref>) and solving for T(a) we get T(a)={α C_1μ(w-3) T_0/[α C_1μ(w-3) T_0^μ+(w-1) T_0](a/a_0)^-6(μ-1)(w+1)/(2μ(w+1)-3w+1)+ T_0(1-w)}^1/(1-μ), where an integration constant has been set so that T(a=a_0)= T_0. To ensure that the three functions (<ref>), (<ref>) and (<ref>) provide an analytic solution, they must satisfy at least one of equations (<ref>) and (<ref>). Substituting these functions in (<ref>) shows that we have a solution only for the case of a stiff fluid, i.e., for w=1. Therefore, a singular solution can be obtained provided that C_1=-2ℓ(3ℓ-2) T_0^1/(3ℓ-2)/(ακ^2t_0^2), C_2=-1/α, μ=1/(2-3ℓ), 1/3<ℓ<2/3, w=1. Thus, for the conditions (<ref>) we obtain T= T_0(a/a_0)^6-4/ℓ, f(R,T)=R-κ^2T-2ℓ(3ℓ-2)t_0^-2( T/ T_0)^1/(2-3ℓ). Therefore, besides the non-singular solutions obtained in the present paper, one can still find a set of singular solutions. In this brief section, in addition to addressing some previous results, we obtained a new singular model. A coherent study could be performed to determine the possible conditions under which a big-bang singularity occurs; however, this issue is beyond the scope of the present paper, and comprehensive studies on this subject will be reported elsewhere. It is worthwhile to mention that some studies have already considered other forms of singular solutions, e.g., <cit.>. Besides the above results, a Bianchi type I cosmological model with magnetized strange quark matter in the framework of f( R,T) gravity has been investigated, and it was found that the model begins with a big-bang and ends with a big rip <cit.>. Using the Lie point symmetry analysis method, the authors of <cit.> have shown that for a Bianchi type I spacetime both singular (big-bang) and non-singular solutions could exist, subject to the type of specified symmetry. Recently, the authors of <cit.> have considered some cosmological features of f(𝒯) gravity (where 𝒯 denotes the torsion scalar) using the dynamical system approach, both in general and for some specific forms of the f(𝒯) function. The core of their studies is taking advantage of the fact that the torsion scalar can be used interchangeably with the Hubble parameter (i.e., 𝒯=-6H^2). Thus, the field equations reduce to a single equation (in the case of pressureless matter) of the form Ḣ=ℱ(H), since the matter density can also be rewritten as a function of the Hubble parameter. Briefly, they have shown that in f(𝒯) gravity a single equation (which can be interpreted as a simple one-dimensional dynamical system) can govern the dynamics of the field equations. Benefiting from this useful result, they investigated phase-space portraits of various cosmological evolutions, such as singular and non-singular solutions. Likewise, one may be motivated to utilize such an approach to investigate the cosmological solutions of f( R,T) gravity (especially, in the case of the present work, the function given in (<ref>)) through phase-portrait diagrams. However, looking at equations (<ref>) and (<ref>), one finds that it is impossible to obtain an equation like Ḣ=ℱ(H) that reflects the full information of the field equations.
In this case, for an assumed function h(T), we have a two-dimensional dynamical system without any further reduction. Thus, the procedure proposed in <cit.> would generally fail in f( R,T) gravity.

§ CONCLUDING REMARKS

In the present work we studied the classical bouncing behavior of the Universe in the framework of f( R, T)= R+h( T) gravity theories. We assumed a single perfect fluid in a spatially flat, homogeneous and isotropic FLRW background. Having obtained the resulting field equations, we employed the concept of an effective fluid (first introduced in <cit.>) by defining an effective energy density and pressure and reformulating the field equations in terms of these fluid components. In this picture, one can recast the field equations of f( R, T) gravity for a real perfect fluid into GR field equations for an effective fluid. It is also known that in a modified gravity model the energy conditions are usually obtained by using the effective EMT, not the one for real fluids. In f( R, T) gravity, the definitions of effective energy density and pressure have already been used to obtain the energy conditions <cit.>. The effective fluid has an EoS of the form p_(eff)=𝒴(ρ_(eff)), which corresponds to an h( T) function. In this method one first specifies an effective EoS or a condition on the effective components and then obtains the corresponding h( T) function and the other cosmological quantities. It is also possible to make a link between f( R, T) gravity in the effective picture and models which use some exotic or dark component with an unusual EoS. These models, which have been widely discussed in the literature (to deal with various cosmological issues), are also called theories with a generalized EoS. The mathematical representation of the effective components provides a setting within which unusual interactions of a real perfect fluid with the gravitational field can be translated as the presence of an exotic fluid which admits an EoS of the form p_(eff)=𝒴(ρ_(eff)). In this paper we have shown that it is possible to recover, in the framework of f( R, T) gravity, generalized EoS models which have been previously studied in the literature (see e.g., <cit.>). Therefore, the problem of an exotic fluid in the context of generalized EoS models, which mostly lack a definite Lagrangian, may be discussed in a Lagrangian-based theory of gravity like f( R, T) gravity. In the current research, we discussed five different bouncing models in f( R, T) gravity. We labeled them as models A, B, C, D and E and briefly summarized their main properties in Table <ref>. Each model can be specified either by an h( T) function or by an effective EoS. Models A-D mimic an asymptotic de-Sitter expansion in the far past and future of the bounce. Model A corresponds to a constant effective pressure, p_(eff)=𝒫; for model B we have p_(eff)B=-ρ_(eff)B/3+√(b_Bρ_(eff)B+d_B)+e_B; model C is specified by p_(eff)C=j_Cρ_(eff)C+e_C; model D corresponds to p_(eff)D=ρ_(eff)D+√(b_Dρ_(eff)D+d_D)+e_D; and finally model E obeys the EoS p_(eff)E=a_Eρ_(eff)E+√(b_Eρ_(eff)E+d_E)+e_E, where the constants b, d, e and j are written in terms of the model parameters. In all models the matter density grows to a maximum value at the bounce, which corresponds to a minimum of the scale factor. The effective density varies from zero at the bounce to a positive value in the far past and future of the bounce.
The effective pressure varies between negative values: in model A it is constant, in model B it increases at the bounce, in models C and E it decreases, and model D admits both behaviors. The effective EoS has the property -∞<𝒲<-1 as the bounce point is approached. The Hubble parameter satisfies H(t)=0 and dH/dt>0 at the bounce event, and all its time derivatives behave regularly in all models. Therefore, these bouncing solutions do not exhibit the future singularities which are classified in the literature on cosmological solutions. We can consider the inherent exoticism hidden behind f( R, T) gravity in another way. As we already mentioned, this issue can be described as an unusual interaction between the gravitational field and normal matter, or by introducing an effective fluid. From the point of view of the energy conditions, in all discussed models the SEC and NEC are violated near the bounce (note that for a normal fluid the NEC is not violated <cit.>) and the effective density is minimized to zero. Such a result has been previously predicted in GR <cit.>. As discussed in <cit.>, the exoticness can be understood as a minimization in the effective pressure. In other words, a minimum in the effective energy density corresponds to a minimum in the scale factor. Such a behavior is permitted provided that 𝒲<-1. Note that, for normal matter, a minimum (maximum) compression leads to a minimum (maximum) energy density. Thus, in f( R, T) gravity an abnormal or effective fluid, which leads to an uncommon balance between the density and pressure, can be responsible for the bouncing behavior. An interesting feature of the bouncing solutions in f( R, T) gravity is that one can construct solutions in which the SEC is respected by the real perfect fluid. Such solutions cannot be found in GR <cit.>. Also note that a real perfect fluid with w>-1 never violates the NEC. Therefore, we have solutions without future singularities in which all energy conditions can be respected by a real perfect fluid. In light of this discussion, one may use the definition of an (effective) phantom scalar field if one asks for the matter source to be reinterpreted as that of a scalar matter field. We obtained the equivalent scalar field ϕ_(eff)(t) and its corresponding potential V_(eff)(t) in each case. Moreover, we have studied the dynamical-system representation of these models. We found that the evolution of the Universe can be displayed by trajectories which initially start from an unstable state, pass through an unstable fixed point (the bounce event) and finally are absorbed by a stable point. The initial and final states are a de-Sitter era in models A, B, C and D, and a decelerated expanding Universe in model E. Another important issue discussed in this work is the stability of the bouncing solutions against scalar-type cosmological matter perturbations in the bouncing universe. Our numerical analysis of density perturbations for models A, B and C revealed that, apart from a slight jump (depending on the initial conditions) at the bounce point, the amplitude of the matter density perturbation (δ) and the perturbed potential (Φ) behave regularly throughout the bounce phase. Therefore, since the time interval during which the fluctuations in the density contrast and the perturbed potential occur is short, the instabilities do not have enough time to grow to a significant magnitude. However, this is not the case for the two remaining models.
As a final remark, we should emphasize that our models were obtained by imposing different conditions on the effective density and pressure, which led to different h( T) functions. This means that models A, B, C, D and E are not the only possible models for the bouncing behavior. It is obvious that one can still choose other h( T) functions, or consider other assumptions on the effective density and pressure, to obtain new bouncing solutions (with even new features). Our aim was to show the existence of a variety of bouncing solutions in f( R, T) gravity and to study their properties. In particular, our study was confined to Lagrangians of the type f( R, T)= R+h( T), though other forms of Lagrangians can be investigated. Another point is that our study was performed in the effective picture. In case such an approach is not taken seriously, one can think of it as merely an alternative mathematical method. One can still investigate a non-singular cosmological scenario without employing the equations written in terms of the effective quantities; in this case it is enough to assume a Lagrangian and solve the field equations to inspect for a bouncing solution. However, cosmological solutions for the f( R,T) gravity models presented here are not singularity free and, as we observed, under certain conditions a class of singular solutions could be obtained.

§ REFERENCES

Ostriker95: Ostriker, J. P. & Steinhardt, P. J., Cosmic concordance, astro-ph/9505066.
Coles02: Coles, P. & Lucchin, F., Cosmology: The origin and evolution of cosmic structure, John Wiley & Sons, England (2002).
relicmonopole: Liddle, R. A. & Lyth, D. H., The primordial density perturbation: Cosmology, inflation and the origin of structure, Cambridge University Press (2009); Kolb, E. W. & Turner, M. S., The Early Universe, Frontiers in Physics, Avalon Publishing (1994).
relicgraviton: Khlopov, M. Y. & Linde, A. D., Is It Easy to Save the Gravitino?, Phys. Lett. B 138 (1984) 265; Ellis, J., Kim, J. E., & Nanopoulos, D. V., Cosmological Gravitino Regeneration and Decay, Phys. Lett. B 145 (1984) 181; Kawasaki, M., & Moroi, T., Gravitino Production in the Inflationary Universe and the Effects on Big Bang Nucleosynthesis, Prog. Theor. Phys. 93 (1995) 879; Khlopov, M. Yu., Levitan, Yu. L., Sedelnikov, E. V., & Sobol, I. M., Nonequilibrium Cosmological Nucleosynthesis of Light Elements: Calculations by the Monte Carlo Method, Phys. At. Nuclei 57 (1994) 1393; Kawasaki, M., Kohri, K., & Moroi, T., Hadronic decay of late-decaying particles and Big-Bang nucleosynthesis, Phys. Lett. B 625 (2005) 7.
relicmoludi: Coughlan, G. D., Fischler, W., Kolb, E. W., Raby, S., & Ross, G. G., Cosmological Problems for the Polonyi Potential, Phys. Lett. B 131 (1983) 59; Ellis, J. R., Nanopoulos, D. W., & Quiros, M., On the Axion, Dilaton, Polonyi, Gravitino and Shadow Matter Problems in Supergravity and Superstring Models, Phys. Lett. B 174 (1986) 176; Banks, T., Kaplan, D. B., & Nelson, A. E., Cosmological Implications of Dynamical Supersymmetry Breaking, Phys. Rev. D 49 (1994) 779; de Carlos, B., Casas, J. A., Quevedo, F., & Roulet, E., Model-Independent Properties and Cosmological Implications of the Dilaton and Moduli Sectors of 4-D Strings, Phys. Lett. B 318 (1993) 447.
relicbaryon: Dolgov, A. D., Sazhin, M. V. & Zeldovich, I. A. B., Basics of Modern Cosmology, Atlantica Seguier Frontieres (1990); Xing, Z. & Zhou, S., Neutrinos in Particle Physics, Astronomy and Cosmology, Springer Berlin Heidelberg (2011).
flathorizonpro: Padmanabhan, T., Cosmology and Astrophysics Through Problems, Cambridge University Press (1996); Dodelson, S., Modern Cosmology, Academic Press (2003); Ellis, G. F. R., Maartens, R., & MacCallum, M. A. H., Relativistic Cosmology, Cambridge University Press (2012); Liddle, A., An Introduction to Modern Cosmology, John Wiley & Sons (2015).
DMPRO: Sciama, D. W., Modern Cosmology and the Dark Matter Problem, Cambridge University Press (1993); Bambi, C., & Dolgov, A. D., Introduction to Particle Cosmology: The Standard Model of Cosmology and its Open Problems, Springer (2015); Freese, K., Status of Dark Matter in the Universe, Int. J. Mod. Phys. D 26 (2017) 1730012.
DEPR: Weinberg, S., The cosmological constant problem, Rev. Mod. Phys. 61 (1989) 1; Amendola, L., & Tsujikawa, S., Dark Energy: Theory and Observations, Cambridge University Press (2010); Gleyzes, J., Dark Energy and the Formation of the Large Scale Structure of the Universe, Springer (2016).
inflationmech: Guth, A. H., Inflationary universe: A possible solution to the horizon and flatness problems, Phys. Rev. D 23 (1981) 347; Sato, K., First-order phase transition of a vacuum and the expansion of the Universe, Mon. Not. Roy. Astron. Soc. 195 (1981) 467; Linde, A. D., A new inflationary universe scenario: A possible solution of the horizon, flatness, homogeneity, isotropy and primordial monopole problems, Phys. Lett. B 108 (1982) 389.
hawpen: Hawking, S. W., & Penrose, R., The singularities of gravitational collapse and cosmology, Proc. Royal Soc. London A 314 (1970) 529; Penrose, R., Gravitational collapse and space-time singularities, Phys. Rev. Lett. 14 (1965) 57; Hawking, S. W., Occurrence of singularities in open universes, Phys. Rev. Lett. 15 (1965) 689; Hawking, S. W., The occurrence of singularities in cosmology, Proc. R. Soc. Lond. A 294 (1966) 511; Hawking, S. W., The occurrence of singularities in cosmology. II, Proc. R. Soc. Lond. A 295 (1966) 490; Geroch, R. P., Singularities in closed universes, Phys. Rev. Lett. 17 (1966) 445; Hawking, S. W., The occurrence of singularities in cosmology. III. Causality and singularities, Proc. R. Soc. Lond. A 300 (1967) 187; Hawking, S. W., & Ellis, G. F. R., The Large Scale Structure of Space-Time, Cambridge University Press (1975); Senovilla, J. M. M., Singularity Theorems and Their Consequences, Gen. Relativ. Grav. 29 (1997) 701.
Tipsing: Tipler, F. J., General relativity and conjugate ordinary differential equations, J. Diff. Equ. 30 (1978) 65; Tipler, F. J., Energy conditions and spacetime singularities, Phys. Rev. D 17 (1978) 2521.
Bordeetal1990: Borde, A., Geodesic focusing, energy conditions and singularities, Class. Quantum Grav. 4 (1987) 343; Vilenkin, A., Did the universe have a beginning?, Phys. Rev. D 46 (1992) 2355; Borde, A., & Vilenkin, A., Eternal inflation and the initial singularity, Phys. Rev. Lett. 72 (1994) 3305; Borde, A., Open and closed universes, initial singularities and inflation, Phys. Rev. D 50 (1994) 3692; Borde, A., & Vilenkin, A., Singularities in inflationary cosmology: a review, Int. J. Mod. Phys. D 5 (1996) 813; Borde, A., Guth, A. H., & Vilenkin, A., Inflationary spacetimes are incomplete in past directions, Phys. Rev. Lett. 90 (2003) 151301.
GianlucaCalcagni2017: Calcagni, G., Classical and Quantum Cosmology, Graduate Texts in Physics, Springer (2017).
Novello08: Novello, M. & Perez Bergliaffa, S. E., Bouncing cosmologies, Phys. Rep. 463 (2008) 127.
bouncecos: Mukhanov, V. F., & Brandenberger, R. H., A nonsingular universe, Phys. Rev. Lett. 68 (1992) 1969; Brandenberger, R. H., Mukhanov, V. F. & Sornborger, A., Cosmological theory without singularities, Phys. Rev. D 48 (1993) 1629.
WELLSING: Choquet-Bruhat, Y., General Relativity and the Einstein Equations, OUP Oxford (2008); Bojowald, M., Essay: Initial Conditions for a Universe, Gen. Relativ. Grav. 35 (2003) 1877.
Contreras17: Contreras, F., Cruz, N. & Palma, G., Bouncing solutions from generalized EoS, arXiv:1701.03438 [gr-qc].
REVBOUNCE1123: Brandenberger, R. H., Introduction to Early Universe Cosmology, arXiv:1103.2271 [astro-ph.CO]; Battefeld, D., & Peter, P., A critical review of classical bouncing cosmologies, Phys. Rep. 571 (2015) 1; Cheung, Y.-K. E., Song, X., Li, S., Li, Y., & Zhu, Y., The CST Bounce Universe model – a parametric study, arXiv:1601.03807 [gr-qc].
Xuepospec: Lidsey, J. E., Wands, D., & Copeland, E. J., Superstring Cosmology, Phys. Rept. 337 (2000) 343; Xue, B. K., Garfinkle, D., Pretorius, F., & Steinhardt, P. J., Nonperturbative analysis of the evolution of cosmological perturbations through a nonsingular bounce, Phys. Rev. D 88 (2013) 083509.
lqcbounce: Date, G., & Hossain, G. M., Genericness of a Big Bounce in Isotropic Loop Quantum Cosmology, Phys. Rev. Lett. 94 (2005) 011302; Mielczarek, J., Stachowiak, T., & Szydlowski, M., Exact solutions for a big bounce in loop quantum cosmology, Phys. Rev. D 77 (2008) 123506; Singh, P., Are loop quantum cosmos never singular?, Class. Quant. Grav. 26 (2009) 125005; Ashtekar, A., Singularity Resolution in Loop Quantum Cosmology: A Brief Overview, J. Phys. Conf. Ser. 189 (2009) 012003; Cai, Y.-F., & Wilson-Ewing, E., Non-singular bounce scenarios in loop quantum cosmology and the effective field description, J. Cosmol. Astropart. Phys. 03 (2014) 026.
lqbounce1: Bojowald, M., Loop quantum cosmology, Living Rev. Rel. 8 (2005) 11.
lqbounce2: Banerjee, K., Calcagni, G., & Martin-Benito, M., Introduction to loop quantum cosmology, SIGMA 8 (2012) 016.
Ashtekar11: Ashtekar, A. & Singh, P., Loop quantum cosmology: a status report, Class. Quantum Grav. 28 (2011) 213001.
lqbounce3: Ashtekar, A., Pawlowski, T., & Singh, P., Quantum Nature of the Big Bang: Improved dynamics, Phys. Rev. D 74 (2006) 084003.
deroglie: Peter, P., Pinho, E. J. C., & Pinto-Neto, N., A non-inflationary model with scale invariant cosmological perturbations, Phys. Rev. D 75 (2007) 023516; Pinto-Neto, N., & Fabris, J. C., Quantum cosmology from the de Broglie-Bohm perspective, Class. Quant. Grav. 30 (2013) 143001.
freqlqc: Singh, P., & Toporensky, A., Big Crunch Avoidance in k = 1 Semi-Classical Loop Quantum Cosmology, Phys. Rev. D 69 (2004) 104008; Lidsey, J. E., Mulryne, D. J., Nunes, N. J., & Tavakol, R., Oscillatory Universes in Loop Quantum Cosmology and Initial Conditions for Inflation, Phys. Rev. D 70 (2004) 063521.
vanrhopl: Singh, P., Vandersloot, K., & Vereshchagin, G. V., Non-Singular Bouncing Universes in Loop Quantum Cosmology, Phys. Rev. D 74 (2006) 043510; Bojowald, M., Quantum gravity in the very early universe, Nucl. Phys. A 862-863 (2011) 98; Ashtekar, A., Loop Quantum Gravity and the Planck Regime of Cosmology, Fundam. Theor. Phys. 177 (2014) 323.
nonlocalgravity: Calcagni, G., Modesto, L., & Nicolini, P., Super-accelerating bouncing cosmology in asymptotically-free non-local gravity, Eur. Phys. J. C 74 (2014) 2999.
nonlocgravit: Biswas, T., Koivisto, T., & Mazumdar, A., Towards a Resolution of the Cosmological Singularity in Non-local Higher Derivative Theories of Gravity, JCAP 1011 (2010) 008; Donoghue, J. F., & El-Menoufi, B. K., Non-local quantum effects in cosmology 1: Quantum memory, non-local FLRW equations and singularity avoidance, Phys. Rev. D 89 (2014) 104062; Li, Y.-D., Modesto, L., & Rachwal, L., Exact solutions and spacetime singularities in nonlocal gravity, JHEP (2015) 2015: 1; Amendola, L., Burzilla, N., & Nersisyan, H., Quantum Gravity inspired nonlocal gravity model, Phys. Rev. D 96 (2017) 084031; Modesto, L., & Rachwal, L., Nonlocal quantum gravity: A review, Int. J. Mod. Phys. D 26 (2017) 1730020; Modesto, L., Super-renormalizable quantum gravity, Phys. Rev. D 86 (2012) 044005.
mattbounce: Cai, Y.-F., Easson, D. A., & Brandenberger, R., Towards a non-singular Bouncing Cosmology, J. Cosmol. Astropart. Phys. 1208 (2012) 020.
Brandenberger12: Brandenberger, R., The matter bounce alternative to inflationary cosmology, arXiv:1206.4196 [astro-ph.CO]; Brandenberger, R., Alternatives to the inflationary paradigm of structure formation, Int. J. Mod. Phys. Conf. Ser. 01 (2011) 67; Brandenberger, R., Cosmology of the Very Early Universe, AIP Conf. Proc. 1268 (2010) 3.
WE13: Wilson-Ewing, E., The matter bounce scenario in loop quantum cosmology, arXiv:1211.6269 [gr-qc].
Cai08: Cai, Y. F., Qiu, T., Piao, Y. S., Li, M., & Zhang, X., Bouncing Universe with Quintom Matter, JHEP 0710 (2007) 071; Cai, Y. F., Qiu, T., Brandenberger, R., Piao, Y. S. & Zhang, X., On Perturbations of Quintom Bounce, J. Cosmol. Astropart. Phys. 0803 (2008) 013; Cai, Y. F. & Zhang, X., Evolution of Metric Perturbations in Quintom Bounce model, J. Cosmol. Astropart. Phys. 0906 (2009) 003.
LEEWICK: Cai, Y.-F., Qiu, T., Brandenberger, R., & Zhang, X., A Nonsingular Cosmology with a Scale Invariant Spectrum of Cosmological Perturbations from Lee-Wick Theory, Phys. Rev. D 80 (2009) 023511.
Lin11: Lin, C., Brandenberger, R. & Levasseur, L. P., A Matter Bounce By Means of Ghost Condensation, J. Cosmol. Astropart. Phys. 1104 (2011) 019.
Qiu11Gal: Qiu, T., Evslin, J., Cai, Y. F., Li, M. & Zhang, X., Bouncing Galileon Cosmologies, J. Cosmol. Astropart. Phys. 1110 (2011) 036; Easson, D. A., Sawicki, I., & Vikman, A., G-Bounce, J. Cosmol. Astropart. Phys. 1111 (2011) 021.
phantomfield: Brown, M. G., Freese, K. & Kinney, W. H., The Phantom Bounce: A New Oscillating Cosmology, J. Cosmol. Astropart. Phys. 0803 (2008) 002; Dzhunushaliev, V., Folomeev, V., Myrzakulov, K., & Myrzakulov, R., Phantom fields: bounce solutions in the early Universe and S-branes, Int. J. Mod. Phys. D 17 (2008) 2351; Nozari, K., & Sadatian, S. D., Bouncing universe with a non-minimally coupled scalar field on a moving domain wall, Phys. Lett. B 676 (2009) 1; Saridakis, E. N. & Sushkov, S. V., Quintessence and phantom cosmology with non-minimal derivative coupling, Phys. Rev. D 81 (2010) 083510; Banijamali, A., & Fazlpour, B., Phantom behavior bounce with tachyon and non-minimal derivative coupling, J. Cosmol. Astropart. Phys. 01 (2012) 039.
Barragan09: Barragán, C., Olmo, G. J. & Sanchis-Alepuz, H., Bouncing cosmologies in Palatini F(R) gravity, Phys. Rev. D 80 (2009) 024016.
Oikonomou14: Oikonomou, V. K., Loop quantum cosmology matter bounce reconstruction from F(R) gravity using an auxiliary field, Gen. Rel. Grav. 47 (2015) 126.
Odintsov14: Odintsov, S. D. & Oikonomou, V. K., Matter bounce loop quantum cosmology from F(R) gravity, Phys. Rev. D 90 (2014) 124083.
Odintsov151: Odintsov, S. D. & Oikonomou, V. K., Bouncing cosmology with future singularity from modified gravity, Phys. Rev. D 92 (2015) 024016.
Oikonomou15: Oikonomou, V. K., Superbounce and loop quantum cosmology Ekpyrosis from modified gravity, Astrophys. Space Sci. 359 (2015) 30.
Cai11: Cai, Y.-F., Chen, S.-H., Dent, J. B., Dutta, S. & Saridakis, E. N., Matter bounce cosmology with the f(T) gravity, Class. Quantum Grav. 28 (2011) 215011; Bamba, K., Nashed, G. G. L., El Hanafy, W., & Ibraheem, Sh. K., Bounce inflation in f(T) Cosmology: A unified inflaton-quintessence field, Phys. Rev. D 94 (2016) 083513.
Kehagias99: Kehagias, A. & Kiritsis, E., Mirage cosmology, J. High Energy Phys. 11 (1999) 022.
ecbounce: Gasperini, M., Repulsive Gravity in the Very Early Universe, Gen. Relativ. Grav. 30 (1998) 12; Stachowiak, T. & Szydlowski, M., Exact solutions in bouncing cosmology, Phys. Lett. B 646 (2007) 209; Brechet, S. D., Hobson, M. P. & Lasenby, A. N., Classical big-bounce cosmology: dynamical analysis of a homogeneous and irrotational Weyssenhoff fluid, Class. Quantum Grav. 25 (2008) 245016; Poplawski, N. J., Cosmology with torsion: An alternative to cosmic inflation, Phys. Lett. B 694 (2010) 181; Poplawski, N. J., Big bounce from spin and torsion, Gen. Relativ. Gravit. 44 (2012) 1007; Poplawski, N. J., Nonsingular, big-bounce cosmology from spinor-torsion coupling, Phys. Rev. D 85 (2012) 107502; Magueijo, J., Zlosnik, T. G., & Kibble, T. W. B., Cosmology with a spin, Phys. Rev. D 87 (2013) 063504; Hadi, H., Heydarzade, Y., Hashemi, M., & Darabi, F., Emergent Cosmos in Einstein-Cartan Theory, Eur. Phys. J. C 78 (2018) 38.
horlifb: Brandenberger, R., Matter Bounce in Horava-Lifshitz Cosmology, Phys. Rev. D 80 (2009) 043516.
nonlocb: Biswas, T., Koshelev, A. S., Mazumdar, A., & Vernov, S. Y., Stable bounce and inflation in non-local higher derivative cosmology, J. Cosmol. Astropart. Phys. 08 (2012) 024; Dragovich, B., On Nonlocal Modified Gravity and Cosmology, in: Dobrev, V. (ed.), Lie Theory and Its Applications in Physics, Springer Proceedings in Mathematics & Statistics, vol. 111, Springer, Tokyo (2014).
Odintsov152: Odintsov, S. D., Oikonomou, V. K. & Saridakis, E. N., Superbounce and loop quantum ekpyrotic cosmologies from modified gravity: F(R), F(G) and F(T), Ann. Phys. 363 (2015) 141.
Khoury01: Khoury, J., Ovrut, B. A., Steinhardt, P. J. & Turok, N., The ekpyrotic universe: Colliding branes and the origin of the hot Big-Bang, Phys. Rev. D 64 (2001) 123522; Lehners, J.-L., Ekpyrotic and Cyclic Cosmology, Phys. Rept. 465 (2008) 223.
Gasperini93: Gasperini, M. & Veneziano, G., Pre-big bang in string cosmology, Astropart. Phys. 1 (1993) 317; Biswas, T., Mazumdar, A. & Siegel, W., Bouncing Universes in String-inspired Gravity, J. Cosmol. Astropart. Phys. 0603 (2006) 009; Kounnas, C., Partouche, H., & Toumbas, N., Thermal duality and non-singular cosmology in d-dimensional superstrings, Nucl. Phys. B 855 (2012) 280; Florakis, I., Kounnas, C., Partouche, H., & Toumbas, N., Non-singular string cosmology in a 2d Hybrid model, Nucl. Phys. B 844 (2011) 89; Brandenberger, R. H., Kounnas, C., Partouche, H., Patil, S. P., & Toumbas, N., Cosmological Perturbations Across an S-brane, J. Cosmol. Astropart. Phys. 03 (2014) 015.
Harko11: Harko, T., Lobo, F. S. N., Nojiri, S. & Odintsov, S. D., f(R,T) gravity, Phys. Rev. D 84 (2011) 024020.
Alvarenga13: Alvarenga, F. G., de la Cruz-Dombriz, A., Houndjo, M. J. S., Rodrigues, M. E. & Sáez-Gómez, D., Dynamics of scalar perturbations in f(R,T) gravity, Phys. Rev. D 87 (2013) 103526.
Shabani13: Shabani, H., & Farhoudi, M., f(R,T) cosmological models in phase-space, Phys. Rev. D 88 (2013) 044048.
Harko14Harko, T., Thermodynamic interpretation of the generalized gravity models with geometry-matter coupling, Phys. Rev. D 90(2013) 044048. Shabani14 Shabani, H., & Farhoudi, M. Cosmological and solar system consequences of f( R,T) gravity models, Phys. Rev. D 90 (2014) 044031. Singh14 Singh, C. P. & Singh, V. Reconstruction of modified f( R, T) gravity with perfect fluid cosmological models, Gen. Relativ. Gravit. 46 (2014) 1696. Sharif14 Sharif, M. & Zubair, M. Cosmological reconstruction and stability in f( R, T) gravity, Gen. Relativ. Gravit. 46 (2014) 1723. Alves16 Alves, M. E. S.,Moraes, P. H. R. S., de Araujo, J. C. N. & Malheiro, M.,Gravitational waves in f( R,T) and f( R, T^ϕ) theories of gravity, Phys. Rev. D 94 (2016) 024032. Sun16 Sun, G. & Huang, Y.-C.,The cosmology in f( R,T) gravity without dark energy, Int. J. Mod. Phys. D 25 (2016) 1650038. Zaregonbadi161 Zaregonbadi, R.,Farhoudi, M. & Riazi, N., Dark Matter From f( R,T) Gravity, Phys. Rev. D 94 (2016) 084052. Zaregonbadi162Zaregonbadi R. &Farhoudi, M., Cosmic acceleration from matter-curvature coupling, Gen. Relativ. Gravit. 48 (2016) 142. Shabani171Shabani, H. & Ziaie, A. H., Stability of the Einstein static universe in f( R,T) gravity, Eur. Phys. J.C 77 (2017) 31. Shabani172 Shabani, H. & Ziaie, A. H., Consequences of energy conservation violation: Late time solutions of Λ( T) CDM subclass of f( R,T) gravity using dynamical system approach, Eur. Phys. J. C 77 (2017) 282. Shabani173 Shabani, H. & Ziaie, A. H., Late-time cosmological evolution of a general class of f(R,T) gravity with minimal curvature-matter coupling, Eur. Phys. J. C 77 (2017) 507. Shabani174 Shabani, H., Cosmological consequences and statefinder diagnosis of non-interacting generalized Chaplygin gas in f(R,T) gravity, Int. J. Mod. Phys. D. 26 (2017) 1750120. Moraes17 Moraes, P. H. R. S., Paula, de W. & Correa, P. A. C., Charged wormholes in f( R,T) extended theory of gravity arXiv:1710.07680 [gr-qc]. modelwormprd Moraes, P. H. R. S. and Sahoo, P. K., Modeling wormholes in f(R,T) gravity, Phys. Rev. D 96 (2017) 044038. Deb18Deb, D., Rahaman, F., Ray, S. & Guhaa, B.K., Strange stars in f( R,T) gravity,, J. Cosmol. Astropart. Phys. 03 (2018) 044.Moraes18 Moraes, P. H. R. S. & Correa, P. A. C., Evading the non-continuity equation in the f( R,T) formalism,Eur. Phys. J.C 78 (2018) 192. Sahoo18 Sahoo, P.K., Moraes, P.H.R.S. & Sahoo, P., “Wormholes in R^2-gravity within the f(R,T) formalism" ,Eur. Phys. J.C 78 (2018) 46. Shabani18Shabani, H. & Ziaie, A. H., Interpretation of f( R,T) gravity in terms of a conserved effective fluid,, Int. J. M. Phys. A 33 (2018) 1850050. Stefancic05 Štefančić, H., Dark energy transition between quintessence and phantom regimes: An equation of state analysis, Phys. Rev. D 71 (2005) 124036. Nojiri05 Nojiri, S. &Odintsov, S. D., Inhomogeneous equation of state of the universe: Phantom era, future singularity, and crossing the phantom barrier, Phys. Rev. D 72 (2005) 023003. Barrow90 Barrow, J. D., Graduated inflationary universes, Phys. Lett. B 235 (1990) 40. Mukherjee06 Mukherjee, S., Paul, B. C. D. & Beesham, A., Emergent universe with exotic matter, Class. Quantum Grav. 23 (2006) 46927. Stefancic052 Štefančić, H., Expansion around the vacuum equation of state: Sudden future singularities and asymptotic behavior, Phys. Rev. D 71 (2005) 084024. Nojiri052 Nojiri, S.,Odintsov, S. D.& Tsuijikawa, S., Properties of singularities in the (phantom) dark energy universe, Phys. Rev. D 71 (2005) 063004. 
Contreras16 Contreras, F.,Cruz, N.& Gonzàles, E., Generalized equations of state and regular universes, J. Phys. Conf. Ser. 720 (2016) 012014. Babichev05 Babichev, E., Dokuchaev, V. & Eroshenko, Yu., Dark energy cosmology with generalized linear equation of state, Class. Quantum Grav. 22 (2005) 143. Sharif13 Sharif, M. & Zubair, M., Energy conditions constraints and stability of power law solutions in f( R, T) gravity, J. Phys. Soc. Jap. 82 (2013) 014002. Molina99 Molina-París, C. & Visser, M.,Minimal conditions for the creation of a Friedman–Robertson–Walker universe from a bounce, Phys. Lett. B 455 (1999) 90. Frieman08 Frieman, J. A., Turner, M. S. & Huterer, D.,Dark energy and the accelerating universe, Annu. Rev. Astron. Astrophys. 46 (2008) 385. Chavanis13 Chavanis, P.-H.,A cosmological model based on a quadratic equation of state unifying vacuum energy, radiation, and dark energy, Journal of Gravity 2013 (2013) 682451. Bamba12 Bamba, K., Capozziello, S., Nojiri, S. & Odintsov, S. D., Dark energy cosmology: the equivalent description via different theoretical models and cosmography tests, Astrophys. Space Sci. 342 (2012) 155. Cai15 Cai, Y. F., Qiu, T., Brandenberger, R., Piao, Y. S. & Zhang, X., A ΛCDM bounce scenario, J. Cosmol. Astropart. Phys. 03 (2015) 006. Tsujikawa08 Tsujikawa, S., Uddin, K. &Tavakol, R., “Density perturbations in f(R,T) gravity in metric and Palatini formalisms", Phys. Rev. D 77 (2008), 043007. Malik05 Malik, K.A. & Wands, D., “Adiabatic and entropy perturbations with interacting fluids and fields", J. Cosmol. Astropart. Phys. 02 (2005) 007. Hwang01 Hwang, J.-C. &Noh, H., “Gauge-ready formulation of cosmological kinetic theory in generalized gravity theory", Phys. Rev. D 65 (2001), 023512. Houndjo13 Houndjo, M.J.S., Batista, C.E.M., Batista, Campos, J.P., and Piattel, O.F., “Finite-time singularities in f(R, T) gravity and the effect of conformal anomaly", Can. J. Phys. 91 (2013)547. sahoo Sahoo, P. K., Sahoo, P., Bishi, B. K., and Aygün, S., “Magnetized strange quark model with Big Rip singularity in f( R, T) gravity", Mod. Phys. Lett. A 32 (2018) 1750105. alisymm Yadav, A. K., Ali, A. T., Invariant Bianchi type I models in f( R, T) Gravity, Int. J. Geom. Methods. Mod. Phys. 15 (2018) 1850026. Awad18 Awad, A., Hanafy, W.E., Nashed, G.G.L. & Saridakis, E.N., “Phase portraits of general f (T) cosmology", J. Cosmol. Astropart. Phys. 02 (2018) 052.
http://arxiv.org/abs/1708.07874v6
{ "authors": [ "Hamid Shabani", "Amir Hadi Ziaie" ], "categories": [ "gr-qc", "hep-th" ], "primary_category": "gr-qc", "published": "20170825201749", "title": "Bouncing cosmological solutions from f(R,T) gravity" }
^1 Peter the Great St. Petersburg Polytechnic University, Saint Petersburg, Russia ^2 Ioffe Institute, Saint Petersburg, Russia [email protected]

The evolution of the inclination angle and the precession damping of radio pulsars are considered. It is assumed that the neutron star consists of 3 "freely" rotating components: the crust and two core components, one of which contains pinned superfluid vortices. We suppose that each component rotates as a rigid body. The influence of the small-scale magnetic field on the star's braking process is also examined. Within the framework of this model the star can simultaneously have glitch-like events combined with long-period precession (with periods 10-10^4 years). It is shown that the case of a small quantity of pinned superfluid vortices seems to be more consistent with observations.

§ INTRODUCTION
Radio pulsars can be considered as exceptionally stable clocks. But sometimes, besides smooth braking, their rotation suffers irregularities called glitches <cit.>. Usually this is related to superfluid pinning. Pinned vortices do not allow the superfluid to participate in pulsar braking. This causes the sudden unpinning of vortices at some moments and the transfer of angular momentum from the superfluid to the crust <cit.>, which is observed as a glitch. There is some evidence that isolated neutron stars may precess with large precession periods T_p. Pulsar B1821-11 precesses with period T_p ∼ 500 <cit.>. Some other pulsars show periodic variations which may be explained by precession with T_p ∼ 100-500 <cit.>. The changing pulse profile of the Crab pulsar may also be related to precession with T_p ∼ 10^2 <cit.>. The increase of the radio luminosity of the Geminga pulsar <cit.> may be related to precession with T_p > 10 as well. The non-regular pulse variations known as "red noise" <cit.>, with timescales T ∼ 1-10^4, can also be related to precession <cit.>. The problem is that the pinning of vortices leads to precession with periods T_p ∼ P · (I_tot / L_g) ∼ (10^2-10^6) P <cit.>, where P is the pulsar period, I_tot is the moment of inertia of the star, and L_g is the angular momentum of the pinned superfluid. Such precession seems to be damped very quickly, which is incompatible with the existence of long-period precession <cit.>. In this paper we consider a model of a rotating neutron star proposed in <cit.>. It allows the coexistence of long-term precession and quasi-glitch events. We also take into account the influence of the small-scale magnetic field on pulsar braking.

§ BASIC EQUATIONS
We assume that the neutron star consists of 3 components, which we will call the crust (c-component), the g-component and the r-component.

The crust (c-component). We suppose that it rotates as a rigid body with angular velocity Ω⃗_c. It is the outer component, so the angular velocity Ω⃗_c is the observed pulsar angular velocity Ω⃗, Ω = 2π/P. We suppose that M⃗_c = I_cΩ⃗_c, Ṁ⃗̇_c = K⃗_ext + N⃗_gc + N⃗_rc, where M⃗_c is the angular momentum of the crust, I_c is its moment of inertia, K⃗_ext is the external torque of magnetospheric origin acting on the crust, and N⃗_gc and N⃗_rc are the torques acting on the crust due to its interaction with the g- and r-components, respectively.

The g-component.
It is one of the two inner components. We assume that it consists of normal matter rotating as a rigid body with angular velocity Ω⃗_g and superfluid matter firmly pinned to the normal matter, so that the superfluid vortices rotate together with the normal matter with angular velocity Ω⃗_g: M⃗_g = I_gΩ⃗_g + L⃗_g, Ṁ⃗̇_g = N⃗_cg + N⃗_rg, L̇⃗̇_g = [Ω⃗_g×L⃗_g], where M⃗_g is the total angular momentum of the g-component, I_g is the moment of inertia of its normal matter, L⃗_g is the angular momentum of the pinned superfluid, and N⃗_cg and N⃗_rg are the torques acting on the g-component due to its interaction with the crust and the r-component, respectively.

The r-component. It is the second inner component. We assume that it rotates as a rigid body with angular velocity Ω⃗_r: M⃗_r = I_rΩ⃗_r, Ṁ⃗̇_r = N⃗_cr + N⃗_gr, where M⃗_r is the angular momentum of the r-component, I_r is its moment of inertia, and N⃗_cr and N⃗_gr are the torques acting on the r-component due to its interaction with the crust and the g-component, respectively.

In the crust frame of reference the equations of rotation (<ref>)-(<ref>) can be rewritten as Ω̇⃗̇ = R⃗_gc + R⃗_rc + S⃗_ext, μ̇⃗̇_cg + [Ω⃗×μ⃗_cg] + [Ω⃗×ω⃗_g] + [μ⃗_cg×ω⃗_g] = R⃗_cg + R⃗_rg - R⃗_gc - R⃗_rc - S⃗_ext, μ̇⃗̇_cr + [Ω⃗×μ⃗_cr] = R⃗_cr + R⃗_gr - R⃗_gc - R⃗_rc - S⃗_ext, ω̇⃗̇_g = [μ⃗_cg×ω⃗_g], where μ⃗_ij = Ω⃗_j - Ω⃗_i, N⃗_ij = -N⃗_ji, R⃗_ij = N⃗_ij/I_j, i,j = c,g,r, S⃗_ext = K⃗_ext/I_c and ω⃗_g = L⃗_g/I_g. For the sake of simplicity we suppose that N⃗_ij = -I_j (α_ij μ^||_ij e⃗_Ω + β_ij μ⃗^⊥_ij + γ_ij [e⃗_Ω×μ⃗^⊥_ij]), where α_ij, β_ij, γ_ij are some constants, e⃗_Ω = Ω⃗/Ω, and we have introduced the parallel component A^|| = A⃗·e⃗_Ω and the perpendicular component A⃗^⊥ = A⃗ - A^|| e⃗_Ω for any vector A⃗.

First let us consider the equilibrium state for zero external torque, K⃗_ext = 0. In this case, the whole star rotates as a rigid body (μ⃗_ij = 0) and, hence, R⃗_ij = 0. Equations (<ref>)-(<ref>) may be written as Ω̇⃗̇ = 0, ω̇⃗̇_g = 0, ω⃗_g = ω_g e⃗_Ω. Let us further consider a small perturbation to the equilibrium state. We will treat the values μ⃗_ij, ω⃗_g^⊥ and S⃗_ext as small perturbations and neglect any term quadratic in these values. Hence, equations (<ref>)-(<ref>) may be written as Ω̇ = R_gc^|| + R_rc^|| + S_ext^||, μ̇_cg^|| = R_cg^|| + R_rg^|| - R_gc^|| - R_rc^|| - S_ext^||, μ̇_cr^|| = R_cr^|| + R_gr^|| - R_gc^|| - R_rc^|| - S_ext^||, ω̇_g^|| = 0, Ω ė⃗̇_Ω = R⃗_gc^⊥ + R⃗_rc^⊥ + S⃗_ext^⊥, μ̇⃗̇_cg^⊥ - (ω_g^|| - Ω)[e⃗_Ω×μ⃗_cg^⊥] + [Ω⃗×ω⃗_g^⊥] = R⃗_cg^⊥ + R⃗_rg^⊥ - R⃗_gc^⊥ - R⃗_rc^⊥ - S⃗_ext^⊥, μ̇⃗̇_cr^⊥ + [Ω⃗×μ⃗_cr^⊥] = R⃗_cr^⊥ + R⃗_gr^⊥ - R⃗_rc^⊥ - R⃗_gc^⊥ - S⃗_ext^⊥, ω̇⃗̇_g^⊥ = -(ω_g^||/Ω)(R⃗_gc^⊥ + R⃗_rc^⊥ + S⃗_ext^⊥ + [Ω⃗×μ⃗_cg]).

In order to calculate the magnetospheric torque K⃗_ext acting on the crust we use the model proposed in <cit.>. It is assumed that the neutron star is braked simultaneously by both magnetodipolar and current losses. Hence, K⃗_ext = -(Ĩ_tot/τ_0)(e⃗_Ω - (1-α)cosχ e⃗_m - R_eff [e⃗_Ω×e⃗_m]), where m⃗ = m e⃗_m is the dipolar magnetic moment of the neutron star, χ is the inclination angle (the angle between e⃗_Ω and e⃗_m, see fig. <ref>), τ_0 = (3/2) (c^3/(m^2Ω^3)) Ĩ_tot, Ĩ_tot = I_c + I_g + I_r, and the coefficient R_eff is related to the magnetic field inertia <cit.>. In this paper we assume that R_eff = (9/10) c/(Ω r_ns) ∼ 5·10^3 (P/1 s) <cit.>, where r_ns is the neutron star radius.
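The quoted estimate for R_eff is easy to check numerically; the short sketch below assumes a fiducial neutron star radius r_ns = 10 km = 10^6 cm (the radius is not fixed in the text):

import math

# quick check of R_eff = (9/10) c/(Omega r_ns) ~ 5e3 (P/1 s),
# assuming a fiducial radius r_ns = 10 km
c, r_ns = 3.0e10, 1.0e6                  # cgs units: cm/s, cm
for P in (0.1, 1.0, 10.0):               # spin period in seconds
    Omega = 2.0*math.pi/P
    print(P, 0.9*c/(Omega*r_ns))         # grows linearly with P, ~4.3e3 * (P/1 s)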
The coefficient α is related to the value of the current flowing through the pulsar tubes. In this paper we assume that there are only "inner gaps" with free electron emission from the neutron star surface in the pulsar tube. Hence, the magnitude of the current depends on the structure of the surface small-scale magnetic field (see fig. <ref>). The value of α averaged over the precession angle ϕ_Ω (see fig. <ref>), <α>(χ) = (1/2π)∫_0^2π α(χ,ϕ_Ω) dϕ_Ω, is shown in fig. <ref>. Here ν = B_sc/B_dip, B_sc is the induction of the small-scale magnetic field, and B_dip = 2m/r_ns^3 is the induction of the dipolar field at the magnetic pole of the neutron star.

§ QUASISTATIC APPROXIMATION
First, let us take into account that τ_rel ≪ τ_0, where τ_rel ∼ max(1/α_ij, 1/β_ij, 1/γ_ij) ∼ (1-10^7) s is the relaxation time <cit.>. Hence, we can consider the rotation of the neutron star under the small, slowly varying torque K⃗_ext. In this case, it is possible to neglect the terms μ̇⃗̇_ij and ω̇⃗̇_g^⊥ in (<ref>)-(<ref>). Using (<ref>)-(<ref>) we immediately obtain that Ω̇ = K_ext^||/Ĩ_tot, so the star brakes as a rigid body with moment of inertia equal to Ĩ_tot. Equations (<ref>) and (<ref>) give us μ_cg^|| = -S_ext^|| (I_c/Ĩ_tot) · (α_cr + α_rg + α_gr)/(α_cgα_cr + α_rgα_cr + α_cgα_gr), μ_cr^|| = -S_ext^|| (I_c/Ĩ_tot) · (α_cg + α_rg + α_gr)/(α_cgα_cr + α_rgα_cr + α_cgα_gr). Let us introduce a complex number A^⊥ associated to an arbitrary perpendicular vector A⃗^⊥ in the following way: if e⃗_z = e⃗_Ω then A⃗^⊥ = A_x e⃗_x + A_y e⃗_y and A^⊥ = A_x + iA_y. Hence, equations (<ref>)-(<ref>) give us μ_cg^⊥ = (S_ext^⊥/Δ_⊥)·(iΩ + ξ_cr + ξ_gr), μ_cr^⊥ = (S_ext^⊥/Δ_⊥)·(iΩ + ξ_gr), ω_g^⊥ = (S_ext^⊥/(ΩΔ_⊥))·(ω_g^||(iΩ + ξ_cr + ξ_gr) - Ωξ_cg + ξ_cgξ_cr + ξ_cgξ_gr + ξ_rgξ_cr), where ξ_pq = β_pq + iγ_pq and Δ_⊥ = Ω^2 - iΩ(ξ_cr + ξ_rc + ξ_gr + ξ_gc) - (ξ_gcξ_cr + ξ_gcξ_gr + ξ_rcξ_gr). In the case of weak viscosity, |ξ_pq| ≪ Ω, we have μ⃗_cg^⊥ ≈ (1/Ω)[e⃗_Ω×S⃗_ext^⊥], μ⃗_cr^⊥ ≈ (1/Ω)[e⃗_Ω×S⃗_ext^⊥], ω⃗_g^⊥ ≈ (ω_g^||/Ω^2)[e⃗_Ω×S⃗_ext^⊥]. In the last equation we also assume that |ξ_pq| ≪ ω_g^||. Taking into account equations (<ref>)-(<ref>) and (<ref>), one can rewrite equations (<ref>) and (<ref>) as Ω̇ = -(Ω/τ_0)(sin^2χ + αcos^2χ), χ̇ = -(Ĩ_tot/I_c)(1/τ_0) sinχ cosχ · (B(1-α) - ΓR_eff), ϕ̇_Ω = -(Ĩ_tot/I_c)(1/τ_0) cosχ · (B R_eff + Γ(1-α)), where the coefficients B and Γ are defined by B + iΓ = (Ω/Δ_⊥)·(Ω - i(ξ_cr + ξ_gr)). In the case of weak viscosity, |ξ_pq| ≪ Ω, we have B ≈ 1 - (γ_gc + γ_rc)/Ω, Γ ≈ (β_gc + β_rc)/Ω. Consequently, equation (<ref>) shows that as long as the inclination angle χ differs from 0^∘ and 90^∘ the star will precess with period T_p = (τ_0/R_eff) · (I_c/Ĩ_tot) · (2π/cosχ) ∼ (10^-5-10^-3)τ_0 ∼ 10-10^3. It is also worth noting that as long as the quasistatic approximation is valid, the pinned superfluid L⃗_g does not influence the star's rotation. We suppose that the period of precession T_p ≪ (I_c/Ĩ_tot)·τ_0 ≪ τ_0. Hence, we can average equations (<ref>) and (<ref>) over precession, obtaining the equation dχ/dP = -(1/P)·(Ĩ_tot/I_c)·sinχ cosχ·(B(1-<α>) - ΓR_eff)/(sin^2χ + <α>cos^2χ).

§ QUASI-GLITCH EVENTS
According to equation (<ref>) the direction of L⃗_g follows the vector e⃗_Ω, so the pinned superfluid vortices are directed almost along the crust rotation axis. Consequently, the direction of the pinned vortices precesses together with e⃗_Ω. However, due to ideal pinning (see the last equation in (<ref>)), the magnitude of the vector L⃗_g does not change at all. Hence, due to pulsar braking, the difference between the velocities of the superfluid and the normal matter will grow, so a glitch must occur in the g-component at some moment. We suppose that the glitch occurs and relaxes on a time-scale much smaller than the period of precession T_p. This allows us to consider e⃗_Ω as a constant vector in equations (<ref>), (<ref>), (<ref>)-(<ref>) and, for simplicity, to neglect the action of the external torque S⃗_ext.
Let us assume that before the glitch the neutron star rotates as a rigid body. At some moment (t=0) a small amount of angular momentum ΔL⃗_g = ΔL_g e⃗_Ω is transferred from the superfluid part to the normal part of the g-component. Then the solution of equations (<ref>)-(<ref>) gives us (see fig. <ref>) Ω(t) = ΔΩ(1 - e^{-p_+ t} - Q(1 - e^{-p_- t})), where ΔΩ = ΔΩ_∞/(1-Q), ΔΩ_∞ = ΔL_g/Ĩ_tot, the coefficients p_+ and p_- (p_+ > p_-) are the roots of the equation p^2 - (α_cg + α_rg + α_cr + α_gr + α_gc + α_rc) p + (α_gc + α_rg + α_cg)·(α_cr + α_gr + α_rc) + (α_rc - α_rg)·(α_gr - α_gc) = 0, and Q = (Ĩ_tot α_cg - I_c p_+)/(Ĩ_tot α_cg - I_c p_-). In the case of α_cg ≫ (1 + I_c/I_r)α_rc, (1 + I_g/I_r)α_rg we have p_+ ≈ (1 + I_g/I_c)α_cg, p_- ≈ (Ĩ_tot/(I_c + I_g))(α_cr + α_gr), 1 - Q ≈ (I_c + I_g)/Ĩ_tot. If one wants to relate these quasi-glitch events to observed glitches, then 1/p_+ and 1/p_- should be interpreted as the glitch growth and relaxation times, respectively. We obtain that 1/p_+ ≤ 1 <cit.> and 1/p_- ∼ 1-10^2 <cit.>. Unfortunately, 1-Q ∼ 10^-2-10^-1 in our model. That may not be so bad for glitches in some pulsars like the Crab (Q ≥ 0.8 <cit.>) or J0205+6449 (Q ≈ 0.77 <cit.>), but it obviously contradicts glitches in most pulsars, for which Q ≪ 1 <cit.>. In particular, our model does not describe glitches in the Vela pulsar (Q ≤ 0.2 <cit.>).

§ RESULTS
In the present paper we will use the following component interaction model. The crust and the g-component interact with the r-component like normal matter with superfluid, and there is normal viscous friction between the crust and the g-component <cit.>: β_cr = β_gr = Ωσ/(1+σ^2), α_cr = α_gr = 2β_gc, γ_cr = γ_gr = -σβ_gc, β_cg = α_cg, γ_cg = 0. The solution of equation (<ref>) for different initial inclination angles χ and initial period P = 10 in the case of σ = 10^-10, α_cg = 10^-1 s^-1, I_c/Ĩ_tot = 10^-2, I_g/Ĩ_tot = 10^-3 is shown in figs. <ref> and <ref>. We can see that in most cases the star forgets the initial value of the inclination angle χ very rapidly and evolves to the equilibrium inclination angle χ_eq, at which sinχ cosχ·(B(1-<α>) - ΓR_eff) ≈ 0. The subsequent evolution of the angle χ is caused by the slow change of the equilibrium angle χ_eq due to pulsar braking and the consequent growth of R_eff. The solution (<ref>) for initial inclination angle χ = 45^∘, initial period P = 10 and different values of ν, in the case of α_cg = 10^-1 s^-1, I_c/Ĩ_tot = 10^-2, I_g/Ĩ_tot = 10^-3, is shown in fig. <ref>. In this case 1-Q ≈ 10^-2. The same but for I_c/Ĩ_tot = 10^-1, I_g/Ĩ_tot = 10^-5 is shown in figs. <ref>-<ref>. In this case Q ≈ 0.9. The increase of I_c leads to slower evolution to the equilibrium angle χ_eq. The equilibrium inclination angle evolves slowly due both to the decrease of I_g and, hence, weaker dissipation, and to the increase of I_c and, hence, a larger precession period T_p. Consequently, evolutionary tracks may pass through most pulsars. The same but for I_g/Ĩ_tot = 10^-3, I_r/Ĩ_tot = 10^-3 is shown in figs. <ref>-<ref>. In this case, I_c ≈ I_tot and Q ≈ 10^-3. The star rotates almost as a rigid body. Consequently, the inclination angle changes very slowly and the pulsars usually do not reach the equilibrium angle during their lifetime.
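The healing parameters quoted above can be cross-checked by evaluating the quadratic for p_± and the expression for Q directly. In the sketch below, the reciprocity I_j α_ij = I_i α_ji follows from N⃗_ij = -N⃗_ji together with R⃗_ij = N⃗_ij/I_j; the values of α_cr and α_gr are illustrative assumptions chosen so that the limit α_cg ≫ (1+I_c/I_r)α_rc, (1+I_g/I_r)α_rg holds:

import math

# I_j*alpha_ij = I_i*alpha_ji; all coupling values below are illustrative (s^-1)
I_tot, I_c, I_g = 1.0, 1e-2, 1e-3
I_r = I_tot - I_c - I_g
a_cg = 1e-1                     # as in the Results section
a_gc = a_cg*I_g/I_c
a_cr = 1e-6; a_rc = a_cr*I_r/I_c
a_gr = 1e-6; a_rg = a_gr*I_r/I_g
s1 = a_cg + a_rg + a_cr + a_gr + a_gc + a_rc
s0 = (a_gc + a_rg + a_cg)*(a_cr + a_gr + a_rc) + (a_rc - a_rg)*(a_gr - a_gc)
d = math.sqrt(s1**2 - 4.0*s0)
p_plus, p_minus = (s1 + d)/2.0, (s1 - d)/2.0
Q = (I_tot*a_cg - I_c*p_plus)/(I_tot*a_cg - I_c*p_minus)
print(p_plus, 1.0/p_minus, 1.0 - Q)             # growth rate, relaxation time, healing
print((1 + I_g/I_c)*a_cg, (I_c + I_g)/I_tot)    # the approximations quoted above

For these values the exact roots reproduce the approximations p_+ ≈ (1 + I_g/I_c)α_cg and 1-Q ≈ (I_c + I_g)/Ĩ_tot to within a few percent.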
§ DISCUSSION
We consider a model proposed in <cit.> that simultaneously allows long-period precession and quasi-glitch events, taking into account the influence of the small-scale magnetic field on pulsar braking. For simplicity we consider only the case of axially symmetric precession and do not take into account that the presence of the small-scale magnetic field makes the precession triaxial <cit.>. The main problem of the proposed model is the exact nature of the components, especially of the g-component. Strictly speaking, we only postulate the existence of components with certain properties. Let us speculate a little about the possible nature of these components. The g-component may consist of tangles of closed fluxoids, normal matter inside the tangles and pinned superfluid (see fig. <ref>). These tangles "freely" flow inside the stellar core. The collapse of the closed fluxoids is prevented by the repulsion of the pinned vortices. Similar configurations are considered in <cit.>, where vortices are pinned to fluxoids forming a regular toroidal magnetic field inside the neutron star core, and in <cit.>, where the "free" flow of magnetic field tangles is discussed. We expect that L_g ∼ 0.1 I_gΩ. Hence, if we suppose that L_g ∼ 10^-2 I_totΩ <cit.>, then I_g ∼ 10^-3 I_tot. In this case, the c-component is exactly the crust, so I_c ∼ 10^-2 I_tot <cit.>, and the r-component consists of the normal and superfluid matter located outside the tangles, so I_r ≈ I_tot. We suppose that the strong interaction between the crust and the tangles may be related to a small number of fluxoids escaping from the tangles. It is worth noting that a tangle located deep inside the core interacts weakly with the crust and may produce something like a "slow glitch" <cit.>. The main problem of such a configuration is the stability of the tangles and their partial destruction during glitches. The g-component may also be created by a small rigid core which can exist in the central region of the star <cit.> (see fig. <ref>), with superfluid vortices pinned to it. However, in this case, we must assume that the vortices are extremely rigid, so that a vortex pinned by its central part to the rigid core is fixed outside the rigid core as well. Consequently, if we assume that the radius of the rigid core is r_g ∼ (0.1-0.2) r_ns, then the moment of inertia of the "normal" matter inside the rigid core is I_g ∼ 0.1 (r_g/r_ns)^5 I_tot ∼ (10^-6-10^-5) I_tot. However, it controls the motion of the vortices and, hence, the superfluid flow in a volume ∼ r_g^2 · r_ns, see fig. <ref>. Consequently, we can estimate the angular momentum of the pinned superfluid: L_g ∼ (r_g/r_ns)^2 I_totΩ ∼ 10^-2 I_totΩ. In this case, the c-component consists of the crust and the normal matter of the core, I_c ∼ 10^-1 I_tot <cit.>, and the r-component is the superfluid which is not pinned to the rigid core, I_r ≈ I_tot. We suppose that the strong interaction between the g-component and the crust requires that the magnetic field penetrate from the crust down to the boundary of the rigid core. Consequently, core superconductivity must be absent (at least outside the rigid core). In this paper we assume that the interaction between the c-component and the g-component is strong in order to provide rapid angular momentum transfer from the g-component to the crust. Hence, if I_g ∼ I_tot, precession is damped very rapidly and the inclination angle quickly evolves to χ ≈ 0^∘ or χ ≈ 90^∘. So in order to preserve precession during the pulsar lifetime it is necessary to suppose that I_g ≪ I_tot. We can relax this restriction if we suppose that the friction between the crust and the g-component increases during a glitch.
For example, we can assume that the crust and the g-component interact through weak viscous friction between two glitches, but that during the glitch the angular momentum is rapidly transferred by Kelvin waves <cit.>, Alfven waves or sound waves. We can also relax the restriction if we assume that the g-component is slowly destroyed; for example, the tangles of fluxoids are slightly destroyed during each glitch.

The authors thank O.A. Goglichidze and D.M. Sedrakian for the useful discussions which led to the writing of this paper, A.N. Biryukov for useful discussions of pulsar precession, and V.S. Beskin, I.F. Malov, E.B. Nikitina, A.N. Kazantsev, V.A. Urpin and A.I. Chugunov for help and useful discussions.

§ REFERENCES
DAlessandro1996: F. D'Alessandro, 1996, Astrophysics and Space Science, 246, 73
Melatos2015: B. Haskell, A. Melatos, 2015, International Journal of Modern Physics D, 24, 1530008
Jones2016: G. Ashton, D.I. Jones, R. Prix, 2016, MNRAS, 458, 881
Johnston2015: M. Kerr, G. Hobbs, S. Johnston, R.M. Shannon, 2016, MNRAS, 455, 1845
Arzamasskiy2015: L. Arzamasskiy, A. Philippov, A. Tchekhovskoy, 2015, MNRAS, 453, 3540
Malofeev2012: V.M. Malofeev, O.I. Malov, S.V. Logvinenko, D.A. Teplykh, 2012, Odessa Astronomical Publications, 25, 194
Biryukov2012: A. Biryukov, G. Beskin, S. Karpov, 2012, MNRAS, 420, 103
Link2006: B. Link, 2006, A&A, 458, 881
Sedrakian1999: A. Sedrakian, I. Wasserman, J.M. Cordes, 1999, ApJ, 524, 341
Dodson2002: R.G. Dodson, P.M. McCulloch, D.R. Lewis, 2002, ApJ, 564, L85
Gurgercinoglu2017: E. Gurgercinoglu, 2017, MNRAS, 469, 2313
Yu2013: M. Yu, R.N. Manchester, G. Hobbs et al., 2013, MNRAS, 429, 688
Polyakova2009: D.P. Barsukov, P.I. Polyakova, A.I. Tsygan, 2009, Astronomy Reports, 53, 1146
Zheltoukhov2014: V.S. Beskin, A.A. Zheltoukhov, 2014, Physics Uspekhi, 57, 799
Goglichidze2015: O.A. Goglichidze, D.P. Barsukov, A.I. Tsygan, 2015, MNRAS, 451, 2564
Melatos2000: A. Melatos, 2000, MNRAS, 313, 217
Nikitina2011_1: I.F. Malov, E.B. Nikitina, 2011, Astronomy Reports, 55, 19
ATNF: R.N. Manchester et al., 2005, Astron. J., 129, 1993; http://www.atnf.csiro.au/research/pulsar/psrcat
Takatsuka1989: T. Takatsuka, R. Tamagaki, 1989, Progress of Theoretical Physics, 82, 945
Gurgercinoglu2014: E. Gurgercinoglu, M.A. Alpar, 2014, ApJ Letters, 788, L11
Glampedakis2015: K. Glampedakis, P.D. Lasky, 2015, MNRAS, 450, 1638
Yakovlev2007: P. Haensel, A.Y. Potekhin, D.G. Yakovlev, "Neutron Stars 1: Equation of State and Structure", 2007, Astrophysics and Space Science Library, 326
Shabanova2007: T.V. Shabanova, Yu.P. Shitov, 2007, Astronomy Reports, 51, 746
Link2013: B. Link, 2014, ApJ, 789, 141
http://arxiv.org/abs/1708.07505v1
{ "authors": [ "K. Y. Kraav", "M. V. Vorontsov", "D. P. Barsukov" ], "categories": [ "astro-ph.HE" ], "primary_category": "astro-ph.HE", "published": "20170824175334", "title": "The influence of small-scale magnetic field on the evolution of inclination angle and precession damping in the framework of 3-component model of neutron star" }
Department of Mathematics, Texas A&M University, College Station, TX, 77843, USA. [email protected]

In this work, we consider a FDE (fractional diffusion equation) ∂_t^α u(x,t) - a(t)Ł u(x,t) = F(x,t) with a time-dependent diffusion coefficient a(t). This is an extension of <cit.>, which deals with this FDE in one-dimensional space. For the direct problem, given an a(t), we establish the existence, uniqueness and some regularity properties with a more general domain Ω and right-hand side F(x,t). For the inverse problem of recovering a(t), we introduce an operator K one of whose fixed points is a(t) and show its monotonicity, the uniqueness of its fixed points, and the existence of its fixed points. With these properties, a reconstruction algorithm for a(t) is created and some numerical results are provided to illustrate the theory.

Keywords: fractional diffusion, fractional inverse problem, uniqueness, existence, monotonicity, iteration algorithm.
AMS subject classifications: 35R11, 35R30, 65M32.

An undetermined time-dependent coefficient in a fractional diffusion equation
Zhidong Zhang
December 30, 2023
=============================================================================

§ INTRODUCTION
This paper considers the fractional diffusion equation (FDE) with a continuous and positive coefficient function a(t): ∂_t^α u(x,t) - a(t)Ł u(x,t) = F(x,t), x∈Ω, t∈(0,T]; u(x,t) = 0, (x,t)∈∂Ω×(0,T]; u(x,0) = u_0(x), x∈Ω, where Ω is a bounded and smooth subset of ℝ^n, n=1,2,3, -Ł is a symmetric uniformly elliptic operator defined as -Ł u = -∑_i,j=1^n (a^ij(x)u_x_i)_x_j + c(x)u with the conditions a^ij,c∈ C^2(Ω) (i,j=1,…,n), ∂Ω is C^3, and ∂_t^α is the left-sided Djrbashian–Caputo α-th order derivative with respect to time t. The definition of ∂_t^α is ∂_t^α u(x,t) = (1/Γ(n-α))∫_0^t (t-τ)^{n-α-1} (d^n/dτ^n) u(x,τ) dτ with the Gamma function Γ(·) and the nearest integer n with α ≤ n. In this paper, we assume a subdiffusion process, i.e. α∈(0,1). This simplifies the definition of ∂_t^α to ∂_t^α u(x,t) = (1/Γ(1-α))∫_0^t (t-τ)^{-α} (d/dτ) u(x,τ) dτ. This work is an extension of <cit.> from a simple space domain Ω to ℝ^n; it considers a more general analysis for the direct problem and contains an existence argument for the inverse problem of recovering a(t).

This paper consists of two parts: the direct problem and the inverse problem. For the direct problem, we build the spectral representation of the weak solution u(x,t;a). The notation u(x,t;a) is used to display the dependence of the solution u on the diffusivity a(t). Then the existence, uniqueness and regularity results are proved under several assumptions on the coefficient function a(t). Unlike <cit.>, the right-hand side function F(x,t) is not of the form f(x)g(t), so the proof of regularity is more delicate. For the inverse problem, we use the single point flux data a(t) ∂u/∂ν(x_0,t;a) = g(t), x_0∈∂Ω, to recover the coefficient a(t). (We choose the data a(t) ∂u/∂ν(x_0,t;a) = g(t) instead of the classical flux ∂u/∂ν(x_0,t;a) because in practice, a(t) ∂u/∂ν(x_0,t;a) is usually what is measured as the flux.) For the reconstruction, we only consider recovering a continuous and positive a(t), to match the assumptions set in the direct problem. Acting on the flux data, we introduce an operator K one of whose fixed points is the coefficient a(t). Using the weak maximum principle <cit.>, we establish the monotonicity and uniqueness of the fixed points of the operator K, and the proof of uniqueness leads to a numerical reconstruction algorithm.
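Since α∈(0,1), the Caputo derivative above is a weakly singular convolution of u_t; on a uniform time grid it is commonly approximated by the L1 scheme. The following minimal sketch is only an illustration of the definition (the paper does not commit to a particular discretization at this point); the test function u(t)=t, for which ∂_t^α t = t^{1-α}/Γ(2-α), is reproduced essentially exactly:

import numpy as np
from math import gamma

def caputo_l1(u, dt, alpha):
    # L1 approximation of the Caputo derivative, 0 < alpha < 1,
    # on the uniform grid t_m = m*dt; returns values at t_1, ..., t_M
    du = np.diff(u)
    a = lambda k: (k + 1)**(1 - alpha) - k**(1 - alpha)
    out = []
    for m in range(1, len(u)):
        w = np.array([a(m - j) for j in range(1, m + 1)])
        out.append(w @ du[:m] * dt**(-alpha) / gamma(2 - alpha))
    return np.array(out)

alpha = 0.5
t = np.linspace(0.0, 1.0, 201)
err = caputo_l1(t, t[1] - t[0], alpha) - t[1:]**(1 - alpha)/gamma(2 - alpha)
print(np.max(np.abs(err)))   # machine precision: L1 is exact for piecewise-linear u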
Since we consider a multidimensional domain Ω here, the Sobolev Embedding Theorem requires that we add the condition (<ref>) on the operator -Ł to ensure the C^1-regularity of the series representation of u. Then the operator K is well-defined; the proofs can be seen in Section 4. This is a significant difference from <cit.>. Furthermore, an existence argument for the fixed points of K is included in this paper, which <cit.> does not contain. The rest of this paper is organized as follows. In Section 2, we collect some preliminary results about fractional calculus and the eigensystem of -Ł. The direct problem is discussed in Section 3, i.e. we establish the existence, uniqueness and some regularity results for the weak solution of FDE (<ref>). Then Section 4 deals with the inverse problem of recovering a(t). Specifically, an operator K is introduced at the beginning of that section; then its monotonicity and the uniqueness of its fixed points give an algorithm to recover the coefficient a(t). In particular, the existence argument for the fixed points of K is included in that section. In Section 5, some numerical results are presented to illustrate the theoretical basis.

§ PRELIMINARY MATERIAL
§.§ Mittag-Leffler function
In this part, we describe the Mittag-Leffler function, which plays an important role in fractional diffusion equations. This is a two-parameter function defined as E_α,β(z) = ∑_k=0^∞ z^k/Γ(kα+β), z∈ℂ. It generalizes the natural exponential function in the sense that E_1,1(z) = e^z. We list some important properties of the Mittag-Leffler function for future use. Let 0<α<2 and β∈ℝ be arbitrary, and απ/2 < μ < min(π,απ). Then there exists a constant C=C(α,β,μ)>0 such that |E_α,β(z)| ≤ C/(1+|z|), μ ≤ |arg(z)| ≤ π. This proof can be found in <cit.>. For λ>0, α>0 and n∈ℕ^+, we have d^n/dt^n E_α,1(-λt^α) = -λ t^{α-n} E_α,α-n+1(-λt^α), t>0. In particular, if we set n=1, then there holds d/dt E_α,1(-λt^α) = -λ t^{α-1} E_α,α(-λt^α), t>0. This is <cit.>. If 0<α<1 and z>0, then E_α,α(-z) ≥ 0. This proof can be found in <cit.>. For 0<α<1, E_α,1(-t^α) is completely monotonic, that is, (-1)^n d^n/dt^n E_α,1(-t^α) ≥ 0, for t>0 and n=0,1,2,⋯. See <cit.>.

§.§ Fractional calculus
In this part, we collect some results of fractional calculus. The next lemma states the extremal principle of ∂_t^α. Fix 0<α<1 and let f(t)∈ C[0,T] with ∂_t^α f ∈ C[0,T] be given. If f attains its maximum (minimum) over the interval [0,T] at the point t=t_0, t_0∈(0,T], then ∂_t^α f(t_0) ≥ (≤) 0. Even though the conditions are different from the ones of <cit.>, the maximum case can be proved following the proof of <cit.>. For the minimum case, we only need to apply the result to -f. The following lemma about the composition between ∂_t^α and the fractional integral I_t^α is presented in <cit.>. Define the Riemann–Liouville α-th order integral I_t^α as I_t^α u = (1/Γ(α))∫_0^t (t-τ)^{α-1} u(τ) dτ. For 0<α<1, u(t), ∂_t^α u∈ C[0,T], we have (∂_t^α ∘ I_t^α u)(t) = u(t), (I_t^α ∘ ∂_t^α u)(t) = u(t) - u(0), t∈[0,T].

§.§ Eigensystem of -Ł
Since -Ł is a symmetric uniformly elliptic operator, we denote the eigensystem of -Ł by {(λ_n,ϕ_n): n∈ℕ^+}. Then we have 0<λ_1 ≤ λ_2 ≤ ⋯, where finite multiplicity is possible, λ_n→∞, and {ϕ_n: n∈ℕ^+}⊂ H^2(Ω)∩H_0^1(Ω) forms an orthonormal basis of L^2(Ω). Moreover, with the condition (<ref>), for each n∈ℕ^+ it holds that ϕ_n ∈ H^3(Ω) <cit.>. Then by the Sobolev Embedding Theorem, we have ϕ_n∈ C^1(Ω), and ∂ϕ_n/∂ν(x_0) is well-defined for each n∈ℕ^+. Hence, without loss of generality, we can suppose ∂ϕ_n/∂ν(x_0) ≥ 0 for each n∈ℕ^+. Otherwise, if ∂ϕ_k/∂ν(x_0) < 0 for some k∈ℕ^+, we can replace ϕ_k by -ϕ_k; -ϕ_k satisfies all the properties we need: it is an eigenfunction of -Ł corresponding to the eigenvalue λ_k, it composes an orthonormal basis of L^2(Ω) together with {ϕ_n: n∈ℕ^+, n≠k}, and ∂(-ϕ_k)/∂ν(x_0) ≥ 0. The assumption (<ref>) will be used in Section 4.
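The series defining E_{α,β} converges for all z, but a truncated sum is only reliable for moderate |z|; for large arguments the bound |E_α,β(z)| ≤ C/(1+|z|) above should be used instead. A minimal sketch (the truncation level K = 160 is an ad hoc choice) that also illustrates the complete monotonicity of E_α,1(-t^α):

import numpy as np
from math import gamma

def ml(z, alpha, beta, K=160):
    # truncated series for E_{alpha,beta}(z); adequate for moderate |z| only
    return sum(z**k/gamma(alpha*k + beta) for k in range(K))

alpha = 0.6
ts = np.linspace(0.01, 5.0, 60)
vals = np.array([ml(-t**alpha, alpha, 1.0) for t in ts])
print(bool(np.all(vals > 0.0) and np.all(np.diff(vals) < 0.0)))   # expected: True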
§ DIRECT PROBLEM–EXISTENCE, UNIQUENESS AND REGULARITY
Throughout this section, we suppose a(t), u_0(x) and F(x,t) satisfy the following assumptions: (a) a(t)∈ C^+[0,T] := {ψ∈ C[0,T]: ψ(t)>0, t∈[0,T]}; (b) F(x,t) ∈ C([0,T];L^2(Ω)); (c) u_0(x) ∈ H_0^1(Ω).

§.§ Spectral Representation
We call u(x,t;a) a weak solution of FDE (<ref>) in L^2(Ω) corresponding to the coefficient a(t) if u(·,t;a)∈ H_0^1(Ω) for t∈(0,T] and for any ψ(x)∈ H^2(Ω)∩H_0^1(Ω) it holds that (∂_t^α u(x,t;a),ψ(x)) - (a(t)Ł u(x,t;a),ψ(x)) = (F(x,t),ψ(x)), t∈(0,T]; (u(x,0;a),ψ(x)) = (u_0(x),ψ(x)), where (·,·) is the inner product in L^2(Ω). With the above definition, we give a spectral representation for the weak solution in the following lemma. Define b_n := (u_0(x),ϕ_n(x)), F_n(t) = (F(x,t),ϕ_n(x)), n∈ℕ^+. The spectral representation of the weak solution of FDE (<ref>) is u(x,t;a) = ∑_n=1^∞ u_n(t;a)ϕ_n(x), (x,t)∈Ω×[0,T], where u_n(t;a) satisfies the fractional ODE ∂_t^α u_n(t;a) + λ_n a(t) u_n(t;a) = F_n(t), u_n(0;a) = b_n, n∈ℕ^+. For each n∈ℕ^+, multiplying both sides of FDE (<ref>) by ϕ_n(x) and integrating in x over Ω allow us to deduce that (∂_t^α u(x,t;a), ϕ_n(x)) + λ_n a(t)(u(x,t;a),ϕ_n(x)) = F_n(t), where (-Ł u(x,t;a),ϕ_n(x)) = (u(x,t;a),-Łϕ_n(x)) = λ_n (u(x,t;a),ϕ_n(x)) follows from the symmetry of -Ł. Set u_n(t;a) = (u(x,t;a), ϕ_n(x)) and define u(x,t;a) = ∑_n=1^∞ u_n(t;a)ϕ_n(x). Then (<ref>) and the completeness of {ϕ_n(x): n∈ℕ^+} lead to the desired result.

§.§ Existence and Uniqueness
In order to show the existence and uniqueness of the weak solution (<ref>), we state the following lemma <cit.>. For the Cauchy-type problem ∂_t^α y = f(t,y), y(0) = c_0, if for any continuous y(t), f(t,y)∈ C[0,T], and ∃ A>0, which is independent of y∈ C[0,T] and t∈[0,T], s.t. |f(t,y_1) - f(t,y_2)| ≤ A|y_1 - y_2|, then there exists a unique solution y(t) of the Cauchy-type problem, which satisfies y∈ C[0,T]. The theorem of existence and uniqueness for u(x,t;a) follows from Lemma <ref>. Suppose Assumption <ref> holds. Under Definition <ref>, there exists a unique weak solution u(x,t;a) of FDE (<ref>) with the spectral representation (<ref>), and for each n∈ℕ^+, u_n(t;a)∈ C[0,T] is the unique solution of the fractional ODE (<ref>) with ∂_t^α u_n(t;a) ∈ C[0,T]. From the spectral representation (<ref>), it suffices to show the existence and uniqueness of u_n(t;a), n∈ℕ^+. Fix n∈ℕ^+; Assumption <ref> (a) and (b) yield that the fractional ODE (<ref>) satisfies the conditions of Lemma <ref>. Hence the existence and uniqueness of u_n(t;a) hold.

§.§ Sign of u_n(t;a)
In this part, we state two properties of u_n(t;a) which play important roles in building the regularity of u(x,t;a). Given h∈ C^+[0,T] and f∈ C[0,T] with ∂_t^α f ∈ C[0,T], if f(0) ≤ (≥) 0 and ∂_t^α f + h(t)f(t) ≤ (≥) 0, then f ≤ (≥) 0 on [0,T]. Since f(t)∈ C[0,T], f(t) attains its maximum over [0,T] at some point t_0∈[0,T]. If t_0=0, then f(t) ≤ f(0) ≤ 0. If t_0∈(0,T], then by Lemma <ref> we have ∂_t^α f(t_0) ≥ 0, which yields h(t_0)f(t_0) ≤ 0, i.e. f(t_0) ≤ 0 due to h>0 on [0,T]. The definition of t_0 ensures f ≤ 0. For the case of "≥ 0", let f̃(t) = -f(t); then the above proof gives f̃ ≤ 0, i.e. f ≥ 0.
The following corollary, which concerns the sign of u_n(t;a), follows from Lemma <ref> directly. Let u_n(t;a) be the unique solution of the fractional ODE (<ref>). Then ∂_t^α u_n(t;a) + λ_n a(t) u_n(t;a) ≤ (≥) 0 on [0,T] and u_n(0;a) ≤ (≥) 0 imply u_n(t;a) ≤ (≥) 0 on [0,T], n∈ℕ^+. Assumption <ref> gives that λ_n a(t)∈ C^+[0,T]. Then the proof is completed by applying Lemma <ref> to the fractional ODE (<ref>).

§.§ Regularity
In this part, we establish the regularity of u(x,t;a). To this end, we split FDE (<ref>) into ∂_t^α u(x,t) - a(t)Ł u(x,t) = F(x,t), x∈Ω, t∈(0,T]; u(x,t) = 0, (x,t)∈∂Ω×(0,T]; u(x,0) = 0, x∈Ω, and ∂_t^α u(x,t) - a(t)Ł u(x,t) = 0, x∈Ω, t∈(0,T]; u(x,t) = 0, (x,t)∈∂Ω×(0,T]; u(x,0) = u_0(x), x∈Ω. Denote the weak solutions of FDEs (<ref>) and (<ref>) by u^r(x,t;a) and u^i(x,t;a), respectively ("r" and "i" denote the initials of "right-hand side" and "initial condition"). The following lemma about u^r(x,t;a) and u^i(x,t;a) follows from Lemma <ref> and Theorem <ref>. Suppose Assumption <ref> holds. Then u^r(x,t;a) and u^i(x,t;a) are the unique solutions of FDEs (<ref>) and (<ref>), respectively, with the spectral representations u^r(x,t;a) = ∑_n=1^∞ u^r_n(t;a)ϕ_n(x), u^i(x,t;a) = ∑_n=1^∞ u^i_n(t;a)ϕ_n(x), where u^r_n(t;a), u^i_n(t;a) satisfy the following fractional ODEs: ∂_t^α u_n^r(t;a) + λ_n a(t)u_n^r(t;a) = F_n(t), u_n^r(0;a) = 0, n∈ℕ^+; ∂_t^α u_n^i(t;a) + λ_n a(t)u_n^i(t;a) = 0, u_n^i(0;a) = b_n, n∈ℕ^+. Moreover, Theorem <ref> ensures that the weak solution u(x,t;a) of FDE (<ref>) can be written as u(x,t;a) = u^r(x,t;a) + u^i(x,t;a), i.e. u_n(t;a) = u^r_n(t;a) + u^i_n(t;a), n∈ℕ^+.

§.§.§ Regularity of u^r
For each n∈ℕ^+, define F^+_n(t) = F_n(t) if F_n(t) ≥ 0, 0 if F_n(t) < 0, and F^-_n(t) = F_n(t) if F_n(t) < 0, 0 if F_n(t) ≥ 0. It is obvious that F_n = F^+_n + F^-_n, the supports of F^+_n and F^-_n are disjoint, and F^+_n, F^-_n ∈ C[0,T], which follows from F_n∈ C[0,T]. Split u_n^r(t;a) as u_n^r(t;a) = u_n^r,+(t;a) + u_n^r,-(t;a), where u_n^r,+(t;a), u_n^r,-(t;a) satisfy ∂_t^α u_n^r,+(t;a) + λ_n a(t)u_n^r,+(t;a) = F^+_n(t), u_n^r,+(0;a) = 0, n∈ℕ^+; ∂_t^α u_n^r,-(t;a) + λ_n a(t)u_n^r,-(t;a) = F^-_n(t), u_n^r,-(0;a) = 0, n∈ℕ^+, respectively. The existence and uniqueness of u_n^r,+(t;a) and u_n^r,-(t;a) hold due to Lemma <ref>, and we can write u^r(x,t;a) = u^r,+(x,t;a) + u^r,-(x,t;a), where u^r,+(x,t;a) = ∑_n=1^∞ u_n^r,+(t;a)ϕ_n(x), u^r,-(x,t;a) = ∑_n=1^∞ u_n^r,-(t;a)ϕ_n(x). Then we state some properties of u_n^r,+(t;a) and u_n^r,-(t;a). For any n∈ℕ^+, u_n^r,+(t;a) ≥ 0 and u_n^r,-(t;a) ≤ 0 on [0,T]. This follows from Corollary <ref> directly. Given a_1(t), a_2(t)∈ C^+[0,T] with a_1(t) ≤ a_2(t) on [0,T], we have 0 ≤ u_n^r,+(t;a_2) ≤ u_n^r,+(t;a_1), u_n^r,-(t;a_1) ≤ u_n^r,-(t;a_2) ≤ 0, t∈[0,T], n∈ℕ^+. Pick n∈ℕ^+; u_n^r,+(t;a_1) and u_n^r,+(t;a_2) satisfy the following system: ∂_t^α u_n^r,+(t;a_1) + λ_n a_1(t)u_n^r,+(t;a_1) = F^+_n(t); ∂_t^α u_n^r,+(t;a_2) + λ_n a_2(t)u_n^r,+(t;a_2) = F^+_n(t); u_n^r,+(0;a_1) = u_n^r,+(0;a_2) = 0, which leads to ∂_t^α w + λ_n a_1(t)w(t) = λ_n u_n^r,+(t;a_2)(a_2(t) - a_1(t)) ≥ 0, w(0) = 0, where w(t) = u_n^r,+(t;a_1) - u_n^r,+(t;a_2), and the last inequality follows from Lemma <ref> and a_1 ≤ a_2. Hence, Corollary <ref> shows that w(t) ≥ 0, i.e. u_n^r,+(t;a_2) ≤ u_n^r,+(t;a_1), and Lemma <ref> gives 0 ≤ u_n^r,+(t;a_2) ≤ u_n^r,+(t;a_1), t∈[0,T]. Similarly, we have u_n^r,-(t;a_1) ≤ u_n^r,-(t;a_2) ≤ 0, t∈[0,T], completing the proof. Assumption <ref> (a) implies there exist constants q_a, Q_a s.t. 0 < q_a < a(t) < Q_a on [0,T]. From Lemma <ref>, we obtain |u_n^r,+(t;a)| ≤ |u_n^r,+(t;q_a)|, |u_n^r,-(t;a)| ≤ |u_n^r,-(t;q_a)| for t∈[0,T], n∈ℕ^+, where u_n^r,+(t;q_a), u_n^r,-(t;q_a) are the unique solutions of the fractional ODEs (<ref>) and (<ref>), respectively, with a(t) ≡ q_a on [0,T].
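The comparison estimates above are easy to observe numerically. The following sketch solves the scalar fractional ODE ∂_t^α v + λ a(t)v = F(t), v(0) = b, with an implicit L1 scheme (the scheme and all parameter values are illustrative assumptions, not taken from the paper) and checks that a_1 ≤ a_2 implies 0 ≤ v(·;a_2) ≤ v(·;a_1):

import numpy as np
from math import gamma

def solve_fode(lam, a_fun, F_fun, b, T=1.0, M=400, alpha=0.5):
    # implicit L1 scheme for  D_t^alpha v + lam*a(t)*v = F(t),  v(0) = b
    dt = T/M
    t = np.linspace(0.0, T, M + 1)
    w = np.arange(1, M + 1)**(1 - alpha) - np.arange(M)**(1 - alpha)
    c = dt**(-alpha)/gamma(2 - alpha)
    v = np.empty(M + 1); v[0] = b
    for m in range(1, M + 1):
        hist = np.dot(w[1:m][::-1], np.diff(v[:m]))   # memory term, j = 1..m-1
        v[m] = (F_fun(t[m]) + c*v[m - 1] - c*hist)/(c + lam*a_fun(t[m]))
    return v

# a_1 <= a_2 should give 0 <= v(.;a_2) <= v(.;a_1), as in the lemma above
v1 = solve_fode(5.0, lambda s: 1.0, lambda s: 1.0, b=1.0)
v2 = solve_fode(5.0, lambda s: 2.0 + np.sin(s)**2, lambda s: 1.0, b=1.0)
print(bool(np.all(v2 >= 0.0) and np.all(v1 >= v2)))   # expected: True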
The next two lemmas concern the regularity of u^r,+(x,t;a) and ∂_t^α u^r,+(x,t;a), respectively. ‖u^r,+‖_L^2(0,T;H^2(Ω)) ≤ C‖F‖_L^2([0,T]×Ω). Calculating ‖u^r,+(x,t;a)‖_L^2(0,T;H^2(Ω))^2 directly yields ‖u^r,+(x,t;a)‖_L^2(0,T;H^2(Ω))^2 = ∫_0^T ‖u^r,+(x,t;a)‖_H^2(Ω)^2 dt ≤ ∫_0^T C‖(-Ł u^r,+)(x,t;a)‖_L^2(Ω)^2 dt = C∫_0^T ‖∑_n=1^∞ λ_n u_n^r,+(t;a)ϕ_n(x)‖_L^2(Ω)^2 dt = C∫_0^T ∑_n=1^∞ λ_n^2 |u_n^r,+(t;a)|^2 dt ≤ C∫_0^T ∑_n=1^∞ λ_n^2 |u_n^r,+(t;q_a)|^2 dt, where the last inequality is obtained from (<ref>). By the Monotone Convergence Theorem, we have ‖u^r,+(x,t;a)‖_L^2(0,T;H^2(Ω))^2 ≤ C ∑_n=1^∞ ∫_0^T |λ_n u_n^r,+(t;q_a)|^2 dt. For each n∈ℕ^+, <cit.> gives the explicit representation of u_n^r,+(t;q_a): u_n^r,+(t;q_a) = ∫_0^t F^+_n(τ)(t-τ)^{α-1} E_α,α(-λ_n q_a(t-τ)^α) dτ, which together with Young's inequality leads to ∫_0^T |λ_n u_n^r,+(t;q_a)|^2 dt = ‖F_n^+(t) * (λ_n t^{α-1} E_α,α(-λ_n q_a t^α))‖_L^2[0,T]^2 ≤ ‖F_n^+‖_L^2[0,T]^2 ‖λ_n t^{α-1} E_α,α(-λ_n q_a t^α)‖_L^1[0,T]^2. Lemmas <ref>, <ref> and <ref> give the bound of ‖λ_n t^{α-1} E_α,α(-λ_n q_a t^α)‖_L^1[0,T]: ∫_0^T |λ_n τ^{α-1} E_α,α(-λ_n q_a τ^α)| dτ = ∫_0^T λ_n τ^{α-1} E_α,α(-λ_n q_a τ^α) dτ = -q_a^{-1} ∫_0^T (d/dτ) E_α,1(-λ_n q_a τ^α) dτ = q_a^{-1}(1 - E_α,1(-λ_n q_a T^α)) ≤ q_a^{-1}; while the definition (<ref>) provides the bound of ‖F_n^+‖_L^2[0,T]: ‖F_n^+‖_L^2[0,T] ≤ ‖F_n‖_L^2[0,T]. Consequently, it holds that ∫_0^T |λ_n u_n^r,+(t;q_a)|^2 dt ≤ q_a^{-2} ‖F_n‖_L^2[0,T]^2, n∈ℕ^+, i.e. ∑_n=1^∞ ∫_0^T |λ_n u_n^r,+(t;q_a)|^2 dt ≤ q_a^{-2} ∑_n=1^∞ ‖F_n‖_L^2[0,T]^2, which together with (<ref>) and the completeness of {ϕ_n(x): n∈ℕ^+} in L^2(Ω) gives ‖u^r,+(x,t;a)‖_L^2(0,T;H^2(Ω))^2 ≤ C ∑_n=1^∞ ∫_0^T |λ_n u_n^r,+(t;q_a)|^2 dt ≤ C ∑_n=1^∞ ‖F_n‖_L^2[0,T]^2 = C‖F‖_L^2([0,T]×Ω)^2, where the constant C only depends on a(t). This completes the proof. ‖∂_t^α u^r,+‖_L^2([0,T]×Ω) ≤ C‖F‖_L^2([0,T]×Ω). (<ref>), (<ref>), definition (<ref>) and the Monotone Convergence Theorem give ‖∂_t^α u^r,+‖_L^2([0,T]×Ω)^2 = ∫_0^T ‖∑_n=1^∞ ∂_t^α u_n^r,+(·;a)ϕ_n(x)‖_L^2(Ω)^2 dt = ∑_n=1^∞ ∫_0^T |∂_t^α u_n^r,+(·;a)|^2 dt ≤ ∑_n=1^∞ ∫_0^T (2|λ_n a(t)u_n^r,+(t;a)|^2 + 2|F_n^+(t)|^2) dt ≤ 2∑_n=1^∞ ∫_0^T |λ_n a(t)u_n^r,+(t;a)|^2 dt + 2∑_n=1^∞ ∫_0^T |F_n(t)|^2 dt. The estimate of ∑_n=1^∞ ∫_0^T |λ_n a(t)u_n^r,+(t;a)|^2 dt follows from (<ref>), (<ref>) and the proof of Lemma <ref>: ∑_n=1^∞ ∫_0^T |λ_n a(t)u_n^r,+(t;a)|^2 dt ≤ Q_a^2 ∑_n=1^∞ ∫_0^T |λ_n u_n^r,+(t;q_a)|^2 dt ≤ C‖F‖_L^2([0,T]×Ω)^2, while the completeness of {ϕ_n(x): n∈ℕ^+} gives ∑_n=1^∞ ∫_0^T |F_n(t)|^2 dt = ‖F‖_L^2([0,T]×Ω)^2. Hence, (<ref>) yields ‖∂_t^α u^r,+‖_L^2([0,T]×Ω)^2 ≤ C‖F‖_L^2([0,T]×Ω)^2, which implies the indicated conclusion. The following corollary follows immediately from the proofs of Lemmas <ref> and <ref>: ‖u^r,-‖_L^2(0,T;H^2(Ω)) ≤ C‖F‖_L^2([0,T]×Ω), ‖∂_t^α u^r,-‖_L^2([0,T]×Ω) ≤ C‖F‖_L^2([0,T]×Ω). From Lemmas <ref>, <ref>, Corollary <ref> and (<ref>), we are able to deduce the regularity of u^r(x,t;a) and ∂_t^α u^r(x,t;a): ‖u^r‖_L^2(0,T;H^2(Ω)) + ‖∂_t^α u^r‖_L^2([0,T]×Ω) ≤ C‖F‖_L^2([0,T]×Ω). (<ref>) gives u^r(x,t;a) = u^r,+(x,t;a) + u^r,-(x,t;a), which leads to ‖u^r‖_L^2(0,T;H^2(Ω)) + ‖∂_t^α u^r‖_L^2([0,T]×Ω) ≤ ‖u^r,+‖_L^2(0,T;H^2(Ω)) + ‖u^r,-‖_L^2(0,T;H^2(Ω)) + ‖∂_t^α u^r,+‖_L^2([0,T]×Ω) + ‖∂_t^α u^r,-‖_L^2([0,T]×Ω) ≤ C‖F‖_L^2([0,T]×Ω).
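The key L^1 bound in the proofs above rests on the identity ∫_0^T λ t^{α-1} E_α,α(-λ q t^α) dt = q^{-1}(1 - E_α,1(-λ q T^α)), which can also be confirmed numerically; substituting u = t^α removes the weak singularity (truncated series as before; the parameter values are illustrative):

import numpy as np
from math import gamma

def ml(z, a, b, K=160):
    # truncated Mittag-Leffler series, moderate |z| only
    return sum(z**k/gamma(a*k + b) for k in range(K))

alpha, lam, q, T = 0.6, 4.0, 0.5, 1.0
u = np.linspace(0.0, T**alpha, 4001)                    # u = t^alpha
f = np.array([ml(-lam*q*ui, alpha, alpha) for ui in u])
lhs = (lam/alpha)*np.sum((f[1:] + f[:-1])/2.0)*(u[1] - u[0])   # trapezoid rule
rhs = (1.0 - ml(-lam*q*T**alpha, alpha, 1.0))/q
print(abs(lhs - rhs) < 1e-6)                            # expected: True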
If we impose a higher regularity on F, we can obtain the regularity estimate of ‖u^r‖_C([0,T];H^2(Ω)). Under Assumption <ref>, if F∈ C^θ([0,T];L^2(Ω)), 0<θ<1, then ‖u^r‖_C([0,T];H^2(Ω)) + ‖∂_t^α u^r‖_C([0,T];L^2(Ω)) ≤ C‖F‖_C^θ([0,T];L^2(Ω)), where C depends on Ω, -Ł and a(t). For each t∈[0,T], we have ‖u^r,+(x,t;a)‖^2_H^2(Ω) ≤ C‖-Ł u^r,+‖^2_L^2(Ω) ≤ C ∑_n=1^∞ |λ_n u^r,+_n(t;a)|^2 ≤ C ∑_n=1^∞ |λ_n ∫_0^t F^+_n(τ)(t-τ)^{α-1} E_α,α(-λ_n q_a(t-τ)^α) dτ|^2 ≤ C ∑_n=1^∞ |λ_n ∫_0^t |F^+_n(τ) - F^+_n(t)|(t-τ)^{α-1} E_α,α(-λ_n q_a(t-τ)^α) dτ|^2 + C ∑_n=1^∞ |F_n^+(t) ∫_0^t λ_n(t-τ)^{α-1} E_α,α(-λ_n q_a(t-τ)^α) dτ|^2. The definition of F_n^+(t) yields that |F^+_n(τ) - F^+_n(t)| ≤ |F_n(τ) - F_n(t)|; Lemma <ref> gives 0 < ∫_0^t λ_n(t-τ)^{α-1} E_α,α(-λ_n q_a(t-τ)^α) dτ = q_a^{-1}(1 - E_α,1(-λ_n q_a t^α)) < q_a^{-1}. Hence, ‖u^r,+(x,t;a)‖^2_H^2(Ω) ≤ C ∑_n=1^∞ |λ_n ∫_0^t |F_n(τ) - F_n(t)|(t-τ)^{α-1} E_α,α(-λ_n q_a(t-τ)^α) dτ|^2 + C ∑_n=1^∞ |F_n(t)|^2. By <cit.>, we have ‖u^r,+(x,t;a)‖^2_H^2(Ω) ≤ C‖F‖_C^θ([0,T];L^2(Ω))^2 + C‖F(·,t)‖_L^2(Ω)^2, t∈[0,T], which gives ‖u^r,+‖_C([0,T];H^2(Ω)) ≤ C‖F‖_C^θ([0,T];L^2(Ω)), where the constant C depends on Ω, -Ł and a(t). Similarly, we can show ‖u^r,-‖_C([0,T];H^2(Ω)) ≤ C‖F‖_C^θ([0,T];L^2(Ω)). For ∂_t^α u^r, by (<ref>), we have ∂_t^α u^r,+ = ∑_n=1^∞ [-λ_n a(t)u_n^r,+(t;a) + F_n^+(t)]ϕ_n(x). Then for each t∈[0,T], ‖∂_t^α u^r,+‖^2_L^2(Ω) ≤ C ∑_n=1^∞ Q_a^2 |λ_n u_n^r,+(t;a)|^2 + C ∑_n=1^∞ |F_n(t)|^2 ≤ C ∑_n=1^∞ |λ_n u_n^r,+(t;a)|^2 + C‖F(·,t)‖_L^2(Ω)^2. From the above proof for ‖u^r,+‖^2_H^2(Ω), it holds that ‖∂_t^α u^r,+‖^2_L^2(Ω) ≤ C‖F‖^2_C^θ([0,T];L^2(Ω)) + C‖F(·,t)‖_L^2(Ω)^2, t∈[0,T], which gives ‖∂_t^α u^r,+‖_C([0,T];L^2(Ω)) ≤ C‖F‖_C^θ([0,T];L^2(Ω)). Analogously, we can show ‖∂_t^α u^r,-‖_C([0,T];L^2(Ω)) ≤ C‖F‖_C^θ([0,T];L^2(Ω)). The estimates of u^r,+, u^r,-, ∂_t^α u^r,+ and ∂_t^α u^r,- yield the desired result and complete this proof.

§.§.§ Regularity of u^i
In this part we consider the regularity of u^i. Just as in the regularity results for u^r, we first state two lemmas which concern the positivity and monotonicity of u^i, respectively. With the representation (<ref>) and the fractional ODE (<ref>), for each n∈ℕ^+, b_n ≤ (≥) 0 implies that u^i_n(t;a) ≤ (≥) 0 on [0,T]. This is a direct result of Corollary <ref>. Given a_1, a_2∈ C^+[0,T] with a_1 ≤ a_2 on [0,T], for each n∈ℕ^+ we have 0 ≤ u_n^i(t;a_2) ≤ u_n^i(t;a_1) if b_n ≥ 0; u_n^i(t;a_1) ≤ u_n^i(t;a_2) ≤ 0 if b_n ≤ 0. Fix n∈ℕ^+; from the fractional ODE (<ref>), the functions u^i_n(t;a_1) and u^i_n(t;a_2) satisfy the following system: ∂_t^α u_n^i(t;a_1) + λ_n a_1(t)u_n^i(t;a_1) = 0; ∂_t^α u_n^i(t;a_2) + λ_n a_2(t)u_n^i(t;a_2) = 0; u_n^i(0;a_1) = u_n^i(0;a_2) = b_n. This gives ∂_t^α w + λ_n a_1(t)w(t) = λ_n u_n^i(t;a_2)(a_2(t) - a_1(t)), w(0) = 0, where w(t) = u_n^i(t;a_1) - u_n^i(t;a_2). If b_n ≥ 0, Corollary <ref> shows that u_n^i(t;a_1), u_n^i(t;a_2) ≥ 0. Also, Lemma <ref> and a_1 ≤ a_2 ensure that the right side of (<ref>) is nonnegative, which together with Corollary <ref> implies w ≥ 0, i.e. 0 ≤ u_n^i(t;a_2) ≤ u_n^i(t;a_1). A similar argument yields u_n^i(t;a_1) ≤ u_n^i(t;a_2) ≤ 0 for the case b_n ≤ 0. ‖u^i‖_L^2(0,T;H^2(Ω)) + ‖∂_t^α u^i‖_L^2([0,T]×Ω) ≤ C T^(1-α)/2 ‖u_0‖_H^1(Ω). Given t∈[0,T], the direct calculation and Lemma <ref> yield that ‖u^i(x,t;a)‖_H^2(Ω)^2 ≤ C‖-Ł u^i(x,t;a)‖_L^2(Ω)^2 = C‖∑_n=1^∞ λ_n u^i_n(t;a)ϕ_n(x)‖_L^2(Ω)^2 = C∑_n=1^∞ |λ_n u^i_n(t;a)|^2 ≤ C∑_n=1^∞ |λ_n u^i_n(t;q_a)|^2. Recall that <cit.> established the representation u^i_n(t;q_a) = b_n E_α,1(-λ_n q_a t^α), n∈ℕ^+. Hence, by Lemma <ref>, ‖u^i(x,t;a)‖_H^2(Ω)^2 ≤ C‖-Ł u^i(x,t;a)‖_L^2(Ω)^2 ≤ C∑_n=1^∞ |λ_n b_n E_α,1(-λ_n q_a t^α)|^2 ≤ C∑_n=1^∞ |1/(1+λ_n q_a t^α)|^2 λ_n^2 b_n^2 = C∑_n=1^∞ |(λ_n q_a t^α)^{1/2}/(1+λ_n q_a t^α)|^2 t^{-α} q_a^{-1} λ_n b_n^2 ≤ Ct^{-α} ∑_n=1^∞ ((-Ł)^{1/2}u_0,ϕ_n)^2 ≤ Ct^{-α}‖u_0‖_H^1(Ω)^2, which leads to ‖u^i‖_L^2(0,T;H^2(Ω))^2 ≤ C∫_0^T t^{-α}‖u_0‖_H^1(Ω)^2 dt = CT^{1-α}‖u_0‖_H^1(Ω)^2, i.e. ‖u^i‖_L^2(0,T;H^2(Ω)) ≤ C T^(1-α)/2 ‖u_0‖_H^1(Ω).
For the estimate of ∂_t^α u^i(x,t;a), (<ref>) and (<ref>) yield ∂_t^α u^i(x,t;a) = ∑_n=1^∞ ∂_t^α u^i_n(t;a)ϕ_n(x) = -∑_n=1^∞ λ_n a(t)u^i_n(t;a)ϕ_n(x), which together with (<ref>) gives ‖∂_t^α u^i(x,t;a)‖_L^2(Ω)^2 ≤ Q_a^2 ∑_n=1^∞ |λ_n u^i_n(t;a)|^2 = Q_a^2 ‖-Ł u^i(x,t;a)‖_L^2(Ω)^2 ≤ Ct^{-α}‖u_0‖_H^1(Ω)^2, t∈[0,T], where the last inequality follows from (<ref>). This result implies that ‖∂_t^α u^i(x,t;a)‖_L^2([0,T]×Ω)^2 = ∫_0^T ‖∂_t^α u^i(x,t;a)‖_L^2(Ω)^2 dt ≤ CT^{1-α}‖u_0‖_H^1(Ω)^2, i.e. ‖∂_t^α u^i‖_L^2([0,T]×Ω) ≤ C T^(1-α)/2 ‖u_0‖_H^1(Ω), which together with (<ref>) completes the proof. Moreover, with a stronger condition on u_0, such as assuming u_0∈ H^2(Ω)∩H^1_0(Ω), we can deduce the C-regularity estimate of u^i. With Assumption <ref> and u_0∈ H^2(Ω)∩H^1_0(Ω), then ‖u^i‖_C([0,T];H^2(Ω)) + ‖∂_t^α u^i‖_C([0,T];L^2(Ω)) ≤ C‖u_0‖_H^2(Ω). Lemma <ref> yields that ∑_n=1^∞ |λ_n b_n E_α,1(-λ_n q_a t^α)|^2 ≤ C∑_n=1^∞ |λ_n b_n|^2 = C‖-Ł u_0‖_L^2(Ω)^2 ≤ C‖u_0‖_H^2(Ω)^2, t∈[0,T]; meanwhile, the following estimates have been shown in the proof of Theorem <ref>: ‖u^i(x,t;a)‖_H^2(Ω)^2 ≤ C‖-Ł u^i(x,t;a)‖_L^2(Ω)^2 ≤ C∑_n=1^∞ |λ_n b_n E_α,1(-λ_n q_a t^α)|^2, ‖∂_t^α u^i(x,t;a)‖_L^2(Ω)^2 ≤ Q_a^2 ∑_n=1^∞ |λ_n u^i_n(t;a)|^2 = C‖-Ł u^i(x,t;a)‖_L^2(Ω)^2. Hence, it holds that ‖u^i(x,t;a)‖_H^2(Ω) + ‖∂_t^α u^i(x,t;a)‖_L^2(Ω) ≤ C‖u_0‖_H^2(Ω), t∈[0,T], which leads to the claimed result.

§.§ Main theorem for the direct problem
The main theorem for the direct problem follows from Theorem <ref>, Lemmas <ref> and <ref>, Corollaries <ref> and <ref>, and the relation u(x,t;a) = u^r(x,t;a) + u^i(x,t;a). Let Assumption <ref> be valid; then under Definition <ref>, there exists a unique weak solution u(x,t;a) of FDE (<ref>) with the spectral representation (<ref>) and the following regularity estimates: ‖u‖_L^2(0,T;H^2(Ω)) + ‖∂_t^α u‖_L^2([0,T]×Ω) ≤ C(‖F‖_L^2([0,T]×Ω) + T^(1-α)/2‖u_0‖_H^1(Ω)). Moreover, if the conditions u_0∈ H^2(Ω)∩H^1_0(Ω) and F∈ C^θ([0,T];L^2(Ω)), 0<θ<1, are added, we have ‖u‖_C([0,T];H^2(Ω)) + ‖∂_t^α u‖_C([0,T];L^2(Ω)) ≤ C(‖F‖_C^θ([0,T];L^2(Ω)) + ‖u_0‖_H^2(Ω)).

§ INVERSE PROBLEM–RECONSTRUCTION OF THE DIFFUSION COEFFICIENT A(T)
In this section, we discuss how to recover the coefficient a(t) through the output flux data a(t) ∂u/∂ν(x_0,t;a) = g(t), x_0∈∂Ω. Throughout the inverse problem part, the operator -Ł is assumed to satisfy the condition (<ref>), so that the expression ∂ϕ_n/∂ν(x_0) makes sense. We only consider this reconstruction in the space C^+[0,T], which can be regarded as the admissible set for a(t). To this end, we introduce an operator K, which will be shown to have a fixed point consisting of the desired coefficient a(t).

§.§ Operator K
The operator K is defined as Kψ(t) := g(t)/(∂u/∂ν(x_0,t;ψ)) = g(t)/(∑_n=1^∞ u_n(t;ψ) ∂ϕ_n/∂ν(x_0)), t∈[0,T], with domain 𝒟 := {ψ∈ C^+[0,T]: ψ(t) ≥ g(t)[∂u_0/∂ν(x_0) + I_t^α[∂F/∂ν(x_0,t)]]^{-1}, t∈[0,T]}. To analyze K, we make the following assumptions. u_0, F and g should satisfy the following restrictions: (a) u_0∈ H^3(Ω)∩H_0^1(Ω) with b_n := (u_0,ϕ_n) ≥ 0, n∈ℕ^+; (b) ∃θ∈(0,1) s.t. F(x,t)∈ C^θ([0,T];H^3(Ω)∩H_0^1(Ω)) with F_n(t) := (F(·,t),ϕ_n) ≥ 0 on [0,T] for each n∈ℕ^+; (c) ∃N∈ℕ^+ s.t. ∂ϕ_N/∂ν(x_0) > 0, b_N > 0 and F_N(t) > 0 on [0,T]; (d) g∈ C^+[0,T]. The next remark shows that the equality in the definition of K is valid.
Given ψ∈ C^+[0,T] and for each t∈[0,T], by the proofs of Corollaries <ref> and <ref>, we have ‖u^r,+(x,t;ψ)‖^2_H^3(Ω) ≤ C‖(-Ł)^{3/2}u^r,+‖^2_L^2(Ω) ≤ C ∑_n=1^∞ |λ_n^{3/2}u^r,+_n(t;ψ)|^2 ≤ C ∑_n=1^∞ |λ_n^{3/2}∫_0^t F^+_n(τ)(t-τ)^{α-1}E_α,α(-λ_n q_ψ(t-τ)^α) dτ|^2 ≤ C ∑_n=1^∞ |λ_n ∫_0^t λ_n^{1/2}|F^+_n(τ) - F^+_n(t)|(t-τ)^{α-1}E_α,α(-λ_n q_ψ(t-τ)^α) dτ|^2 + C ∑_n=1^∞ |λ_n^{1/2}F_n^+(t)(1 - E_α,1(-λ_n q_ψ t^α))|^2 ≤ C ∑_n=1^∞ |λ_n ∫_0^t λ_n^{1/2}|F_n(τ) - F_n(t)|(t-τ)^{α-1}E_α,α(-λ_n q_ψ(t-τ)^α) dτ|^2 + C ∑_n=1^∞ |λ_n^{1/2}F_n(t)(1 - E_α,1(-λ_n q_ψ t^α))|^2 ≤ C‖(-Ł)^{1/2}F‖_C^θ([0,T];L^2(Ω))^2 + C‖(-Ł)^{1/2}F(·,t)‖_L^2(Ω)^2 ≤ C‖F‖_C^θ([0,T];H^1(Ω))^2 + C‖F(·,t)‖_H^1(Ω)^2, and ‖u^r,-(x,t;ψ)‖^2_H^3(Ω) ≤ C‖F‖_C^θ([0,T];H^1(Ω))^2 + C‖F(·,t)‖_H^1(Ω)^2, which give ‖u^r‖_C([0,T];H^3(Ω)) ≤ C‖F‖_C^θ([0,T];H^1(Ω)); ‖u^i(x,t;ψ)‖^2_H^3(Ω) ≤ C‖(-Ł)^{3/2}u^i‖^2_L^2(Ω) ≤ C ∑_n=1^∞ ‖λ_n^{3/2}u_n^i(t;ψ)ϕ_n(x)‖^2_L^2(Ω) ≤ C∑_n=1^∞ |λ_n^{3/2}b_n E_α,1(-λ_n q_ψ t^α)|^2 ≤ C∑_n=1^∞ |λ_n^{3/2}b_n|^2 = C‖(-Ł)^{3/2}u_0‖_L^2(Ω)^2 ≤ C‖u_0‖_H^3(Ω)^2, which gives ‖u^i‖_C([0,T];H^3(Ω)) ≤ C‖u_0‖_H^3(Ω). Combining the above two results yields that ‖u‖_C([0,T];H^3(Ω)) ≤ C(‖F‖_C^θ([0,T];H^1(Ω)) + ‖u_0‖_H^3(Ω)) < ∞, which means that for each t∈[0,T], ‖u‖_H^3(Ω) < ∞. Recall that Ω⊂ℝ^n, n=1,2,3; then the Sobolev Embedding Theorem gives u(x,t;ψ) = ∑_n=1^∞ u_n(t;ψ)ϕ_n(x)∈ C^1(Ω) for each t∈[0,T]. Hence, ∑_n=1^∞ u_n(t;ψ) ∂ϕ_n/∂ν(x_0) is well-defined and ∂u/∂ν(x_0,t;ψ) = ∑_n=1^∞ u_n(t;ψ) ∂ϕ_n/∂ν(x_0), t∈[0,T]. The following two remarks will explain the reasonableness of and reason for Assumption <ref>. For the inverse problem, the right-hand side function F(x,t) and the initial condition u_0(x) are input data, which, at least in some circumstances, can be assumed to be controlled. Even though Assumption <ref> (a), (b) and (c) appear restrictive, it is not hard to construct functions that satisfy them. For example, in (a), if u_0 = cϕ_k for some c>0, then Assumption <ref> (a) will be satisfied. This will also be true if u_0 = ∑_k=1^M c_kϕ_k with all c_k>0. Similarly, (b) is satisfied if F(x,t) is also a linear combination of {ϕ_n: n∈ℕ^+} with positive coefficients. For (c), by the completeness of {ϕ_n: n∈ℕ^+} in L^2(Ω), there should exist N∈ℕ^+ s.t. ∂ϕ_N/∂ν(x_0) > 0. Otherwise, for each ψ∈ H^3(Ω)⊂ L^2(Ω), ∂ψ/∂ν(x_0) = 0, which is obviously incorrect. Then for this N, we only need to set the coefficients of u_0 and F upon ϕ_N to be strictly positive. The output flux data g(t) is not under our control. However, if there exists a∈ C^+[0,T] s.t. a(t) ∂u/∂ν(x_0,t;a) = g(t), then Assumption <ref> (a), (b) and Corollary <ref> yield that u_n(t;a) ≥ 0; (<ref>) gives ∂ϕ_n/∂ν(x_0) ≥ 0, n∈ℕ^+; Assumption <ref> (c) ensures ∂ϕ_N/∂ν(x_0) > 0 and u_N(t;a) > 0 on [0,T], where the proof can be seen in Lemma <ref>. Consequently, ∂u/∂ν(x_0,t;a) = ∑_n=1^∞ u_n(t;a) ∂ϕ_n/∂ν(x_0) ≥ u_N(t;a) ∂ϕ_N/∂ν(x_0) > 0, t∈[0,T]. This together with a∈ C^+[0,T] gives that g>0. The continuity of g follows from the continuity of a and of u_n(t;a), n∈ℕ^+, which are derived from the admissible set C^+[0,T] and Theorem <ref>, respectively. Therefore, Assumption <ref> (d) is reasonable and can be attained. The well-definedness of the domain 𝒟 is guaranteed by Assumption <ref> (a), (b), (c) and (d) in the sense that the H^3-regularity of u_0, F and the Sobolev Embedding Theorem support that ∂u_0/∂ν(x_0) and ∂F/∂ν(x_0,t) are well defined, and the denominator of the lower bound satisfies ∂u_0/∂ν(x_0) + I_t^α[∂F/∂ν(x_0,t)] = ∑_n=1^∞ (b_n + I_t^α F_n) ∂ϕ_n/∂ν(x_0) ≥ (b_N + I_t^α F_N) ∂ϕ_N/∂ν(x_0) > 0 on [0,T]. Recall that the numerator g>0, so the lower bound g(t)[∂u_0/∂ν(x_0) + I_t^α[∂F/∂ν(x_0,t)]]^{-1} > 0, which gives that 𝒟 is a subspace of C^+[0,T].
Also, F(x,t)∈ C^θ([0,T];H^3(Ω)∩ H_0^1(Ω)) yields that F_N(t) is continuous on [0,T], and so is (b_N+ I_t^α F_N) ∂ϕ_N/∂ν(x_0). Then ∃ C>0 s.t. (b_N+ I_t^α F_N) ∂ϕ_N/∂ν(x_0)>C>0, which leads to the denominator ∂u_0/∂ν(x_0)+I_t^α[∂F/∂ν(x_0,t)]>C>0 on [0,T]. The strict positivity of the denominator prevents 𝒟 from degenerating to an empty set.

In order to show the well-definedness of K, Assumption <ref> (a), (b) and (c) will be used. Furthermore, Assumption <ref> (a) and (b) are crucial to establish the monotonicity of the operator K; meanwhile, Assumption <ref> (c) is needed for the uniqueness of fixed points of K. For the operator K, we have the following lemmas.

The operator K is well-defined.

For each ψ∈𝒟, Theorem <ref> ensures that there exists a unique u_n(t;ψ) for each n∈ℕ^+, which implies the existence and uniqueness of Kψ. Then it suffices to show that the denominator satisfies ∑_n=1^∞ u_n(t;ψ) ∂ϕ_n/∂ν(x_0)>0 on [0,T]. With (<ref>), Lemma <ref> and Assumption <ref> (a) and (b), we have u_n(t;ψ)≥ 0 on [0,T], which together with ∂ϕ_n/∂ν(x_0)≥0 gives ∑_n=1^∞ u_n(t;ψ)∂ϕ_n/∂ν(x_0) ≥ u_N(t;ψ)∂ϕ_N/∂ν(x_0). Due to the assumption ∂ϕ_N/∂ν(x_0)>0, we claim that u_N(t;ψ)>0. Assume not, i.e. ∃ t_0∈[0,T] s.t. u_N(t_0;ψ)≤ 0. The result u_N(t;ψ)≥ 0 yields that u_N(t_0;ψ)=0, so that u_N(t;ψ) attains its minimum at t=t_0. u_N(0;ψ)=b_N>0 implies t_0≠ 0, i.e. t_0∈(0,T]. Then Lemma <ref>, u_N(t_0;ψ)=0 and the ODE (<ref>) show that ^CD_t^α u_N(t_0;ψ)=F_N(t_0)≤ 0, which contradicts Assumption <ref> (c) and confirms the claim. Hence, ∑_n=1^∞ u_n(t;ψ)∂ϕ_n/∂ν(x_0) ≥ u_N(t;ψ)∂ϕ_N/∂ν(x_0)>0, which completes the proof.

K maps 𝒟 into 𝒟.

Given ψ∈𝒟, the continuity of Kψ follows from the continuity of u_n(t;ψ) for each n∈ℕ^+ and the continuity of g, which are established by Theorem <ref> and Assumption <ref> (d), respectively. For each n∈ℕ^+, (<ref>) ensures that u_n(t;ψ) satisfies ^CD_t^α u_n(t;ψ)+λ_n ψ(t) u_n(t;ψ)=F_n(t), u_n(0;ψ)=b_n. Taking I_t^α on both sides of the above ODE and using Lemma <ref> yield that u_n(t;ψ)+λ_n I_t^α[ψ(t) u_n(t;ψ)]= I_t^α F_n+b_n. From the proof of Lemma <ref>, we have u_n(t;ψ)≥ 0 on [0,T], which together with λ_n>0, the positivity of ψ and the definition of I_t^α yields that λ_n I_t^α[ψ(t) u_n(t;ψ)]≥ 0. Since u_n(t;ψ)≥ 0 and λ_n I_t^α[ψ(t) u_n(t;ψ)]≥ 0, we deduce that 0≤ u_n(t;ψ)≤ I_t^α F_n+b_n on [0,T]. Hence, with ∂ϕ_n/∂ν(x_0)≥ 0 and the smoothness assumptions u_0∈ H^3(Ω)∩ H_0^1(Ω), F∈ C^θ([0,T];H^3(Ω)∩ H_0^1(Ω)) stated in Assumption <ref> (a) and (b), respectively, the following inequality holds: ∑_n=1^∞ u_n(t;ψ)∂ϕ_n/∂ν(x_0) ≤∑_n=1^∞ (I_t^α F_n+b_n) ∂ϕ_n/∂ν(x_0) =∂u_0/∂ν(x_0)+I_t^α[∂F/∂ν(x_0,t)], which together with g>0 yields that Kψ(t) = g(t)/∑_n=1^∞ u_n(t;ψ) ∂ϕ_n/∂ν(x_0)≥ g(t)[∂u_0/∂ν(x_0) + I_t^α[∂F/∂ν(x_0,t)]]^-1>0, t∈[0,T], where the last inequality follows from Remark <ref>. The above result and the continuity of Kψ lead to Kψ∈𝒟, which is the expected result.

§.§ Monotonicity

In this part, we show the monotonicity of the operator K.

Given a_1,a_2∈𝒟 with a_1≤ a_2, then Ka_1≤ Ka_2 on [0,T].

Pick n∈ℕ^+; due to (<ref>), u_n(t;a_1) and u_n(t;a_2) satisfy ^CD_t^α u_n(t;a_1)+λ_n a_1(t) u_n(t;a_1)=F_n(t), u_n(0;a_1)=b_n, and ^CD_t^α u_n(t;a_2)+λ_n a_2(t) u_n(t;a_2)=F_n(t), u_n(0;a_2)=b_n, which together with a_1≤ a_2 and Lemma <ref> yields ^CD_t^α w(t)+λ_n a_1(t)w(t)=λ_n u_n(t;a_2)(a_2(t)-a_1(t))≥ 0, w(0)=0, where w(t)=u_n(t;a_1)-u_n(t;a_2). Applying Lemma <ref> to the above ODE yields that w≥ 0, i.e. u_n(t;a_1)≥ u_n(t;a_2)≥0, which together with assumption (<ref>) leads to ∑_n=1^∞ u_n(t;a_1)∂ϕ_n/∂ν(x_0) ≥∑_n=1^∞ u_n(t;a_2)∂ϕ_n/∂ν(x_0)> 0, t∈[0,T]. Therefore, with the condition g>0 stated in Assumption <ref> (d), Ka_1(t)=g(t)/∑_n=1^∞ u_n(t;a_1) ∂ϕ_n/∂ν(x_0)≤ g(t)/∑_n=1^∞ u_n(t;a_2) ∂ϕ_n/∂ν(x_0)=Ka_2(t), t∈[0,T], which completes this proof.
§.§ Uniqueness

In order to show the uniqueness, we state two lemmas.

If a_1,a_2∈𝒟 are both fixed points of K with a_1≤ a_2, then a_1≡ a_2.

Pick a fixed point a(t); then a(t)∑_n=1^∞ u_n(t;a) ∂ϕ_n/∂ν(x_0) =∑_n=1^∞ a(t)u_n(t;a) ∂ϕ_n/∂ν(x_0)=g(t), which gives ∑_n=1^∞ I_t^α[a(t)u_n(t;a)] ∂ϕ_n/∂ν(x_0)= I_t^α g by taking I_t^α on both sides. Similarly, taking I_t^α on both sides of (<ref>) and applying Lemma <ref> yield that I_t^α[a(t)u_n(t;a)]=λ_n^-1 I_t^α F_n+λ_n^-1 b_n -λ_n^-1 u_n(t;a), n∈ℕ^+, which together with (<ref>) generates ∑_n=1^∞λ_n^-1 u_n(t;a)∂ϕ_n/∂ν(x_0)=∑_n=1^∞λ_n^-1(I_t^α F_n+b_n)∂ϕ_n/∂ν(x_0)- I_t^α g. In (<ref>), the convergence of the two series in C[0,T] is supported by Assumption <ref>, Remark <ref> and the fact that 0<λ_1≤λ_2≤⋯. Given two fixed points a_1,a_2 with a_1≤ a_2, then a_1 and a_2 satisfy (<ref>) simultaneously, which gives ∑_n=1^∞λ_n^-1∂ϕ_n/∂ν(x_0)(u_n(t;a_1)-u_n(t;a_2))=0. In the proof of Theorem <ref>, we have shown that u_n(t;a_1)≥ u_n(t;a_2)≥0. Also, recall that λ_n^-1∂ϕ_n/∂ν(x_0)≥ 0, n∈ℕ^+; then λ_n^-1∂ϕ_n/∂ν(x_0) (u_n(t;a_1)-u_n(t;a_2))≥ 0 on [0,T] for n∈ℕ^+. Hence, (<ref>) implies that λ_n^-1∂ϕ_n/∂ν(x_0) (u_n(t;a_1)-u_n(t;a_2))= 0, t∈[0,T], n∈ℕ^+. Let n=N; λ_N^-1∂ϕ_N/∂ν(x_0)>0 gives u_N(t;a_1)≡ u_N(t;a_2) on [0,T]. Set w(t)=u_N(t;a_1)-u_N(t;a_2)=0. Then (<ref>) yields that 0= ^CD_t^α w+λ_N a_1(t)w(t)=λ_N u_N(t;a_2)(a_2(t)-a_1(t)), i.e. u_N(t;a_2)(a_2(t)-a_1(t))≡ 0 on [0,T], while the proof of Lemma <ref> yields that u_N(t;a_2)>0. Hence, we have a_1= a_2 on [0,T], which completes the proof.

Before showing uniqueness, we introduce a successive-iteration procedure that generates a sequence converging to a fixed point, if one exists. Set a_0(t)=g(t)[∂u_0/∂ν(x_0) + I_t^α[∂F/∂ν(x_0,t)]]^-1, a_n+1=Ka_n, n∈ℕ. This iteration produces a sequence {a_n: n∈ℕ} which is contained in 𝒟 due to Lemma <ref>.

If there exists a fixed point a(t)∈𝒟 of the operator K, then the sequence {a_n: n∈ℕ} converges to a(t).

Since a_0 is the lower bound of 𝒟 and {a_n: n∈ℕ}⊂𝒟, it holds that a_0≤ a_1. Using Theorem <ref>, we have a_1=Ka_0≤ Ka_1=a_2, i.e. a_1≤ a_2. The same argument gives a_2=Ka_1≤ Ka_2=a_3. Continuing this process, we deduce a_0≤ a_1≤ a_2≤…, which means {a_n: n∈ℕ} is increasing. Since a_0 is the lower bound of 𝒟 and a(t)∈𝒟, it holds that a_0≤ a. Applying Theorem <ref> to this inequality, we obtain a_1=Ka_0≤ Ka=a, i.e. a_1≤ a. This argument generates a_n≤ a, n∈ℕ, which means a(t) is an upper bound of {a_n: n∈ℕ}. We have proved that {a_n: n∈ℕ} is an increasing sequence in 𝒟 with upper bound a(t), so that {a_n: n∈ℕ} converges in 𝒟 and its limit is smaller than a(t). Denote the limit of {a_n: n∈ℕ} by ã. We have ã∈𝒟, ã≤ a, and ã is a fixed point of K in 𝒟. Hence, Lemma <ref> yields ã=a, which is the desired result.

Now we are able to prove the uniqueness of fixed points of K.

There is at most one fixed point of K in 𝒟.

Let a_1, a_2∈𝒟 both be fixed points of K. Lemma <ref> implies that a_n→ a_1 and a_n→ a_2, which leads to a_1=a_2 and completes this proof.

§.§ Existence

Assumption <ref> is not sufficient to deduce the existence of fixed points of K, since 𝒟 has no upper bound, so that an increasing sequence in 𝒟 need not converge. In this part, we discuss the existence of fixed points under some extra conditions.

Additional assumptions on u_0, F and g: (a) -Ł u_0∈ H^3(Ω)∩ H_0^1(Ω); (b) F(x,t)=-Ł u_0(x)· f(t) s.t. f∈ C^θ[0,T], 0<θ<1, and f(t)≥ g(t)[∂u_0/∂ν(x_0)]^-1 on [0,T].
Assumption <ref> is set up to make sure that F(x,t)=-Ł u_0(x)· f(t)∈ C^θ([0,T];H^3(Ω)∩ H_0^1(Ω)), so that F(x,t) also satisfies Assumption <ref>.

Fix u_0 and f. If the measured data g does not satisfy Assumption <ref> (b), then we can modify u_0 by increasing its value in a very small neighborhood of the point x_0, so that the value of ∂u_0/∂ν(x_0) becomes larger. Meanwhile, since u_0 is changed only in a small domain, the coefficients {b_n: n∈ℕ^+} vary only slightly, and so do u_n(t;a) and u(x,t;a). Hence, ∂u/∂ν(x_0,t;a) and g(t) will not change significantly enough to violate Assumption <ref> (b).

Define the subspace 𝒟' of 𝒟 as 𝒟':={ψ∈ C^+[0,T]: g(t)[∂u_0/∂ν(x_0) + I_t^α[∂F/∂ν(x_0,t)]]^-1≤ψ(t)≤ g(t)[∂u_0/∂ν(x_0)]^-1, t∈[0,T]}. We have proved in Remark <ref> that the lower bound of 𝒟' is positive, and clearly the upper bound of 𝒟' is larger than the lower bound. Consequently, 𝒟' is well-defined. The next lemma concerns the range of K with domain 𝒟'.

With Assumptions <ref> and <ref>, K maps 𝒟' into 𝒟'.

Given ψ∈𝒟', we have proved that Kψ∈ C^+[0,T] and Kψ(t) ≥ g(t)[∂u_0/∂ν(x_0) + I_t^α[∂F/∂ν(x_0,t)]]^-1, t∈[0,T], in the proof of Lemma <ref>, so that it is sufficient to show Kψ≤ g(t)[∂u_0/∂ν(x_0)]^-1 on [0,T]. For each n∈ℕ^+, let w_n(t;ψ)=u_n(t;ψ)-b_n; then (<ref>) yields the following ODE by direct calculation: ^CD_t^α w_n(t;ψ)+λ_n ψ(t)w_n(t;ψ)=λ_n b_n(f(t)-ψ(t))≥0, w_n(0;ψ)=0, where λ_n b_n(f(t)-ψ(t))≥ 0 follows from the fact ψ(t) ≤ g(t)[∂u_0/∂ν(x_0)]^-1 and Assumption <ref> (b). Applying Corollary <ref> to the above ODE gives w_n(t;ψ)≥ 0, i.e. u_n(t;ψ)≥ b_n≥ 0 on [0,T]. Hence, Kψ(t)=g(t)/∑_n=1^∞ u_n(t;ψ) ∂ϕ_n/∂ν(x_0)≤ g(t)/∑_n=1^∞ b_n∂ϕ_n/∂ν(x_0) =g(t)[∂u_0/∂ν(x_0)]^-1, and this proof is complete.

The existence conclusion is derived from Lemmas <ref> and <ref>.

Suppose Assumptions <ref> and <ref> are valid; then there exists a fixed point of K in 𝒟'.

Lemma <ref> yields that the sequence {a_n: n∈ℕ} is increasing, while Lemma <ref> gives {a_n: n∈ℕ}⊂𝒟'. Then {a_n: n∈ℕ} is an increasing sequence with an upper bound g(t)[∂u_0/∂ν(x_0)]^-1, which implies the convergence of {a_n: n∈ℕ}. Denote the limit by a; clearly a is a fixed point of K. Also, the closedness of 𝒟' yields that a∈𝒟'. Therefore, a is a fixed point of K in 𝒟', which confirms the existence.

§.§ Main theorem for the inverse problem and reconstruction algorithm

Lemma <ref>, Theorems <ref> and <ref> allow us to deduce the main theorem for this inverse problem.

Suppose Assumption <ref> holds. (a) If there exists a fixed point of K in 𝒟, then it is unique and coincides with the limit of {a_n: n∈ℕ}; (b) If Assumption <ref> is also valid, then there exists a unique fixed point of K in 𝒟', which is the limit of {a_n: n∈ℕ}.

The following reconstruction algorithm for a(t) is based on Theorem <ref>.
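The iteration behind this algorithm is short enough to sketch in code. The snippet below is our own illustration, not the paper's implementation: solve_fde and flux are placeholder callables for the direct FDE solver and the boundary-flux extractor, and eps0 plays the role of the stopping-criterion number ϵ_0 used in the numerical section; by the theory above, the iterates increase monotonically towards the fixed point.

    import numpy as np

    def reconstruct_a(g, a0, solve_fde, flux, dt, eps0=1e-6, max_iter=200):
        """Fixed-point iteration a_{n+1} = K a_n for the coefficient a(t).

        g         : measured flux data g(t_j) on a uniform time grid (array)
        a0        : initial guess = lower bound of the admissible set D (array)
        solve_fde : callable, a -> u, numerical solver of the direct FDE
        flux      : callable, u -> normal derivative du/dnu(x_0, t_j) (array)
        dt        : time step, used in the discrete L2[0,T] stopping criterion
        """
        a = np.asarray(a0, dtype=float).copy()
        for n in range(max_iter):
            u = solve_fde(a)           # solve the direct problem with current a(t)
            a_new = g / flux(u)        # apply the operator K: g / (du/dnu)(x_0, t)
            if np.sqrt(dt * np.sum((a_new - a) ** 2)) < eps0:
                return a_new, n + 1    # converged: ||a_{n+1} - a_n||_{L2} < eps_0
            a = a_new
        return a, max_iter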
§ NUMERICAL RESULTS FOR INVERSE PROBLEM

§.§ L1 time-stepping of ^CD_t^α u

The fourth step of Algorithm <ref> includes solving the direct problem of FDE (<ref>) numerically. To this end, we choose the L1 time-stepping scheme <cit.> to discretize the term ^CD_t^α u(x,t): ^CD_t^α u(x,t_N)= 1/Γ(1-α)∑^N-1_j=0∫^t_j+1_t_j ∂u(x,s)/∂s (t_N-s)^-α ds ≈1/Γ(1-α)∑^N-1_j=0 (u(x,t_j+1)-u(x,t_j))/τ ∫_t_j^t_j+1(t_N-s)^-α ds=∑_j=0^N-1 b_j [u(x,t_N-j)-u(x,t_N-j-1)]/τ^α=τ^-α [b_0 u(x,t_N)-b_N-1 u(x,t_0)+∑_j=1^N-1(b_j-b_j-1)u(x,t_N-j)], where b_j=((j+1)^1-α-j^1-α)/Γ(2-α), j=0,1,…,N-1.

§.§ Numerical results for noise-free data

In this part, we set Ω=(0,1), x_0=0, T=1, Ł u=u_xx, pick u_0(x)=-sinπ x, F(x,t)=-(t+1)sinπ x, and consider the following two coefficients: (a1) smooth coefficient: a(t)=sin5π t+1.3; (a2) nonsmooth coefficient (“smile” function): a(t) =[0.8sin3π t+1.5]χ_[0,1/3]+[-0.5sin(3π t-π)+0.6]χ_(1/3,2/3)+[0.8sin(3π t-2π)+1.5]χ_[2/3,1].

In experiment (a1), the exact coefficient is a smooth function. Figure <ref> shows the initial guess and the first three iterations, while Figure <ref> presents the exact and approximate coefficients. From these two figures, we observe that {a_n: n∈ℕ} converges to a(t) monotonically, which illustrates Theorems <ref> and <ref>. Moreover, the L^2 error of the approximation in Figure <ref> is ‖a-a_N‖_L^2[0,T]=1.04× 10^-6, which suggests that the L^2 error of this approximation may be bounded by the stopping-criterion number ϵ_0. This guess is confirmed by Figure <ref> and can be expressed as ‖a-a_N‖_L^2[0,T]=O(ϵ_0).

Several runs of experiment (a1) for different α∈(0,1) are performed to explore the dependence of the convergence rate of Algorithm <ref> on the fractional order α, which is shown in Figure <ref>. This figure shows the number of iterations required, i.e. N, for different α, which implies that, for α∈(0,1), the larger α is, the faster the convergence rate of Algorithm <ref> is. This phenomenon is explained in <cit.> by a property of the Mittag-Leffler function: for α∈(0,1), the larger α is, the faster the decay rate of E_α,1(-z) is as z→∞.

The definition of 𝒟 restricts the coefficient a(t) to the space C^+[0,T]; however, the results of experiment (a2) indicate that Algorithm <ref> still works for nonsmooth a(t), which suggests that the restriction on a(t) can possibly be relaxed from a(t)∈ C^+[0,T] to a(t)∈ L^∞[0,T]. For discontinuous a(t), Figures <ref> and <ref> show that Theorems <ref> and <ref> still hold, while Figures <ref> and <ref> illustrate the analogous conclusions: the larger α is, the faster the convergence rate of Algorithm <ref> is, and ‖a-a_N‖_L^2[0,T]=O(ϵ_0).

§.§ Numerical results for noisy data

In this subsection, we consider data polluted by noise. Let g be the exact data and denote the noisy data by g_δ with relative noise level δ, i.e. ‖(g-g_δ)/g‖_L^∞[0,T]≤δ. Then the perturbed operator K_δ is K_δψ(t) =g_δ(t)/∑_n=1^∞ u_n(t;ψ) ∂ϕ_n/∂ν(x_0), with domain 𝒟_δ:={ψ∈ C^+[0,T]: g_δ(t)[∂u_0/∂ν(x_0) + I_t^α[∂F/∂ν(x_0,t)]]^-1≤ψ(t), t∈[0,T]}. Also, the sequence {a_δ,n: n∈ℕ} can be obtained from the iteration a_δ,0=g_δ[∂u_0/∂ν(x_0) + I_t^α[∂F/∂ν(x_0,t)]]^-1, a_δ,n+1=K_δ a_δ,n, n∈ℕ. Since δ is a small positive number and g is a strictly positive function, we can assume g_δ is still positive, which means Theorem <ref> still holds for K_δ. Hence, if there exists a fixed point of K_δ, the sequence {a_δ,n: n∈ℕ} converges to it monotonically, and we denote the limit by a_δ. Algorithm <ref> can still be used to recover a_δ after a slight modification: replacing g and K by g_δ and K_δ, respectively. We repeat experiments (a1) and (a2) with noise level δ>0.
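Before showing the noisy-data results, here is a quick sanity check (ours) of the L1 formula introduced at the beginning of this section: the snippet builds the weights b_j and applies the discrete Caputo derivative to u(t)=t, whose exact Caputo derivative is t^1-α/Γ(2-α). Since the L1 scheme interpolates u piecewise-linearly, it is exact for this test function up to rounding.

    import numpy as np
    from math import gamma

    def l1_weights(N, alpha):
        # b_j = ((j+1)^{1-alpha} - j^{1-alpha}) / Gamma(2-alpha), j = 0..N-1
        j = np.arange(N)
        return ((j + 1) ** (1 - alpha) - j ** (1 - alpha)) / gamma(2 - alpha)

    def caputo_l1(u, tau, alpha):
        # discrete Caputo derivative ^C D_t^alpha u(t_N) on a uniform grid,
        # u = [u(t_0), ..., u(t_N)], step tau
        N = len(u) - 1
        b = l1_weights(N, alpha)
        s = b[0] * u[N] - b[N - 1] * u[0]
        s += np.sum((b[1:N] - b[0:N - 1]) * u[N - 1:0:-1])
        return s / tau ** alpha

    alpha, N = 0.5, 1000
    t = np.linspace(0.0, 1.0, N + 1)
    print(caputo_l1(t, 1.0 / N, alpha),          # L1 value at t = 1 for u(t) = t
          1.0 ** (1 - alpha) / gamma(2 - alpha)) # exact: t^{1-a}/Gamma(2-a) at t = 1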
Figures <ref> and <ref> present the exact and approximate coefficients under δ=3% for experiments (a1) and (a2), respectively. From Figures <ref> and <ref>, we observe that the smaller |a(t)| is, the better the approximation is. This can be explained by the fact that δ is a relative noise level, i.e. we pick g_δ=(1+ζδ)g in the code, where ζ follows a uniform distribution on [-1,1]. Figure <ref> illustrates that ‖a-a_δ,N‖_L^2[0,T]/‖a‖_L^2[0,T]=O(δ), showing that the noise level δ dominates the relative L^2 error, since ϵ_0 ≪δ.

§.§ Numerical results in the two-dimensional case

In this part, numerical experiments on a two-dimensional domain are considered. We set α=0.9, ϵ_0=10^-6, Ω=(0,1)^2, x_0=(0,1/2), T=1, Ł u=Δu, choose u_0(x,y)=-sin[π xy(1-x)(1-y)], F(x,y,t)=-(t+1)·sin[π xy(1-x)(1-y)], and consider experiments (a1) and (a2). Figures <ref> and <ref> confirm the theoretical conclusions of Section 4.

§ ACKNOWLEDGMENT

The author is indebted to William Rundell for assistance in this work and acknowledges partial support from NSF-DMS 1620138.
http://arxiv.org/abs/1708.07756v1
{ "authors": [ "Zhidong Zhang" ], "categories": [ "math.AP", "math-ph", "math.MP", "35R11, 35R30, 65M32" ], "primary_category": "math.AP", "published": "20170824131031", "title": "An undetermined time-dependent coefficient in a fractional diffusion equation" }
CHAPTER: DOUBLE, TRIPLE, AND N-PARTON SCATTERINGS IN HIGH-ENERGY PROTON AND NUCLEAR COLLISIONS

David d'Enterria^1 and Alexander Snigirev^2

^1CERN, EP Department, 1211 Geneva, Switzerland
^2Skobeltsyn Institute of Nuclear Physics, Lomonosov Moscow State University, 119991, Moscow, Russia

The framework to compute the cross sections for the production of particles with high mass and/or large transverse momentum in double- (DPS), triple- (TPS), and in general n-parton scatterings (NPS), from the corresponding single-parton scattering (SPS) values in high-energy proton-proton (pp), proton-nucleus (pA), and nucleus-nucleus (AA) collisions is reviewed. The basic parameter of the factorized n-parton scattering ansatz is an effective cross section, σ_eff,NPS, encoding all unknowns about the underlying generalized n-parton distribution in the proton (nucleon). In its simplest and most economical form, the σ_eff,NPS parameter can be derived from the transverse parton profile of the colliding protons and/or nuclei, using a Glauber approach. Numerical examples of the cross sections and yields expected for the concurrent DPS or TPS production of heavy quarks, quarkonia, and/or gauge bosons in proton and nuclear collisions at LHC and Future Circular Collider (FCC) energies are provided. The quoted cross sections are based on perturbative QCD predictions for the SPS cross sections at next-to-leading-order (NLO) or next-to-NLO (NNLO) accuracy including, when needed, nuclear modifications of the corresponding parton densities.

§ INTRODUCTION

The extended nature of hadrons and their growing parton densities when probed at increasingly higher collision energies make it possible to simultaneously produce multiple particles with large transverse momentum and/or mass (√(p_T^2+m^2)≳ 2 GeV) in independent multiparton interactions (MPI) <cit.> in proton-(anti)proton (pp, pp̄) <cit.>, as well as in proton-nucleus (pA) <cit.> and nucleus-nucleus (AA) <cit.> collisions. Double-, triple-, and in general n-parton scatterings depend chiefly on the transverse overlap of the matter densities of the colliding hadrons, and provide valuable information on (i) the badly-known tridimensional (3D) profile of the partons inside the nucleon, (ii) the unknown energy evolution of the parton density as a function of impact parameter (b), and (iii) the role of multiparton (space, momentum, flavour, colour,...) correlations in the hadronic wave functions. A good understanding of n-parton scattering (NPS) is not only useful to improve our knowledge of the 3D parton structure of the proton, but it is also of relevance for a realistic characterization of backgrounds in searches of new physics in rare final states with multiple heavy particles. The interest in MPI has increased in the last years, not only as a primary source of particle production at hadron colliders <cit.>, but also due to their role <cit.> in the “collective” partonic behaviour observed in “central” pp collisions, bearing close similarities to that measured in heavy-ion collisions <cit.>. As a matter of fact, the larger transverse parton density in a nucleus (with A nucleons) compared to that of a proton significantly enhances double (DPS) and triple (TPS) parton scattering cross sections coming from interactions where the colliding partons belong to the same or to different nucleons of the nucleus (nuclei), providing thereby additional information on the underlying multiparton dynamics.
Many final-states involving the concurrent production of heavy quarks (c, b), quarkonia (J/ψ, Υ), jets, and gauge bosons (γ, W, Z) have been measured and found consistent with DPS at the Tevatron (see early results from CDF <cit.> and more recent ones from D0 <cit.>), as well as at the LHC (see the latest results from ATLAS <cit.>, CMS <cit.> and LHCb <cit.>). The TPS processes, although not observed so far, have visible cross sections for charm and bottom in pp <cit.> and pA <cit.> collisions at LHC and Future Circular Collider (FCC) <cit.> energies. The present writeup reviews and extends our past work on DPS and TPS in high-energy pp, pA and AA collisions <cit.>, expanding the basic factorized formalism to generic NPS processes, and presenting realistic cross section estimates for the double- and triple-parton production of heavy quarks, quarkonia, and/or gauge bosons in proton and nuclear collisions at LHC and FCC.

§ N-PARTON SCATTERING CROSS SECTIONS

In a generic hadronic collision, the inclusive cross section to produce n hard particles in n independent hard parton scatterings, h h' → a_1… a_n, can be written as a convolution of generalized n-parton distribution functions (PDF) and elementary partonic cross sections summed over all involved partons, σ^nps_hh' → a_1… a_n=(m/n!) ∑_i_1,..,i_n,i'_1,..,i'_n∫Γ^i_1… i_n_h(x_1,..,x_n; b_1,.., b_n; Q^2_1,..,Q^2_n) × σ̂_a_1^i_1i'_1(x_1,x_1',Q^2_1) ⋯ σ̂_a_n^i_ni'_n(x_n,x_n',Q^2_n) × Γ^i'_1...i'_n_h'(x'_1,…, x'_n; b_1 -b,…, b_n -b; Q^2_1,…, Q^2_n) × dx_1 … dx_n dx_1' … dx_n' d^2b_1 … d^2b_n d^2b. Here, Γ^i_1...i_n_h(x_1,…,x_n; b_1,…,b_n; Q^2_1,…, Q^2_n) are n-parton generalized distribution functions, depending on the momentum fractions x_1,…,x_n and energy scales Q_1,…, Q_n, at transverse positions b_1,…,b_n of the i_1,…,i_n partons, producing final-state particles a_1,…,a_n with subprocess cross sections σ̂_a_1^i_1i'_1, …, σ̂_a_n^i_ni'_n. The combinatorial (m/n!) prefactor takes into account the different cases of (indistinguishable or not) final states. For a set of identical particles (e.g. when a_1 =…=a_n) we have m=1, whereas m=2,3,6,… for final states with an increasing number of different particles produced. In the particular cases of interest here, we have:

* DPS: m=1 if a_1=a_2; and m=2 if a_1≠ a_2.
* TPS: m=1 if a_1=a_2=a_3; m=3 if a_1=a_2, or a_1=a_3, or a_2=a_3; and m=6 if a_1 ≠ a_2 ≠ a_3.

The n-parton distribution function Γ^i_1...i_n_h(x_1,..,x_n; b_1,.., b_n; Q^2_1,..,Q^2_n) theoretically encodes all the 3D parton structure information of the hadron relevant to compute the NPS cross sections, including the density of partons in the transverse plane and any intrinsic partonic correlations in kinematical and/or quantum-number spaces.
Since Γ^i_1… i_n_h is potentially a very complicated object,one often resorts to simplified alternatives to compute NPS cross sections based on simpler quantities.As a matter of fact, without any loss of generality, any n-parton cross section can be always expressed in amore economical and phenomenologically useful form in terms of single-parton scattering (SPS) inclusive cross sections, theoretically calculable in perturbative quantum chromodynamics (pQCD) approaches through collinear factorization <cit.>as a function of “standard” (longitudinal) PDF, D^i_h(x,Q^2), at a given order of accuracy in the QCD coupling expansion (next-to-next-to-leading order, NNLO, being the current state-of-the-art for most calculations): _hh' → a = ∑_i_1,i_2∫ D^i_1_h(x_1; Q^2_1)σ̂^i_1i_2_a(x_1, x_1')D^i_2_h'(x_1'; Q^2_1) dx_1 dx_1' .More precisely, any n-parton cross section can be expressed as the nth-product of the corresponding SPS cross sections for theproduction of each single final-state particle, normalized by the (nth-1) power of an effective cross section,σ_hh' → a_1… a_n^nps = (m/n!) σ_hh' → a_1^sps⋯ σ_hh' → a_n^sps/^n-1, whereencodes all the unknowns related to the underlying generalized PDF.Equation (<ref>) encapsulates the intuitive result that the probability to produce n particlesin a given inelastic hadron-hadron collisionshould be proportional to the n-product of probabilities to independently produce each one of them,normalized by the nth-1 power of an effective cross section to guarantee the proper units of the final result(<ref>).[Indeed, in the simplest DPS case, the probability to produce particles a, b in a pp collision is: P_ pp→ ab = P_ pp→ a· P_ pp→ b =σ_ pp→ a/σ^ inel_ pp·σ_ pp→ b/σ^ inel_ pp,which implies: σ_ pp→ a,b = σ_ pp→ a·σ_ pp→ b/, with≈σ^ inel_ pp. In reality, the measured value of ≈ 15 mb is a factor of 2–3 lower ( the DPS probability is 2–3 times larger) than the naive ≈σ^ inel_ pp expectation for typical “hard” (minijet) inelastic pp partonic cross sections σ^ inel_ pp≈ 30–50 mb. This is so because the independent-scattering assumption does not hold as the probability to produce a second particle is higher in low-impact-parameter (large transverse overlap) pp eventswhere a first partonic scattering has already taken place.]The value ofin Eq. (<ref>) can be theoretically estimated making a few common approximations.First, the n-PDF are commonly assumed to be factorizable in terms of longitudinal and transverse components,Γ^i_1..i_n_h(x_1,…,x_n; b_1,…, b_n;Q^2_1,…,Q^2_n) = D^i_1..i_n_h(x_1, …, x_n; Q^2_1, …, Q^2_n) · f( b_1) ⋯ f( b_n), where f( b_1) describes the transverse parton density of the hadron, often considered a universal functionfor all types of partons, from which the corresponding hadron-hadron overlap function can be derived: T( b) = ∫ f( b_1) f( b_1 -b)d^2b_1, with the fixed normalization ∫ T( b)d^2b = 1. Making the further assumption that the longitudinal componentsreduce to the product of independent single PDF, D^i_1..i_n_h(x_1, …, x_n; Q^2_1, …, Q^2_n) = D^i_1_h(x_1; Q^2_1) ⋯ D^i_n_h(x_n; Q^2_n) ,the effective NPS cross section bears a simple geometric interpretation in terms of powers of the inverse of the integral of the hadron-hadron overlap function over all impact parameters, ={∫ d^2bT^n( b)}^-1/(n-1) . 
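To make Eq. (<ref>) concrete, the following sketch (ours; the Gaussian width Σ=0.5 fm is purely illustrative) evaluates σ_eff,NPS=[∫d^2b T^n(b)]^-1/(n-1) by radial quadrature for a Gaussian overlap function, for which the closed forms σ_eff,dps=4πΣ^2 and σ_eff,tps=2√3 πΣ^2 provide a check (1 fm^2 = 10 mb):

    import numpy as np
    from scipy.integrate import quad

    SIGMA = 0.5          # Gaussian width of the pp overlap function, fm (illustrative)

    def T(b):            # overlap function, normalized so that \int T(b) d^2b = 1
        return np.exp(-b**2 / (2 * SIGMA**2)) / (2 * np.pi * SIGMA**2)

    def sigma_eff_nps(n):
        # sigma_eff,NPS = [ \int d^2b T^n(b) ]^{-1/(n-1)}, in mb (1 fm^2 = 10 mb)
        integral = quad(lambda b: 2 * np.pi * b * T(b)**n, 0.0, np.inf)[0]
        return 10.0 * integral ** (-1.0 / (n - 1))

    for n in (2, 3, 4):
        print(n, round(sigma_eff_nps(n), 1))
    # n=2 -> 4*pi*Sigma^2 = 31.4 mb; n=3 -> 2*sqrt(3)*pi*Sigma^2 = 27.2 mb,
    # i.e. a tps/dps ratio sqrt(3)/2 = 0.87, the Gaussian (m=2) value quoted below.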
§ DOUBLE AND TRIPLE PARTON SCATTERING CROSS SECTIONS IN HADRON-HADRON COLLISIONS

The generalized expression (<ref>) for the case of double-parton-scattering cross sections in hadron-hadron collisions, hh' → a_1a_2, reads σ^dps_hh'→ a_1a_2 =(m/2)∑_i,j,k,l ∫Γ_h^ij(x_1,x_2; b_1, b_2; Q^2_1, Q^2_2) × σ̂^ik_a_1(x_1, x_1',Q^2_1) ·σ̂^jl_a_2(x_2, x_2',Q^2_2) × Γ_h'^kl(x_1', x_2'; b_1 -b, b_2 -b; Q^2_1, Q^2_2) × dx_1 dx_2 dx_1' dx_2' d^2b_1 d^2b_2 d^2b. Applying the “master” equations (<ref>) and (<ref>) for n=2, one can express this cross section as a double product of independent single inclusive cross sections, σ^dps_hh' → a_1a_2 =(m/2) σ^sps_hh' → a_1·σ^sps_hh' → a_2/σ_eff,dps, where the effective DPS cross section (<ref>) that normalizes the double SPS product is σ_eff,dps=[∫ d^2b T^2(b)]^-1.

Similarly, the generic expression (<ref>) for the TPS cross section of the process hh' → a_1a_2a_3 reads <cit.> σ^tps_hh' → a_1a_2a_3=(m/3!) ∑_i,j,k,l,m,n∫Γ^ijk_h(x_1,x_2,x_3; b_1, b_2, b_3; Q^2_1,Q^2_2, Q^2_3) × σ̂_a_1^il(x_1, x_1',Q^2_1) ·σ̂_a_2^jm(x_2, x_2',Q^2_2) ·σ̂_a_3^kn(x_3, x_3',Q^2_3) × Γ^lmn_h'(x_1', x_2', x_3'; b_1 -b, b_2 -b, b_3 -b; Q^2_1, Q^2_2, Q^2_3) × dx_1 dx_2 dx_3 dx_1' dx_2' dx_3' d^2b_1 d^2b_2 d^2b_3 d^2b, which can be reduced to a triple product of independent single inclusive cross sections, σ^tps_hh' → a_1a_2a_3 =(m/3!) σ^sps_hh' → a_1·σ^sps_hh' → a_2·σ^sps_hh' → a_3/σ^2_eff,tps, normalized by the square of an effective TPS cross section (<ref>), which amounts to <cit.> σ^2_eff,tps=[∫ d^2b T^3(b)]^-1.

One can estimate the values of the effective DPS (<ref>) and TPS (<ref>) cross sections via Eq. (<ref>) for different transverse parton profiles of the colliding hadrons, such as those typically implemented in modern Monte Carlo (MC) event generators such as PYTHIA 8 <cit.> and HERWIG <cit.>. In PYTHIA 8, the overlap function as a function of impact parameter is often parametrized in the form T(b)= m/(2π r^2_p Γ(2/m)) exp[-(b/r_p)^m], normalized to one, ∫ T(b) d^2b = 1, where r_p is the characteristic “radius” of the proton, Γ is the gamma function, and the exponent m depends on the MC “tune” obtained from fits to the measured underlying-event activity and various DPS cross sections in pp collisions <cit.>. It varies from a pure Gaussian (m=2) to a more peaked exponential-like (m=0.7, 1) distribution. From the corresponding integrals of the square and cube of T(b), we obtain: σ_eff,dps=(∫ d^2b T^2(b))^-1= 2π r^2_p 2^2/mΓ(2/m)/m, and σ_eff,tps=(∫ d^2b T^3(b))^-1/2= 2π r^2_p 3^1/mΓ(2/m)/m. From Eq. (<ref>), in order to reproduce the experimental σ_eff,dps ≃ 15± 5 mb value extracted in multiple DPS measurements at the Tevatron <cit.> and LHC <cit.>, the characteristic proton “radius” parameter amounts to r_p ≃ 0.11 ± 0.02, 0.24 ± 0.04, 0.49 ± 0.08 fm for exponents m= 0.7, 1, 2 as defined in Eq. (<ref>). The values of σ_eff,dps and σ_eff,tps, Eqs. (<ref>)–(<ref>), are of course closely related: σ_eff,tps=(3/4)^1/m·σ_eff,dps. Such a relationship is independent of the exact numerical value of the proton “size” r_p, but depends on the overall shape of its transverse profile, characterized by the exponent m. For the typical PYTHIA 8 exponents m= 0.7, 1, 2 tuned from experimental data <cit.>, one obtains σ_eff,tps= [0.66, 0.75, 0.87]×σ_eff,dps, respectively. The HERWIG event generator uses an alternative parametrization of the proton profile, described by the dipole fit of the two-gluon form factor in the momentum representation, F_2g(q)=1/(q^2/m^2_g+1)^2, where the gluon mass m_g parameter characterizes the transverse momentum q distribution of the proton, and the transverse density is obtained from its Fourier transform: f(b)=∫ e^-i b· q F_2g(q) d^2q/(2π)^2.
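The closed forms above can be inverted to calibrate the proton “radius” against the measured σ_eff,dps and to verify the (3/4)^1/m relation just quoted; a small numerical sketch (ours, with 1 fm^2 = 10 mb):

    from math import gamma, pi, sqrt

    def sig_dps(rp, m):      # 2*pi*rp^2 * 2^(2/m) * Gamma(2/m) / m,  in fm^2
        return 2 * pi * rp**2 * 2**(2 / m) * gamma(2 / m) / m

    def sig_tps(rp, m):      # 2*pi*rp^2 * 3^(1/m) * Gamma(2/m) / m,  in fm^2
        return 2 * pi * rp**2 * 3**(1 / m) * gamma(2 / m) / m

    target = 1.5             # sigma_eff,dps = 15 mb = 1.5 fm^2
    for m in (0.7, 1.0, 2.0):
        rp = sqrt(target * m / (2 * pi * 2**(2 / m) * gamma(2 / m)))
        print(m, round(rp, 2), round(sig_tps(rp, m) / sig_dps(rp, m), 2),
              round((3 / 4)**(1 / m), 2))
    # reproduces r_p = 0.11, 0.24, 0.49 fm and the ratios 0.66, 0.75, 0.87

With the dipole form factor just introduced, the corresponding effective cross sections take equally compact closed forms, given next.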
The corresponding DPS (<ref>) and TPS (<ref>) effective cross sectionsread <cit.>:= [∫F_2g^4(q)d^2q/(2π)^2]^-1=28 π/m^2_g ,and <cit.> =[∫ (2π)^2 δ( q_1+ q_2+ q_3) F_2g( q_1) F_2g( q_2) F_2g( q_3)× F_2g( -q_1) F_2g( -q_2) F_2g(- q_3) d^2q_1/(2π)^2d^2q_2/(2π)^2d^2q_3/(2π)^2]^-1/2.Numerically integrating the latter and combining it with (<ref>), we obtain = 0.83×,which is quite close to the value derived for the Gaussianoverlap function in8. In order to reproduce the experimentally measured ≃ 15 ± 5 mb values,the characteristic proton “size” for this parametrization amounts to r_g=1/m_g ≃ 0.13 ± 0.02 fm. Despite the wide range of proton transverse parton densities and associated effective radius parameters considered,we find that the ≲ result is robust with respect to the underlying parton profile. As a matter of fact, from the average and standard deviation of all typical parton transverse distributions studied inRef.<cit.>, the following relationship between double and triple scattering effectivecross sections can be derived:= k×, with k = 0.82± 0.11 .Thus, from the typical ≃ 15 ± 5 value extracted from a wide range of DPS measurementsat Tevatron and LHC, the following numerical effective TPS cross section is finally obtained:= 12.5 ± 4.5 mb.§.§ Many theoretical and experimental studies exist that have extractedfrom computed and/or measured DPS cross sections for a large variety of final-states in pp collisions <cit.>. In this subsection,we focuse therefore on the TPS case for which we presented the first-ever estimates in Ref <cit.>.The experimental observation of triple parton scatterings incollisions requires perturbatively-calculableprocesses with SPS cross sections not much smaller than 𝒪( 1 μ b) since, otherwise,the corresponding TPS cross sections (which go as the cube of the SPS values) are extremely reduced. Indeed,according to Eq. (<ref>) with the data-driven estimate (<ref>), a triple hard processpp→ a a a, with SPS cross sections _ pp→ a≈ 1 μb, has a very smallcross section σ^tps_ pp→ a a a≈ 1 fb. Evidence for TPS appears therebychallenging already without accounting for additional reducing factors arising from decay branching ratios, and experimental acceptances and reconstruction inefficiencies, of the produced particles. Promising processes to probe TPS, with not too small pQCD cross sections, are inclusive charm (→+X),and bottom (→+X), whose cross sections are dominated by gluon-gluon fusion (gg→) at small x, for which one can expect a non-negligible contributions of DPS <cit.>and TPS <cit.> to their total inclusive production (Fig. <ref>).The TPS heavy-quark cross sections can be computed with Eq. (<ref>) for m=1, σ_ pp→^tps = (σ_ pp→^sps)^3/(6 ^2) withgiven by (<ref>), and σ_ pp→^spscalculated via Eq. (<ref>) at NNLO accuracy using a modified version <cit.> of the(v2.0)code <cit.>, with N_f=3,4 light flavors, heavy-quark pole masses set at m_c,b=1.67, 4.66 GeV,default renormalization and factorization scales set at μ__R=μ__F=2m_c,b, and using the ABMP16 proton PDF <cit.>. Such NNLO calculations increase the total SPS heavy-quark cross sections by up to 20% at LHC energies compared to the corresponding NLO results <cit.>, reaching a better agreement with the experimental data,and featuring much reduced scale uncertainties (±50%,±15% for ,) <cit.>. Figure <ref> shows the resulting total SPS and TPS cross sections for charm and bottom production over = 35 GeV–100 TeV, and Table <ref> lists the results with associated uncertaintiesfor the nominal ppenergies at LHC and FCC. 
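For orientation, the numbers behind such tables involve only the elementary arithmetic of the pocket formulas; in the sketch below, the ≈7 mb NNLO charm cross section at 14 TeV is our own round input (not the tabulated value), chosen to illustrate the ∼5% TPS fraction discussed next:

    sigma_sps_cc  = 7.0     # assumed NNLO sigma(pp -> ccbar + X) at 14 TeV, mb
    sigma_eff_dps = 15.0    # mb, from Tevatron/LHC DPS measurements
    sigma_eff_tps = 12.5    # mb  (= 0.82 * sigma_eff_dps)

    # pocket formulas with m = 1 (identical ccbar final states)
    sigma_dps = sigma_sps_cc**2 / (2 * sigma_eff_dps)
    sigma_tps = sigma_sps_cc**3 / (6 * sigma_eff_tps**2)
    print(round(sigma_dps, 2), round(sigma_tps, 2),
          round(sigma_tps / sigma_sps_cc, 3))
    # -> ~1.6 mb DPS, ~0.37 mb TPS, i.e. a ~5% triple-ccbar fraction at 14 TeV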
The PDF uncertainties are obtained from the corresponding 28 eigenvaluesof the ABMP16 set. The dominant uncertainty comes from the theoretical scales dependence, which is estimatedby modifying μ__R and μ__F within a factor of two. Figure <ref> shows that the TPS cross sections rise fast with ,as the cube of the corresponding SPS cross sections. Triple- production from three independent partonscatterings amounts to 5% of the inclusive charm yields at the LHC (= 14 TeV) and to more than half of the total charm cross section at the FCC. Since the totalinelastic cross section at = 100 TeV is σ_ pp≃ 105 mb <cit.>, charm-anticharm triplets are expected to be produced in ∼15% of thecollisions at these energies. Triple- cross sections remain quite small and reach onlyabout 1% of the inclusive bottom cross section at FCC(100 TeV). These results indicate that TPS is experimentally observable in triple heavy-quark pair final-states at the LHC and FCC. The possibility ofdetecting triple charm-meson production in pp collisions at the LHC has been discussed in more detail in Ref.<cit.> § DOUBLE AND TRIPLE PARTON SCATTERING CROSS SECTIONS IN PROTON-NUCLEUS COLLISIONSIn proton-nucleus collisions, the parton flux is enhanced by the number A of nucleons in the nucleus and the SPS cross section is simply expected to be that of proton-proton collisions or, more exactly, that ofproton-nucleon collisions (, with N = p,n being bound protons and neutrons with their appropriate relativefractions in the nucleus) taking into (anti)shadowing modifications of the nuclear PDF <cit.>,scaled by the factor A,  <cit.>_ pA→ a= _ pN→ a ∫ ( b) = A ·_ pN→ a. Here, ( r) is the standard nuclear thickness function, analogous to Eq.  (<ref>) for the pp case, as a function of the impact parameter r between the colliding proton and nucleus,given by an integral of the nuclear density function ρ_A( r) over the longitudinal direction ( r) = ∫ρ_A(√(r^2+z^2)) dz, normalized to∫( r)= A ,which can be easily computed using (simplified) analytical nuclear profiles, and/or employing realisticFermi-Dirac ( Woods-Saxon) nuclear spatial densities determined in elastic eA measurements <cit.>,via a MC Glauber model <cit.>.The most naive assumption is to consider that the NPS cross sections in pA collisions can be obtained by simply A-scaling the corresponding pp NPS values, as done via Eq. (<ref>) for the SPS cross sections. We show next that DPSand TPS cross sections in proton-nucleus collisions can be significantly enhanced, with extra A^4/3 (for DPS and TPS) and A^5/3 (for TPS alone) terms complementing the A-scaling, due to additional multiple scattering probabilities among partons from different nucleons.§.§ DPS cross sections in pA collisions The larger transverse parton density in nuclei compared to protons results in enhanced DPS cross sections, pA → ab, coming from interactions where the two partons of the nucleus belong to (1) the same nucleon, and (2) two differentnucleons <cit.> as shown in Fig. <ref>. 
Namely, σ^dps_pA = σ^dps,1_pA + σ^dps,2_pA, where

* The first term is just the A-scaled DPS cross section in pN collisions: σ^dps,1_pA→ a b = A·σ^dps_pN→ a b,
* the second contribution, from parton interactions involving two different nucleons, depends on the square of the nuclear thickness function: σ^dps,2_pA→ a b=σ^dps_pN→ a b·σ_eff,dps· F_pA, with F_pA=(A-1)/A ∫ T_A^2(r) d^2r= ((A-1)/A)·T_AA(0), where the (A-1)/A factor accounts for the difference between the number of nucleon pairs and the number of different nucleon pairs, and T_AA(0) is the nuclear overlap function at b = 0 for the corresponding AA collision.

In the simplest approximation of a spherical nucleus with uniform nucleon density of radius R_a∝ A^1/3, the factor (<ref>) can be written as F_pA = 9 A(A-1)/(8 π R_a^2)≈ A^4/3/(14 π) [mb^-1], where the second approximate equality (valid for large A) makes explicit the dependence on the mass number A alone. For Pb, with A=208 and R_a≈ 7 fm≈ 22 mb^1/2, one obtains F_pA≈ 31.5 mb^-1, in good agreement with the more accurate result, F_pA = 30.25 mb^-1, computed with a Glauber MC <cit.> using the standard Woods-Saxon spatial density of the lead nucleus (radius R_a = 6.36 fm and surface thickness a = 0.54 fm) <cit.>.

The sum of (<ref>) and (<ref>) yields the inclusive cross section for the DPS production of particles a and b in a pA collision: σ^dps_pA→ a b= A·σ^dps_pN→ a b [1+σ_eff,dps F_pA/A] ≈ A·σ^dps_pN→ a b [1+ (σ_eff,dps/14 [mb]) A^1/3/π], which is enhanced by the factor in parentheses compared to the A-scaled DPS cross section in pN collisions. Given the experimental σ_eff,dps ≈ 15 mb value, the pp-to-pA DPS enhancement factor can be further numerically simplified to [1+A^1/3/π], which goes from ∼1.4 for small to ∼3 for large nuclei. Namely, the relative weight of the two DPS terms of Eq. (<ref>) goes from σ^dps,1_pA→ a b : σ^dps,2_pA→ a b=0.7:0.3 (small A) to 0.33:0.66 (large A). Thus, in the case of pPb collisions, 33% of the DPS yields come from partonic interactions within just one nucleon of the Pb nucleus, whereas 66% of them involve parton scatterings from two different Pb nucleons. The final factorized DPS formula in proton-nucleus collisions can be written as a function of the elementary proton-nucleon single-parton cross sections as σ^dps_pA→ ab = (m/2) σ^sps_pN→ a·σ^sps_pN→ b/σ^pA_eff,dps, where the effective DPS cross section in the denominator depends on the effective cross section measured in pp and on a purely geometric quantity (F_pA) that is directly derivable from the well-known nuclear transverse profile, namely σ^pA_eff,dps= σ_eff,dps/[A+σ_eff,dps F_pA] ≈σ_eff,dps/[A+σ_eff,dps T_AA(0)] ≈σ_eff,dps/[A+A^4/3/π]. For a Pb nucleus (with A = 208 and F_pA = 30.25 mb^-1) and taking σ_eff,dps = 15 ± 5 mb, one obtains σ^pA_eff,dps = 22.5 ± 2.3 μb. The overall increase of DPS cross sections in pA compared to pp collisions is σ_eff,dps/σ^pA_eff,dps≈ [A+A^4/3/π] which, in the case of pPb, implies a factor of ∼600 relative to pp (ignoring nuclear PDF effects here), i.e. a factor of [1+A^1/3/π]≈ 3 higher than the naive expectation assuming the same A-scaling as for the single-parton cross sections, Eq. (<ref>). One can thus exploit such large expected DPS signals over the SPS backgrounds in proton-nucleus collisions to study double parton scatterings in detail and, in particular, to extract the value of σ_eff,dps independently of measurements in pp collisions, given that the parameter F_pA in Eq. (<ref>) depends on the comparatively better-known transverse density of nuclei.
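The factor F_pA (and hence σ^pA_eff,dps) can be reproduced by direct numerical integration of a Woods-Saxon profile, without running a full Glauber MC; a minimal sketch (ours), using the Pb parameters quoted above:

    import numpy as np
    from scipy.integrate import quad

    A, R, d = 208, 6.36, 0.54     # Pb: mass number, radius [fm], diffuseness [fm]

    # normalize the Woods-Saxon density so that \int rho d^3r = A
    rho0 = A / quad(lambda r: 4 * np.pi * r**2 / (1 + np.exp((r - R) / d)), 0, 40)[0]

    def TA(b):                    # nuclear thickness function T_A(b), in fm^-2
        return 2 * quad(lambda z: rho0 / (1 + np.exp((np.hypot(b, z) - R) / d)),
                        0, 40)[0]

    b = np.linspace(0, 20, 200)
    T2 = np.array([TA(x)**2 for x in b])
    F_pA = (A - 1) / A * np.trapz(2 * np.pi * b * T2, b) / 10.0  # fm^-2 -> mb^-1

    sig_dps = 15.0                                               # mb
    print(round(F_pA, 1), round(1e3 * sig_dps / (A + sig_dps * F_pA), 1))
    # -> F_pA ~ 30 mb^-1 and sigma_eff,dps(pA) ~ 22.5 microbarn, as quoted above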
§.§.§One of the “cleanest” channels to study DPS in pp collisions is same-sign WW production <cit.> as it featuresprecisely-known pQCD SPS cross sections, a clean experimental final-state with two like-sign leptons plus missing transverse momentum from the undetected neutrinos, and small non-DPS backgrounds[The lowest order at which two same-signW bosons can be produced is accompanied with two jets (W^±W^±jj), q q →W^±W^± q' q' with q=u,c,… and q'=d,s,… whose leading contributions are (α_ s^2α_ w^2) for the mixed QCD-electroweak diagrams, and (α_ w^4) for the pure vector-boson fusion (VBF) processes, where α_ w is the electroweak coupling.]. The DPS cross section in pPb for same-sign WW production was first estimated in Ref.<cit.>, computing the SPS W^± cross sections (_ pN→ W)with  (v.6.2) <cit.> at NLO accuracy with CT10 proton <cit.>and EPS09 nuclear <cit.> PDF, and setting default renormalisation and factorisation theoretical scales to μ = μ__R = μ__F = m__W. The background W^±W^±jj cross sections are computed withfor the QCD part (formally at LO, but setting μ__R = μ__F = 150 GeVto effectively account for missing higher-order corrections), and with  (v.2.6) <cit.> for theelectroweak contributions with theoretical scales set to the momentum transfer of the exchanged boson, μ^2 = t__W,Z.In pPb at 8.8 TeV, the EPS09 nuclear PDF modifies the total W^+ (W^-) production cross section by about -7% (+15%) comparedto that obtained using the free proton CT10 PDF <cit.>. We extend here the results of Ref. <cit.>, using Eq. (<ref>) with m = 1 and= 22.5 ± 2.3 μb, and including FCC pPb energies (= 63 TeV). The resulting cross sections are listed in Table <ref>. The uncertainties of the SPS NLO single-W cross sections amount to about ±10% by adding in quadrature those fromthe EPS09 PDF eigenvector sets (the proton PDF uncertainties are much lower in the relevant x,Q^2 regions) and from the theoretical scales (obtained by independently varying μ__R and μ__F within a factor of two).The QCD W^±W^±jj cross sections uncertainties are those from the full-NLO calculations <cit.>, whereasthose of the VBF cross sections are much smaller as they do not involve any gluons in the initial state.The DPS cross section uncertaintiesare dominated by a propagated ±30% uncertainty from .Figure <ref> shows the computed total cross sections for all W processes considered over the energy = 2–65 TeV range. At the nominal LHC pPbenergy of 8.8 TeV, the same-sign WW DPS cross sectionis _ pPb→ WW≈ 150 pb (thick curve),larger than the sum of SPS backgrounds,_ pPb→ WWjj (lowest dashed curve) obtained adding the QCD and electroweak cross sections for the production of W^+W^+ (W^-W^-) plus 2 jets. In the fully-leptonic final-state (W^±W^±→ℓν ℓ'ν', with ℓ = e^±, μ^±)and accounting for decay branching ratios and standard ATLAS/CMS acceptance and reconstruction cuts(|y^ℓ|<2.5, ^ℓ>15 GeV), one expects up to 10 DPS same-sign WW events in L_ int = 2 pb^-1integrated luminosity <cit.>. At FCC energies (= 63 TeV), the ssWW DPS cross section is more thantwice larger than the ssWW(jj) SPS one. With L_ int≈ 30 pb^-1, and a factor twice larger rapiditycoverage <cit.>, one expects 10^4 ssWW pairs from DPS processes. Same-sign WW productionin pPb collisions constitutes thereby a promising channel to measure ,independently of the standard pp-based extractions of this quantity. 
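The size of the DPS signal follows directly from the pocket formula with m=1 per same-sign combination; in the sketch below, the ≈65 and ≈45 nb single-W cross sections in pN collisions at 8.8 TeV are our own indicative inputs (not the NLO values behind the table above):

    sig_eff_pA = 22.5e3            # nb (= 22.5 microbarn), effective DPS xsec in pPb
    sig_Wp, sig_Wm = 65.0, 45.0    # assumed sigma(pN -> W+), sigma(pN -> W-), nb

    # same-sign pairs are identical final states, hence m = 1 in the pocket formula
    dps = sig_Wp**2 / (2 * sig_eff_pA) + sig_Wm**2 / (2 * sig_eff_pA)   # nb
    print(round(1e3 * dps))        # -> O(140) pb, the size quoted above

    BR, L = 2 * 0.108, 2.0e3       # W -> e/mu + nu; L = 2 pb^-1 = 2000 nb^-1
    print(round(dps * L * BR**2))  # -> ~13 leptonic pairs before acceptance cuts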
Table <ref> collects the estimated DPS cross sections for the combined production of quarkonia (, Υ)and/or electroweak bosons (W, Z) incollisions at the nominal LHC energy of = 8.8 TeV.The quoted SPS pN cross sections have been obtained at NLO accuracy with the color evaporation model (cem) <cit.> for quarkonia (see details in Section <ref>), and withfor the electroweak bosons,using CT10 proton and EPS09 nuclear PDF.The DPS cross sections are estimated via Eq. (<ref>) with = 22.5 μb, and the visibleDPS yields are quotedfor L_ int = 1 pb^-1 integrated luminosities,taking into account the branching fractions BR(,Υ,W,Z) = 6%, 2.5%, 11%, 3.4% per dilepton decay;plus simplified acceptance and efficiency losses: A× E() ≈ 0.01 (over 1-unit ofrapidity at |y|=0, and |y|=2), and A× E(Υ; W,Z)≈ 0.2; 0.5 (over |y|<2.5). All listed processes are in principle observable in the LHC proton-lead runs, whereas rarer DPS processes like W+Z and Z+Zhave much lower cross sections and require much higher luminosities and/orenergies such as those reachable at the FCC.§.§ TPS cross sections in pA collisionsSimilarly to the DPS case, the proton-nucleus TPS cross section for the pA → abc process, is obtained from the sum of three contributions:_ pA = _ pA + _ pA + _ pA ,with* A cross section, scaling like Eq. (<ref>) for the SPS case, corresponding to theTPS value in pN collisions scaled by A, namely:σ^tps, 1_ pA→ a b c = A ·σ^tps_ pN→ a b c .* A second contribution, involving interactions of partons from two different nucleons in the nucleus, depending on the square of ,σ^tps, 2_ pA→ a b c = σ^tps_ pN→ a b c· 3^2/F_ pA, with F_ pA given by Eq. (<ref>).* A third term, involving interactions among partons from three different nucleons, depending on the cube of ,σ^tps, 3_ pA→ a b c = σ^tps_ pN→ a b c·^2 · C_ pA,C_ pA = (A-1)(A-2)/A^2∫ ^3( b) , with the (A-1)(A-2)/A^2 factor introduced to take into account the difference between thetotal number of nucleon TPS and that of different nucleon TPS. By using a hard-sphere approximation for a nucleus of radius R_a∝ A^1/3,the C_ pA factor can be analytically calculated asC_ pA= 27/4A (A-1) (A-2)/5 π^2 R_a^4≈A^5/3/160 π^2 [mb^-2] , where the last approximate equality holds for large A. For a Pb nucleus (A = 208, R_a = 22 mb^1/2) this factor amounts to C_ pA≈ 5.1 mb^-2, in agreement with the C_ pA = 4.75 mb^-2 numericallyobtained through a Glauber MC with a realistic Woods-Saxon Pb profile. The inclusive TPS cross section for the independent production of three particles a, b, and c incollisionsis obtained from the sum of the three terms (<ref>), (<ref>), and (<ref>):_ pA→ a b c= A_ pN → a b c[1+3^2/F_ pA/A + ^2 C_ pA/A] ≈A _ pN → a b c[1+^2/3 A^1/3/14 [mb]π+ ^2 A^2/3/160 [mb^2]π^2],where the last approximation holds for large A, and can be written as a function ofand A alone making use of Eq. (<ref>):_ pA→ a b c≈ A _ pN → a b c[1+A^1/3/5.7 [mb]π+ ^2 A^2/3/160 [mb^2]π^2] .The TPS cross section in pA collisions is enhanced by the factor in parentheses in Eqs. (<ref>)–(<ref>) compared to thecorresponding one incollisions scaled by A. 
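Combining the three contributions just listed amounts to summing three 1/σ^2 pieces (the closed form is given below); a numerical check (ours) with the Pb Glauber values, F_pA from the previous subsection and C_pA = 4.75 mb^-2 as derived next:

    A, F_pA, C_pA = 208, 30.25, 4.75      # Pb geometry: [-, mb^-1, mb^-2]
    sig_dps, sig_tps = 15.0, 12.5         # mb, pp effective cross sections

    terms = (A / sig_tps**2, 3 * F_pA / sig_dps, C_pA)   # three 1/sigma^2 pieces
    sig_tps_pA = sum(terms) ** -0.5                      # mb
    print(round(sig_tps_pA, 3),                          # -> 0.287 mb ~ 0.29 mb
          [round(t / terms[0], 2) for t in terms])       # -> weights [1, 4.55, 3.57]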
The final formula for TPS in proton-nucleus readsσ_ pA→ abc^tps = (m/6) σ_ pN→ a^sps·σ_ pN→ b^sps·σ_ pN→ c^sps/^2 , where the effective TPScross section in the denominator depends on the effective TPS cross section measured in pp,and on purely geometric quantities (F_ pA, C_ pA) directly derivable from the well-known nuclear profiles <cit.>,=[A/^2 + 3 F_ pA[mb^-1]/ + C_ pA[mb^-2] ]^-1/2 which can be numerically approximatedas a function of the number A of nucleons in the nucleus (for A large) alone, as follows≈ [A/^2 + A^4/3/5.7[mb] π+ A^5/3/160[mb^2] π^2]^-1/2 . For a Pb nucleus (A = 208, F_ pA = 30.25 mb^-1, and C_ pA = 4.75 mb^-2) and taking = 12.5 ± 4.5 mb,the effective TPS cross section amounts to = 0.29 ± 0.04 mb. Thus, for pPb the relative importance of the three TPS terms of Eq. (<ref>) isσ^tps, 1_ pA→ a b c:σ^tps, 2_ pA→ a b c:σ^tps, 3_ pA→ a b c =1:4.54:3.56. Namely, in pPb collisions, 10% of the TPS yields come from partonic interactions within just one nucleonof the lead nucleus, 50% involve scatterings within two nucleons, and 40% come from partonic interactions in three different Pb nucleons. The sum of the three contributions in Eq. (<ref>), ignoring differences between pN and pp collisions, indicates that the TPS cross sections in pPb are about nine times larger than the naive expectation based on A-scalingof the corresponding pN TPS cross sections, Eq. (<ref>). One can thus exploit the large expected TPSsignals in proton-nucleus collisions to extract theparameter, and therebyvia Eq. (<ref>),independently of TPS measurements in pp collisions—given that the F_ pA and C_ pA parameters in Eq. (<ref>)depend on the comparatively better known transverse density of nuclei.§.§.§As a concrete numerical example in Ref.<cit.> we have computed the TPS cross sections for charm () and bottom () production, following the motivation for the similar measurement in pp collisions (Section <ref>), over a wide range ofenergies, ≈ 5–500 TeV, ofrelevance for collider (LHC and FCC) and ultra-high-energy cosmic rays physics. The TPS heavy-quark cross sections are computed via Eq. (<ref>) for m=1, σ_ pA→,^tps = (σ_ pN→,^sps)^3/(6 ^2) with the effective TPS cross sections given by Eq. (<ref>): = 0.29 ± 0.04 mb for pPb,and = 2.2 ± 0.4 mb for p-Air collisions[Using A = 14.3 for a 78%–21%mixture of ^14N–^16O,with F_ pA = 0.51 mb^-1, and C_ pA = 0.016 mb^-2 obtained via a Glauber MC <cit.>.].The SPS cross sections, σ_ pN→,^sps, are calculated at NNLO via Eq. (<ref>)with(v.2.0) with the same setup as described in Sec. <ref>, using the ABMP16 proton and EPS09 nuclear PDF.In thecase, the inclusion of EPS09 nuclear shadowing reduces moderately the total charm and bottomcross sections in pN compared to pp collisions, by about 10% (15%) and 5% (10%) at the LHC (FCC). At = 5.02 TeV, our prediction(σ_ pPb→^sps,nnlo = 650 ± 290_ sc± 60_pdf mb) agrees well with the ALICE total D-mesonmeasurement <cit.> extrapolated using FONLL <cit.> to a total charm cross sectionof σ_ pPb→^alice = 640 ± 60_ stat ^+60_-110|_ syst mb(data point in the top-left panel of Fig. <ref>). Since the TPS pPb cross section go as the cube of σ_ pN→^sps, the impact of shadowing isamplified and leads to 15–35% depletions of the TPS cross sections compared to results obtained with the free proton PDF.Table <ref> collects the total inelastic and the heavy-quarks cross sections at = 8.8 TeV and 63 TeV in pPb collisions,and at = 430 TeV in p-Air collisions. 
The latterenergy corresponds to the so-called “GZK cutoff” <cit.> reached in collisions of O (10^20 eV) proton cosmic-rays, with N and O nuclei at rest in the upper atmosphere.The PDF uncertainties include those from the proton and nucleus in quadrature, as obtained from the corresponding 28⊕30eigenvalues of the ABMP16⊕EPS09 sets. The dominant uncertainty is linked to the theoretical scale choice, estimatedby modifying μ__R and μ__F within a factor of two. At the LHC, the large SPScross section (∼1 b) results in triple- cross sections from independentparton scatterings amounting to about 20% of the inclusive charm yields. Since the total inelastic pPb cross sectionsis σ^ inel_ pPb≈ 2.2 b, charm TPS takes place in about 10% of the pPb events at 8.8 TeV. At the FCC, the theoretical TPS charm cross section even overcomes the inclusive charm one. Such an unphysical result indicatesthat quadruple, quintuple,... parton-parton scatterings are expected to produce extrapairs with non-negligible probability. The huge TPScross sections in pPb at = 63 TeV, will make triple- production, with σ(+X)≈ 1 mb, observable. Triple- cross sections remain comparatively small, in the 0.1 mb range,at the LHC but reach ∼10 mb ( 3% of the total inclusive bottom cross section) at the FCC. Figure <ref> plots the cross sections over ≈ 40 GeV–500 TeV for SPS (solid bands), TPS (dashed bands)for charm (left) and bottom (right) production, and total inelastic (dotted curve) in pPb (top panels) and p-Air (bottom panels) collisions.Whenever the central value of the theoretical TPS cross section overcomes the inclusive charm cross section, indicative of multiple(beyond three) -pair production, we equalize it to the latter. At ≈ 25 TeV, the total charm andinelastic pPb cross sections are equal implying that, above thisenergy, allinteractions produce at least three charm pairs.In thecase, such a situation only occurs at much higherenergies, above 500 TeV. For p-Air collisions at the GZK cutoff, the cross section for inclusive as well as TPS charm production equals the total inelastic cross section (σ^ inel__ pAir≈ 0.61 b) indicating that all p-Air collisions produce at least three -pairs in multiple partonic interactions. In thecase, about 20% of the p-Air collisions produce bottom hadrons, but only about 4% of them have TPS production. These results emphasize the numerical importance of TPS processes inproton-nucleus collisions at colliders, and their relevance for hadronic MC models commonly used for the simulation ofultrarelativistic cosmic-ray interactions with the atmosphere <cit.> which, so far, do not include anyheavy-quark production.§ DOUBLE AND TRIPLE PARTON SCATTERING CROSS SECTIONS IN NUCLEUS-NUCLEUS COLLISIONSIn nucleus-nucleus collisions, the parton flux is enhanced by A nucleons in each nucleus, and the SPScross section is simply expected to be that of NN collisions, taking into account (anti)shadowing effects in the nuclear PDF,scaled by the factor A^2,  <cit.>_aa→ a = ∫( b)d^2b = A^2 ·_nn→ a .where ( b) the standard nuclear overlap function, normalized to A^2,( b) = ∫( b_1) ( b_1-b) d^2b_1 d^2b,with ( b) being the nuclear thickness function at impact parameter b, Eq. (<ref>),connecting the centres of the colliding nucleus in the transverse plane. In the next two subsections, we present the estimates for DPS and TPS cross sections in AA collisions from the corresponding SPS values.§.§ The DPS cross section in AA is the sum of three terms, corresponding to the diagrams of Fig. 
<ref>,_aa = _aa + _aa + _aa , where * The first term, similarly to the SPS cross sections Eq. (<ref>), is just the DPS cross section in NN collisions scaled by A^2,_aa→ a b = A^2 ·_nn→ a b .* The second term accounts for interactions of partons from one nucleon in one nucleus with partonsfrom two different nucleons in the other nucleus,_aa→ a b = 2_nn→ a b·· A· F_ pA ,with F_ pA≈(0) given by Eq. (<ref>). * The third contribution from interactions of partons from two different nucleons in one nucleus withpartons from two different nucleons in the other nucleus, reads_aa→ a b = _nn→ a b·· T_ 3,aa with, T_ 3,aa =(A-1/A)^2∫( b_1)( b_2)( b_1-b)( b_2-b)d^2b_1 d^2b_2 d^2b=(A-1/A)^2∫ ^2( r) ≈A^2/2·(0) ,where the latter integral of the nuclear overlap function squared does not depend much on the precise shape of the transverse parton density in the nucleus, amounting to A^2/1.94·(0) for a hard-sphere and A^2/2·(0) for a Gaussian profile. The factor ((A-1)/A)^2 takes into accountthe difference between the number of nucleon pairs and the number of different nucleon pairs.Adding (<ref>), (<ref>), and (<ref>),the inclusive cross section of a DPS process with two hard parton subprocesses a and b in collisionscan be written as_aa→ a b= A^2_nn→ a b[1+2/A F_ pA+(A-1)^2/A^2∫^2( r)] ≈A^2_nn→ a b[1+2/A(0) + 1/2(0)] ≈A^2_nn→ a b[1+ /7 [mb] π A^1/3 + /28 [mb] π A^4/3],where the last approximation, showing the A-dependence of the DPS cross sections, applies for large nuclei.The factor in parentheses in Eqs. (<ref>)–(<ref>) indicates the enhancement in DPS cross sections in AA compared to the corresponding A^2-scaledvalues in nucleon-nucleon collisions, Eq. (<ref>), which amounts to ∼27 (for small A=40) or ∼215 (for large A=208). The overall mass-number scaling of DPS cross sections in AA compared to pp collisions is given by a (A^2+k A^7/3+w A^10/3) factor with k,w≈0.7,0.2, which is clearly dominated numerically by the A^10/3 term.The final DPS cross section “pocket formula” in heavy-ion collisions can be written as _aa→ ab = (m/2) _nn→ a·_nn→ b/,with the effectivenormalization cross section amounting to≈1/A^2[^-1+2/A T_aa(0) + 1/2 (0)] .For a value of ≈ 15 mb and for nuclei with mass numbers A = 40–240,we find that the relative weights of the three components contributing to DPS scattering in AA collisions are 1:2.3:23 (for A = 40) and 1:4:200 (for A = 208). Namely, only 13% (for ^40Ca+^40Ca) or2.5% (for ^208Pb+^208Pb) of the DPS yields in AA collisions come from the first two diagrams ofFig. <ref> involving partons from one single nucleon.Clearly, the “pure” DPS contributions arising from partonic collisions within a single nucleon (first and second termsof Eq. (<ref>)) are much smaller than the last term from double particle production coming fromtwo independent nucleon-nucleon collisions. The DPS cross sections inare practically unaffected by the value of, but dominated instead by double-parton interactions from different nucleons in both nuclei. In the case of ^208Pb -^208Pb collisions, the numerical value of Eq. (<ref>) is= 1.5 ± 0.1 nb, with uncertainties dominated by those of the Glauber MC determination of (0). Whereas the single-parton cross sections incollisions, Eq. (<ref>), are enhanced by afactor of A^2 ≃ 4· 10^4 compared to that in pp collisions, the corresponding double-parton crosssections are enhanced by a much higher factor of /∝ 0.2 A^10/3≃ 10^7. 
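The same arithmetic applied to the bracketed expression above for PbPb, with T_AA(0) ≈ 30.4 mb^-1 consistent with the Glauber value used earlier, reproduces both the quoted effective cross section and the 1:4:200 term weights (a sketch, ours):

    A, TAA0 = 208, 30.4        # Pb mass number and T_AA(0) in mb^-1
    sig_dps = 15.0             # mb

    inv = 1 / sig_dps + (2 / A) * TAA0 + 0.5 * TAA0   # bracket of the Eq., mb^-1
    print(round(1e6 / (A**2 * inv), 2))               # -> ~1.5 nb (1 mb = 1e6 nb)
    print(round((2 / A) * TAA0 * sig_dps, 1),         # second term vs. first: ~4.4
          round(0.5 * TAA0 * sig_dps))                # third term vs. first: ~228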
§.§.§ Centrality dependence of DPS cross sections in AA collisions The DPS cross sections discussed above are for “minimum bias” AA collisions without any selection in reaction centrality. The cross sections for single and double-parton scattering within an impact-parameter interval [b_1,b_2], corresponding to a given centrality percentile f_% of the totalcross section σ^ inel_aa,with average nuclear overlap function ⟨[b_1,b_2]⟩ read (for large A, so that A-1≈ A): _aa[b_1,b_2] → a =A^2_nn→ af_1[b_1,b_2] =_nn→ a·f_% σ^ inel_aa·⟨[b_1,b_2]⟩, _aa[b_1,b_2]→ a b =A^2_nn→ a bf_1[b_1,b_2] ×× [1+2/A(0)f_2[b_1,b_2]/f_1[b_1,b_2]+(0)f_3[b_1,b_2]/f_1[b_1,b_2]],where the latter has been obtained integrating Eq. (<ref>) over b_1 < b< b_2, andwhere the three dimensionless and appropriately-normalized fractions f_1, f_2, and f_3 are:f_1[b_1,b_2]= 2π/A^2∫_b_1^b_2bdb(b) = f_% σ^ inel_aa/A^2 ⟨[b_1,b_2]⟩,f_2[b_1,b_2]= 2π/A (0) ∫_b_1^b_2bdb ∫ d^2b_1 ( b_1) ( b_1-b) ( b_1-b),f_3[b_1,b_2]= 2π/A^2 (0) ∫_b_1^b_2bdb ^2(b).The integrals f_2, and f_3 can be evaluated <cit.> for small enough centrality bins around a given impact parameter b. The dominant f_3/f_1 contribution in Eq. (<ref>) is simply given by the ratio⟨[b_1,b_2]⟩/(0) which is practically insensitive (except for very peripheral collisions) to the precise shape of the nuclear density profile.The second centrality-dependent DPS term, f_2/f_1,cannot be expressed in a simple form in terms of (b), but it is of order unity for the most central collisions, f_2/f_1= 4/3, and 16/15 for Gaussian and hard-sphere profiles respectively,and it is suppressed in comparison with the third leading term by an extra factor ∼2/A.Finally, for not very-peripheral collisions (f_%≲ 0–65%), the DPS cross section in a (thin) impact-parameter [b_1,b_2] range can be approximated by _aa→ a b[b_1,b_2]≈ _nn→ a b··f_% σ^ inel_aa·⟨[b_1,b_2]⟩^2=(m/2) _nn→ a·_nn→ b·f_% σ^ inel_aa·⟨[b_1,b_2]⟩^2 .Dividing this last expression by Eq. (<ref>), one finally obtains the corresponding ratio of double- to single-parton-scattering cross sections as a function of impact parameter[Such analytical expression neglectsthe first and second terms of Eq. (<ref>). In the f_%≈ 65–100% centrality percentile, the secondterm would add about 20%more DPS cross-sections, and for very peripheral collisions (f_%≈ 85–100%, where⟨[b_1,b_2]⟩ is of order or less than 1/) the contributions from the first term are also non-negligible.]:(_aa→ a b/_aa→ a)[b_1,b_2] ≈(m/2)_nn→ b·⟨[b_1,b_2]⟩ . §.§.§ Quarkonia has been historically considered a sensitive probe of the quark-gluon-plasma (QGP) formed in heavy-ioncollisions <cit.>, and thereby their production channels need to be theoretically and experimentally wellunderstood in pp, pA and AA collisions <cit.>. Double-quarkonium (, Υ Υ) production is a typical channel for DPS studies in pp,given their large cross sections and relatively well-understood double-SPS backgrounds <cit.>. In Ref. <cit.>, the DPS cross section for double- production incollisionshas been estimated via Eq. (<ref>) with m=1, = 1.5 ± 0.1 nb,and prompt- SPS cross section computed at NLO via cem <cit.> with theCT10 proton and the EPS09 nuclear PDF, and theoretical scales μ__R = μ__F = 1.5m_c for a c-quark massm_c = 1.27 GeV. The EPS09 nuclear modification factors result in a reduction of 20–35% of thecross sections compared to those calculated using the free proton PDFs. 
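Before turning to the results, the size of the DPS-to-SPS J/ψ ratio can be anticipated from the pocket formula; the ≈25 μb shadowed NN→J/ψ cross section at 5.5 TeV used below is our own round input, not the NLO CEM value:

    A = 208
    sig_eff_AA  = 1.5e-6       # mb (= 1.5 nb), PbPb effective DPS cross section
    sig_NN_jpsi = 25e-3        # mb, assumed shadowed sigma(NN -> J/psi) at 5.5 TeV

    sps = A**2 * sig_NN_jpsi                       # single J/psi in PbPb
    dps = sig_NN_jpsi**2 / (2 * sig_eff_AA)        # J/psi pairs from DPS (m = 1)
    print(round(sps / 1e3, 2), round(dps / sps, 2))  # -> ~1.1 b and ~0.19
    # i.e. about 20% of PbPb -> J/psi events contain a second J/psi from DPS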
Figure <ref> shows the √s_NN-dependence of single-J/ψ cross sections in pp, NN and PbPb collisions (top panel), and of double-J/ψ cross sections in PbPb collisions, as well as the fraction of J/ψ events with a second J/ψ produced via DPS (bottom panel). Our theoretical setup with the CT10 (anti)proton PDF alone agrees well with the experimental pp and p̄p data <cit.> extrapolated to full phase space <cit.> (squares in Fig. <ref>). At the nominal PbPb energy of 5.5 TeV, the single prompt-J/ψ cross section is ∼1 b, and ∼20% of such collisions are accompanied by the production of a second J/ψ from a double parton interaction. Accounting for dilepton decays, acceptance and efficiency, which reduce the yields by a factor of ∼3·10^−7 in the ATLAS/CMS (central) and ALICE (forward) rapidities, the visible cross section is dσ/dy|_{y=0,2} ≈ 60 nb, i.e. about 250 double-J/ψ events per unit rapidity (both at central and forward y) are expected in the four combinations of dielectron and dimuon channels for a L_int = 1 nb^−1 integrated luminosity (assuming no net in-medium suppression or enhancement). Following Eq. (<ref>), the probability of DPS production increases rapidly with decreasing impact parameter, and ∼35% of the most central PbPb → J/ψ + X collisions have a second J/ψ produced in the final state (Fig. <ref>). These results show quantitatively the large probability for double production of J/ψ mesons in high-energy nucleus-nucleus collisions. Thus, the observation of a J/ψ pair in a given PbPb event should not be (blindly) interpreted as indicative of J/ψ production via cc̄ regeneration in the QGP <cit.>, since DPS constitute an important fraction of the inclusive yield, with or without final-state dense-medium effects.

Table <ref> collects the DPS cross sections for the (pair) production of quarkonia (J/ψ, Υ) and/or electroweak bosons (W, Z) in PbPb collisions at the nominal LHC energy of 5.5 TeV, obtained via Eq. (<ref>) with σ_{eff,AA} = 1.5 nb. The visible DPS yields for L_int = 1 nb^−1 are quoted taking into account BR(J/ψ, Υ, W, Z) = 6%, 2.5%, 11%, 3.4% per dilepton decay, plus simplified acceptance and efficiency losses: A×E(J/ψ) ≈ 0.01 (over 1 unit of rapidity at |y| = 0 and |y| = 2), and A×E(Υ; W, Z) ≈ 0.2; 0.5 (over |y| < 2.5). All listed processes are in principle observable in the LHC heavy-ion runs, whereas rarer DPS processes like W+Z and Z+Z have much lower visible cross sections and would require much higher luminosities and/or energies such as those reachable at the FCC.

§.§ TPS cross sections in AA collisions

For completeness, we estimate here the expected scaling of TPS cross sections in nucleus-nucleus compared to proton-proton collisions. Following our discussion for pA in Sec. <ref>, the TPS cross section in AA collisions results from the sum of nine terms, schematically represented in Fig. <ref>, generated by three independent structures appearing in triple parton scatterings in AA collisions:

σ^{TPS}_{AA→abc} ∝ A·A + 3 A·A^2 + A·A^3 + 3 A^2·A + 9 A^2·A^2 + 3 A^2·A^3 + A^3·A + 3 A^3·A^2 + A^3·A^3 .

These nine terms have different prefactors that can be expressed as a function of the nuclear thickness function, and of the effective TPS and DPS cross sections, as done previously for the simpler pA case, see Eq. (<ref>).
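The combinatorics of the nine terms can be read off the schematic sum above: each nucleus contributes the structures A, 3A^2 and A^3, and the nine TPS terms are their pairwise products. A tiny enumeration sketch (names ours; note that the geometric thickness-function prefactors discussed next further modify the plain power counting in A):

```python
from itertools import product

structures = [(1, 1), (3, 2), (1, 3)]  # (prefactor, power of A) per nucleus
for (c1, p1), (c2, p2) in product(structures, structures):
    print(f"{c1 * c2} * A^{p1} * A^{p2}")
# prints 1*A^1*A^1, 3*A^1*A^2, ..., 9*A^2*A^2, ..., 1*A^3*A^3 -- the nine terms above
```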
For instance, the first A · A term is just the TPS cross section in NN collisions scaled by A^2:

σ^{TPS,1}_{AA→abc} = A^2 · σ^{TPS}_{NN→abc} ,

whereas the last A^3 · A^3 contribution arises from interactions of partons from three different nucleons in one nucleus with partons from three different nucleons in the other nucleus (i.e., they result from triple nucleon-nucleon scatterings):

σ^{TPS,9}_{AA→abc} = σ^{TPS}_{NN→abc} · σ^2_{eff,TPS} · T_{9,AA} , with T_{9,AA} = ∫ d^2r T_AA^3(r) .

In this latter expression, for simplicity, we omitted the [(A−1)(A−2)/A^2]^2 factor needed to account for the difference between the total number of nucleon triplets and that of different nucleon triplets. The ratio σ^{TPS,1}_{AA→abc}/σ^{TPS,9}_{AA→abc} ≈ [2/(σ_eff T_AA(0))]^2 shows that the “pure” TPS contributions arising from partonic collisions within a single nucleon (which scale as A^2) are negligible compared to triple particle production coming from three independent nucleon-nucleon collisions, which scales as A^6 (r_p/R_A)^4 ∝ A^{14/3}. In the PbPb case, the relative weights of these two “limiting” TPS contributions are 1:40 000, to be compared with 1:200 for the corresponding DPS weights. The many other intermediate terms of Eq. (<ref>) correspond to the various “mixed” parton-nucleon contributions, which can also be written in analytical form in this approach but are suppressed by additional powers of A compared to the dominant nucleon-nucleon triple scattering. Thus, as found in the DPS case, TPS processes in AA collisions are not as useful to derive σ_eff or σ_{eff,TPS}, and thereby to study the intranucleon partonic structure, as they are in pp or pA collisions. The estimates presented here demonstrate that double and triple (hard) nucleon-nucleon scatterings represent a significant fraction of the inelastic hard AA cross section, and the standard Glauber MC provides a simpler approach to compute their occurrence in a given heavy-ion collision.

§ SUMMARY

Multiparton interactions are a major contributor to particle production in proton and nuclear collisions at high center-of-mass energies. The possibility to concurrently produce multiple particles with large transverse momentum and/or mass in independent parton-parton scatterings in a given proton (nucleon) collision increases with √s, and provides valuable information on the poorly known 3D partonic profile of hadrons, on the unknown energy evolution of the parton density as a function of impact parameter b, and on the role of partonic spatial, momentum, flavour, colour, ... correlations in the hadronic wave functions. We have reviewed the factorized framework that allows one to compute the cross sections for the simultaneous perturbative production of particles in double- (DPS), triple- (TPS), and in general n-parton (NPS) scatterings, from the corresponding single-parton scattering (SPS) cross sections in proton-proton, proton-nucleus, and nucleus-nucleus collisions. The basic parameter of the factorized ansatz is an effective cross section parameter, σ_eff, encoding all unknowns about the underlying generalized n-parton distribution function of the proton (nucleon). In the simplest and most phenomenologically useful approach, we have shown that σ_eff bears a simple geometric interpretation in terms of powers of the inverse of the integral of the hadron-hadron overlap function over all impact parameters. Simple recursive expressions can thereby be derived to compute the NPS cross section from the n-th product of the SPS ones, normalized by the (n−1)-th power of σ_eff.
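As a minimal illustration of such a recursive expression, the following sketch evaluates a generic NPS “pocket formula”; the m/n! combinatorial prefactor (which reduces to the (m/2) factor used above for DPS) is our reading of the text and should be adapted to the distinguishability of the final states:

```python
from math import factorial

def nps_pocket_formula(sps_sigmas, sigma_eff, m=1):
    """sigma_NPS = (m/n!) * prod(sigma_i) / sigma_eff^(n-1), all in the same units."""
    n = len(sps_sigmas)
    prod = 1.0
    for s in sps_sigmas:
        prod *= s
    return m * prod / (factorial(n) * sigma_eff ** (n - 1))

# DPS check (n = 2, identical subprocesses, m = 1): (1/2) * sigma_a * sigma_b / sigma_eff
```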
In the case of pp collisions, a particularly simple and robust relationship between the effective DPS and TPS cross sections, σ_{eff,TPS} = (0.82 ± 0.11) × σ_eff, has been extracted from an exhaustive analysis of typical parton transverse distributions of the proton, including those commonly used in Monte Carlo hadronic generators such as PYTHIA 8 and HERWIG.

In proton-nucleus and nucleus-nucleus collisions, the parton flux is augmented by the number A and A^2, respectively, of nucleons in the nucleus (nuclei). The larger nuclear transverse parton density compared to that of protons results in an enhanced probability for NPS processes, coming from interactions where the colliding partons belong to the same nucleon, and/or to two or more different nucleons. Whereas the standard SPS cross sections scale with the mass number A in pA relative to pp collisions, we have found that the DPS and TPS cross sections are further enhanced by factors of order (A + (1/π) A^{4/3}) and (A + (2/π) A^{4/3} + (1/π^2) A^{5/3}), respectively. In the case of pPb collisions, this implies enhancement factors of ∼600 (for DPS) and of ∼1900 (for TPS) with respect to the corresponding SPS cross sections in pp collisions. The relative roles of intra- and inter-nucleon parton contributions to DPS and TPS cross sections in pA collisions have also been derived. In pPb, 1/3 of the DPS yields come from partonic interactions within just one nucleon of the Pb nucleus, whereas 2/3 involve scatterings from partons of two Pb nucleons; for the TPS yields, 10% of them come from partonic interactions within one nucleon, 50% involve scatterings within two nucleons, and 40% come from partonic interactions in three different Pb nucleons. In proton-nucleus collisions, one can thereby exploit the large expected DPS and TPS signals over the SPS backgrounds to study double- and triple-parton scatterings in detail and, in particular, to extract the value of the key σ_eff parameter independently of measurements in pp collisions, given that the corresponding NPS yields in pA depend on the comparatively better-known nuclear transverse density profile.

For heavy ions, the A^2-scaling of proton-proton SPS cross sections becomes ∝ (A^2 + (2/π) A^{7/3} + 1/(2π) A^{10/3}) for DPS cross sections, and includes much larger powers of A (up to A^{14/3}) for TPS processes. In the PbPb case, these translate into many orders-of-magnitude enhancements (e.g., the DPS cross sections are ∼10^7 times larger than the corresponding pp ones). In addition, the MPI probability is significantly enhanced for increasingly central collisions: the impact-parameter dependence of DPS cross sections is basically proportional to the AA nuclear overlap function at a given b. The huge DPS and TPS cross sections expected in AA collisions are, however, clearly dominated by scatterings among partons of different nucleons, rather than by partons belonging to the same proton or neutron. For nuclei with mass numbers A = 40–240, the relative weights of the three components contributing to DPS scattering in AA collisions are 1:2.3:23 (for A = 40) and 1:4:200 (for A = 208). Namely, only 13% (for ^40Ca+^40Ca) or 2.5% (for ^208Pb+^208Pb) of the DPS yields in AA collisions come from diagrams involving partons from one single nucleon. Clearly, the “pure” DPS contributions involving partonic collisions within a nucleon are much smaller than those issuing from two independent nucleon-nucleon collisions.
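These pA enhancement factors and intra/inter-nucleon fractions are straightforward to reproduce numerically (function name ours). The TPS factor evaluates to ≈1.7×10^3, in rough agreement with the ∼1900 quoted above, the residual difference presumably reflecting the exact nuclear profile used:

```python
import math

def pa_nps_enhancements(A):
    """DPS and TPS enhancement factors over the A-scaled SPS cross section in pA."""
    dps = A + A ** (4 / 3) / math.pi
    tps = A + 2 * A ** (4 / 3) / math.pi + A ** (5 / 3) / math.pi ** 2
    return dps, tps

dps, tps = pa_nps_enhancements(208)
print(round(dps), round(tps), round(208 / dps, 2))  # ~600, ~1.7e3, ~0.35 (the "1/3" DPS fraction)
```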
In the TPS case, the relative weights of the two extreme contributions (three parton collisions within two single nucleons versus those from three different nucleon-nucleon collisions) are 1:40 000 for PbPb. The NPS cross sections in AA are practically unaffected by the value of σ_eff and, although DPS and TPS processes account for a significant fraction of the inelastic hard AA cross section, they are not as useful as those in pp or pA collisions to study the partonic structure of the proton (nucleon).

Numerical examples for the cross sections and visible yields expected for the concurrent DPS and TPS production of heavy quarks, quarkonia, and/or gauge bosons in proton and nuclear collisions at the LHC, the FCC, and at ultra-high cosmic-ray energies have been provided. The obtained DPS and TPS cross sections are based on perturbative QCD predictions for the corresponding single inclusive processes at NLO or NNLO accuracy including, when needed, nuclear modifications of the corresponding parton densities. Processes such as double-J/ψ, J/ψΥ, J/ψW, J/ψZ, double-Υ, ΥW, ΥZ, and same-sign WW production have large cross sections and visible event rates for the nominal LHC and FCC luminosities. The study of such processes in proton-nucleus collisions provides an independent means to extract the effective σ_eff parameter characterising the transverse parton distribution in the nucleon. In addition, we have shown that double-J/ψ and double-Υ final states have to be explicitly taken into account in any event-by-event analysis of quarkonia production in heavy-ion collisions. The TPS processes, although not observed so far, have visible cross sections for charm and bottom in pp and pA collisions at LHC and FCC energies. At the highest energies reached in collisions of cosmic rays with the nuclei in the upper atmosphere, the TPS cross section for triple charm-pair production equals the total p-Air inelastic cross section, indicating that all such collisions produce at least three cc̄ pairs in multiple partonic interactions. The results presented here emphasize the importance of having a good understanding of the NPS dynamics in hadronic collisions at current and future colliders, both as genuine probes of QCD phenomena and as backgrounds for searches of new physics in rare final states with multiple heavy particles, and their relevance in our comprehension of ultrarelativistic cosmic-ray interactions with the atmosphere.

§ REFERENCES

[Sjostrand:1987su] T. Sjöstrand and M. van Zijl, Phys. Rev. D 36 (1987) 2019
[Bartalini:2011jp] P. Bartalini et al., arXiv:1111.0469 [hep-ph]
[Abramowicz:2013iva] H. Abramowicz et al., arXiv:1306.5413 [hep-ph]
[Bansal:2014paa] S. Bansal et al., arXiv:1410.6664 [hep-ph]
[Astalos:2015ivw] R. Astalos et al., arXiv:1506.05829 [hep-ph]
[Proceedings:2016tff] H. Jung, D. Treleani, M. Strikman and N. van Buuren (eds.), DESY-PROC-2016-01
[Strikman:2001gz] M. Strikman and D. Treleani, Phys. Rev. Lett. 88 (2002) 031801
[DelFabbro:2003tj] A. Del Fabbro and D. Treleani, Phys. Rev. D 70 (2004) 034022
[Frankfurt:2004kn] L. Frankfurt, M. Strikman and C. Weiss, Annalen Phys. 13 (2004) 665
[Cattaruzza:2004qb] E. Cattaruzza, A. Del Fabbro and D. Treleani, Phys. Rev. D 70 (2004) 034022
[DelFabbro:2004md] A. Del Fabbro and D. Treleani, Eur. Phys. J. A 19S1 (2004) 229
[Cattaruzza:2005nv] E. Cattaruzza, A. Del Fabbro and D. Treleani, Int. J. Mod. Phys. A 20 (2005) 4462
[Treleani:2012zi] D. Treleani and G. Calucci, Phys. Rev. D 86 (2012) 036003
[Blok:2012jr] B. Blok, M. Strikman and U. A. Wiedemann, Eur. Phys. J. C 73 (2013) 2433
[dEnterria:2012jam] D. d'Enterria and A. M. Snigirev, Phys. Lett. B 718 (2013) 1395
[Calucci:2013pza] S. Salvini, D. Treleani and G. Calucci, Phys. Rev. D 89 (2014) 016020
[dEnterria:2014lwk] D. d'Enterria and A. M. Snigirev, Nucl. Phys. A 931 (2014) 303
[dEnterria:2014mzh] D. d'Enterria and A. M. Snigirev, Nucl. Phys. A 932 (2014) 296
[dEnterria:2016yhy] D. d'Enterria and A. M. Snigirev, arXiv:1612.08112 [hep-ph]
[dEnterria:2013mrp] D. d'Enterria and A. M. Snigirev, Phys. Lett. B 727 (2013) 157
[Sjostrand:2017cdm] T. Sjöstrand, arXiv:1706.02166 [hep-ph]
[dEnterria:2010xip] D. d'Enterria et al., Eur. Phys. J. C 66 (2010) 173
[Khachatryan:2010gv] V. Khachatryan et al. [CMS Collaboration], JHEP 1009 (2010) 091
[Aad:2015gqa] G. Aad et al. [ATLAS Collaboration], Phys. Rev. Lett. 116 (2016) 172301
[Abe:1997xk] F. Abe et al. [CDF Collaboration], Phys. Rev. D 56 (1997) 3811
[Abe:1997bp] F. Abe et al. [CDF Collaboration], Phys. Rev. Lett. 79 (1997) 584
[Abazov:2014fha] V. M. Abazov et al. [D0 Collaboration], Phys. Rev. D 89 (2014) 072006
[Abazov:2015nnn] V. M. Abazov et al. [D0 Collaboration], Phys. Rev. D 93 (2016) 052008
[Aaboud:2016dea] M. Aaboud et al. [ATLAS Collaboration], JHEP 1611 (2016) 110
[Aaboud:2016fzt] M. Aaboud et al. [ATLAS Collaboration], Eur. Phys. J. C 77 (2017) 76
[Chatrchyan:2013xxa] S. Chatrchyan et al. [CMS Collaboration], JHEP 1403 (2014) 032
[Khachatryan:2015pea] V. Khachatryan et al. [CMS Collaboration], Eur. Phys. J. C 76 (2016) 155
[Aaij:2015wpa] R. Aaij et al. [LHCb Collaboration], JHEP 07 (2016) 052
[dEnterria:2016ids] D. d'Enterria and A. M. Snigirev, Phys. Rev. Lett. 118 (2017) 122001
[Mangano:2016jyj] M. L. Mangano et al., CERN Yellow Report (2017) no. 3, 1
[Dainese:2016gch] A. Dainese et al., CERN Yellow Report (2017) no. 3, 635
[collinear] J. C. Collins, D. E. Soper and G. F. Sterman, Adv. Ser. Direct. High Energy Phys. 5 (1989) 1
[Snigirev:2016uaq] A. M. Snigirev, Phys. Rev. D 94 (2016) 034026
[Sjostrand:2007gs] T. Sjöstrand, S. Mrenna and P. Z. Skands, Comput. Phys. Commun. 178 (2008) 852
[Seymour:2013qka] M. H. Seymour and A. Siodmok, JHEP 10 (2013) 113
[Blok:2010ge] B. Blok, Yu. Dokshitzer, L. Frankfurt and M. Strikman, Phys. Rev. D 83 (2011) 071501
[MPIbook] See other chapters of this report.
[Luszczak:2011zp] M. Luszczak, R. Maciula and A. Szczurek, Phys. Rev. D 85 (2012) 094034
[Berezhnoy:2012xq] A. V. Berezhnoy et al., Phys. Rev. D 86 (2012) 034017
[Cazaroto:2013fua] E. R. Cazaroto, V. P. Goncalves and F. S. Navarra, Phys. Rev. D 88 (2013) 034005
[Maciula:2017meb] R. Maciula and A. Szczurek, arXiv:1703.07163 [hep-ph]
[DdE] D. d'Enterria, Proceedings of Moriond-QCD (2017), to be submitted
[Czakon:2013goa] M. Czakon, P. Fiedler and A. Mitov, Phys. Rev. Lett. 110 (2013) 252004
[Alekhin:2016uxn] S. Alekhin, J. Bluemlein, S. O. Moch and R. Placakyte, arXiv:1609.03327 [hep-ph]
[fonll] M. Cacciari et al., JHEP 10 (2012) 137
[mnr] M. L. Mangano, P. Nason and G. Ridolfi, Nucl. Phys. B 373 (1992) 295
[dEnterria:2016oxo] D. d'Enterria and T. Pierog, JHEP 08 (2016) 170
[Armesto:2006ph] N. Armesto, J. Phys. G 32 (2006) R367
[d'Enterria:2003qs] D. d'Enterria, nucl-ex/0302016
[deJager] C. W. de Jager, H. de Vries and C. de Vries, Atomic Data and Nuclear Data Tables 14 (1974) 485
[Kulesza:1999zh] A. Kulesza and W. J. Stirling, Phys. Lett. B 475 (2000) 168
[mcfm, Campbell:2011bn] J. Campbell, R. K. Ellis and C. Williams, JHEP 1107 (2011) 018
[Lai:2010vv] H.-L. Lai et al., Phys. Rev. D 82 (2010) 074024
[eps09] K. J. Eskola, H. Paukkunen and C. A. Salgado, JHEP 0904 (2009) 065
[vbfnlo, Arnold:2012xn] K. Arnold et al., arXiv:1207.4975 [hep-ph]
[Paukkunen:2010qg] H. Paukkunen and C. A. Salgado, JHEP 1103 (2011) 071
[Melia:2010bm] T. Melia, K. Melnikov, R. Rontsch and G. Zanderighi, JHEP 1012 (2010) 053
[Vogt:2012vr] R. Vogt, R. E. Nelson and A. D. Frawley, Nucl. Phys. A 910-911 (2013) 231
[Adam:2016ich] J. Adam et al. [ALICE Collaboration], Phys. Rev. C 94 (2016) 054908
[Greisen:1966jv] K. Greisen, Phys. Rev. Lett. 16 (1966) 748
[Zatsepin:1966jv] G. T. Zatsepin and V. A. Kuzmin, JETP Lett. 4 (1966) 78 [Pisma Zh. Eksp. Teor. Fiz. 4 (1966) 114]
[dEnterria:2011twh] D. d'Enterria, R. Engel, T. Pierog, S. Ostapchenko and K. Werner, Astropart. Phys. 35 (2011) 98
[Lokhtin:2000wm] I. P. Lokhtin and A. M. Snigirev, Eur. Phys. J. C 16 (2000) 527
[matsui_satz] T. Matsui and H. Satz, Phys. Lett. B 178 (1986) 416
[Lansberg:2008zm] J. P. Lansberg et al., AIP Conf. Proc. 1038 (2008) 15
[Baranov:2012re] S. P. Baranov, A. M. Snigirev, N. P. Zotov, A. Szczurek and W. Schäfer, Phys. Rev. D 87 (2013) 034035
[Lansberg:2014swa] J. P. Lansberg and H. S. Shao, Phys. Lett. B 751 (2015) 479
[Sun:2014gca] L. P. Sun, H. Han and K. T. Chao, Phys. Rev. D 94 (2016) 074033
[Acosta:2004yw] D. Acosta et al. [CDF Collaboration], Phys. Rev. D 71 (2005) 032001
[Abelev:2012kr] B. Abelev et al. [ALICE Collaboration], Phys. Lett. B 718 (2012) 295
[Aaij:2012ana] R. Aaij et al. [LHCb Collaboration], JHEP 1302 (2013) 041
[Abelev:2012gx] B. Abelev et al. [ALICE Collaboration], JHEP 1211 (2012) 065
[Khachatryan:2010yr] V. Khachatryan et al. [CMS Collaboration], Eur. Phys. J. C 71 (2011) 1575
[Aaij:2011jh] R. Aaij et al. [LHCb Collaboration], Eur. Phys. J. C 71 (2011) 1645
[Andronic:2010dt] A. Andronic, P. Braun-Munzinger, K. Redlich and J. Stachel, J. Phys. G 37 (2010) 094014
http://arxiv.org/abs/1708.07519v1
{ "authors": [ "David d'Enterria", "Alexander Snigirev" ], "categories": [ "hep-ph", "astro-ph.HE", "hep-ex", "nucl-ex", "nucl-th" ], "primary_category": "hep-ph", "published": "20170824181919", "title": "Double, triple, and $n$-parton scatterings in high-energy proton and nuclear collisions" }
k-Nearest Neighbor Augmented Neural Networks for Text Classification

Zhiguo Wang1, Wael Hamza1, Linfeng Song2
1 IBM T.J. Watson Research Center, Yorktown Heights, NY 10598
2 Department of Computer Science, University of Rochester, Rochester, NY 14627
===================================================================================================================================================================================================

In recent years, many deep-learning based models have been proposed for text classification. This kind of model fits the training set well from a statistical point of view, but it lacks the ability to utilize instance-level information from individual instances in the training set. In this work, we propose to enhance neural network models by allowing them to leverage information from the k-nearest neighbors (kNN) of the input text. Our model employs a neural network that encodes texts into text embeddings. Moreover, we also utilize the k-nearest neighbors of the input text as an external memory, and use it to capture instance-level information from the training set. The final prediction is made based on features from both the neural network encoder and the kNN memory. Experimental results on several standard benchmark datasets show that our model outperforms the baseline model on all the datasets, and that it even beats a very deep neural network model (with 29 layers) on several of them. Our model also shows superior performance when training instances are scarce and when the training set is severely unbalanced. Our model also leverages techniques such as semi-supervised training and transfer learning quite well.

§ INTRODUCTION

Text classification is a fundamental task in both natural language processing and machine learning research. Its goal is to assign specific labels to texts written in natural languages. Depending on the definition of the labels, text classification has many practical applications, e.g., sentiment analysis <cit.> and news categorization <cit.>. In recent years, with the renaissance of neural networks, many deep-learning based methods were proposed for text classification tasks <cit.>.
Basically, most of these methods construct some kind of neural network to encode a text into a distributed text embedding, and then predict the category of the text solely based on it. In the training stage, network parameters are optimized on the training set. In the testing stage, the entire training set can be discarded, and only the trained model is used for prediction. This method has achieved state-of-the-art performance in many tasks. However, because it abstracts the training set from a statistical point of view, it cannot utilize instance-level information from individual instances in the training set very well. For example, in the news categorization task, the following news item is annotated with the "Business" category:

"Eastman Kodak Company and IBM will work together to develop and manufacture image sensors used in such consumer products as digital still cameras and camera phones ."

A neural network model with state-of-the-art performance will incorrectly assign it to the "Sci/Tech" category, because, in the training set, up to 1,166 instances about "IBM" are annotated with the "Sci/Tech" category, whereas only 278 instances about "IBM" are annotated with the "Business" category. From the statistical point of view, we cannot blame our model, because the single word "IBM" is a very strong signal for the "Sci/Tech" category. However, when we look at the 278 instances with the "Business" category, we find a highly relevant instance:

"IBM and Eastman Kodak Company have agreed to jointly develop and manufacture image sensors for mass-market consumer products , such as digital still cameras ."

Therefore, if we can make use of the category information of the relevant training instance, we will have a good chance of correcting the error. On the other hand, instance-based (or non-parametric) learning <cit.> provides a good method to capture instance-level information. k-nearest neighbor (kNN) classification is the most representative method, where a prediction is made for a new test instance based only on its kNN. <cit.> showed that better performance can be achieved when combining model-based learning and instance-based learning.

Therefore, in this work, we propose to enhance neural network models with information from kNN. Our model still employs a neural network encoder to abstract global information from the entire training set and to encode texts into text embeddings. Moreover, we also take the kNN of the input text as an external memory, and utilize it to capture instance-level information from the training set. The final prediction is then made based on features from both the neural network encoder and the kNN memory. Concretely, for each input text, we first find its kNN in the training set. Second, a neural network encoder is utilized to encode both the input text and its kNN into text embeddings. Third, based on the text embeddings of the input text and the kNN, we calculate attention weights for each neighbor. Based on these attention weights, the model calculates an attentive kNN label distribution and an attentive kNN text embedding. The final prediction is made based on three sources of features: the text embedding of the input text, the attentive kNN label distribution, and the attentive kNN text embedding. Experimental results on several standard benchmark datasets show that our model outperforms the baseline model on all the datasets, and it even beats a very deep neural network model (with 29 layers) on several datasets. Our model also shows superior performance when training instances
are scarce, and when the training set is severely unbalanced. Our model also leverages techniques such as semi-supervised training and transfer learning quite well.

In the following sections, we start with a description of our model, then evaluate it on several standard benchmark datasets under different experimental settings. We then discuss related work, and finally conclude.

§ MODEL

In this section, we propose a model to capture both global and instance-level information from the training set for text classification tasks. To capture global information, we train a neural network encoder to encode texts into an embedding space based on all training instances and their category information. To capture instance-level information, for each input text, we search for its kNN in the training set, and then take the kNN as an external memory to augment the neural network.

Figure <ref> shows the architecture of our model. The blue flow 1 is the typical method for text classification, where an input text is encoded into a text embedding by a neural network "Text Encoder", and then a prediction is made based on the text embedding. The remaining flows are our kNN memory, which employs the attention mechanism to extract instance-level features for prediction. Formally, given an input text x, its kNN {x_1,...,x_k,...,x_K} and their correct labels y and {y_1,...,y_k,...,y_K}, our task can be formulated as estimating the conditional probability P(y|x,x_1,...,x_K,y_1,...,y_K) based on the training set, and predicting the labels for testing instances by

y^* = argmax_{y ∈ 𝒜(y)} P(y|x,x_1,...,x_K,y_1,...,y_K) ,

where 𝒜(y) is the set of all possible labels.

§.§ Text Encoder

The Text Encoder is a critical component in both typical models and our model. Its task is to encode an input text into a text embedding. Typically, an encoder encodes a text in two steps: (1) a word representation step represents all words in the text as word embeddings or character embeddings <cit.>; and (2) a sentence representation step composes the word embedding sequence into a fixed-length text embedding with Convolutional Neural Network (CNN) <cit.> or Long Short-Term Memory (LSTM) <cit.> models. For example, <cit.>, <cit.>, and <cit.> employed the CNN model to encode texts, <cit.> utilized the LSTM model to represent texts, and <cit.> combined both the CNN and the LSTM.

In this work, we utilize an LSTM network to encode texts. For the word representation step, inspired by <cit.> and <cit.>, we construct a d-dimensional vector for each word with two components: a word embedding and a character-composed embedding. The word embedding is a fixed vector for each individual word, which is pre-trained with GloVe <cit.> or word2vec <cit.>. The character-composed embedding is calculated by feeding each character (also represented as a vector) within a word into an LSTM. For the sentence representation step, we apply a bi-directional LSTM (BiLSTM) to compose the word representation sequence, and then concatenate the two vectors from the last time-step of the BiLSTM (for both the forward and the backward directions) as the final text embedding.

§.§ kNN Memory

The kNN memory is the core component of our model. Its goal is to capture instance-level information for each input text from its kNN. This component includes the following six procedures.

Searching for the kNN This procedure, corresponding to the black flow 2 in Figure <ref>, is to find the kNN of the input text in the training set.
In order to search efficiently over the large training set, we employ a traditional information retrieval method to find the kNN. We first build an inverted index for all texts in the training set with the open source toolkit Lucene (https://lucene.apache.org/). Then, we take the input text as the query, and utilize the simple and effective BM25 ranking function <cit.> to retrieve the kNN from the inverted index.

Encoding the kNN This procedure encodes all the kNN into text embeddings, corresponding to the yellow flow 3 in Figure <ref>. We re-use the Text Encoder described above, and apply it to each of the K neighbors individually.

Calculating the neighbor attention This procedure corresponds to the gray flow 4 in Figure <ref>, and its goal is to calculate similarities (neighbor attentions) between the input text and each of the K neighbors in the embedding space. Formally, let us denote the text embeddings of the input text x and the k-th neighbor x_k as h and h_k, which are l-dimensional vectors calculated by the Text Encoder. Theoretically, any similarity metric would fit here. Inspired by <cit.>, we adopt the effective multi-perspective cosine matching function f_s to compute similarities between the two vectors h and h_k:

s_k = f_s(h, h_k; W) ,

where W ∈ ℝ^{I × l} is a trainable parameter with shape I × l, I is a hyper-parameter controlling the number of perspectives, and the returned value s_k is an I-dimensional vector s_k = [s_k^1,...,s_k^i,...,s_k^I]. Each element s_k^i ∈ s_k is a similarity between h and h_k from the i-th perspective, and it is calculated as the cosine similarity between two weighted vectors:

s_k^i = cosine(W_i ∘ h, W_i ∘ h_k) ,

where ∘ is the element-wise multiplication, and W_i is the i-th row of W, which controls the i-th perspective and assigns different weights to different dimensions of the l-dimensional text embedding space. We set I=1 for the illustration in Figure <ref>, so the neighbor attention there is just a vector and each neighbor has only one similarity to the input text. However, for the experiments in the following sections, we will utilize multiple perspectives (I>1), and each neighbor can have multiple similarities to the input text.

Calculating the attentive kNN label distribution Based on the neighbor attentions, we calculate the attentive kNN label distribution as a weighted sum of the label distributions of all kNN (the green flow 5 in Figure <ref>). Formally, let us denote the label distribution of the k-th neighbor x_k as y_k, which is a one-hot c-dimensional vector encoding the correct label y_k, where c is the number of all possible labels in the classification task. Given the label distributions and the neighbor attentions of all kNN, we calculate the i-th perspective of the attentive kNN label distribution by

ŷ_i = ∑_{k=1}^{K} s_k^i * y_k ,

where * is an operation multiplying the left scalar with each element of the right vector. Then, the final attentive kNN label distribution ŷ is the concatenation of {ŷ_1,...,ŷ_i,...,ŷ_I} from all I perspectives.

Calculating the attentive kNN text embedding Similarly, the attentive kNN text embedding is the weighted sum of the text embeddings of all kNN (the orange flow 6 in Figure <ref>). Given the text embeddings and neighbor attentions of all kNN, we calculate the i-th perspective of the attentive kNN text embedding by

ĥ_i = ∑_{k=1}^{K} s_k^i * h_k .

Then, the final attentive kNN text embedding ĥ is the concatenation of {ĥ_1,...,ĥ_i,...,ĥ_I} from all I perspectives.
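To make these steps concrete, here is a minimal NumPy sketch of the attention and aggregation computations in the equations above, assuming the input embedding h, the neighbor embeddings H, their one-hot labels Y, and the perspective weights W are already available; all names are ours, and a real implementation would use a differentiable framework so that W can be trained end-to-end:

```python
import numpy as np

def knn_memory_features(h, H, Y, W, eps=1e-8):
    """h: (l,) input embedding; H: (K, l) neighbor embeddings;
    Y: (K, c) one-hot neighbor labels; W: (I, l) perspective weights."""
    Wh = W * h                          # (I, l): W_i o h for every perspective i
    WH = W[None, :, :] * H[:, None, :]  # (K, I, l): W_i o h_k
    num = (Wh[None, :, :] * WH).sum(-1)                                  # (K, I)
    den = np.linalg.norm(Wh, axis=-1)[None, :] * np.linalg.norm(WH, axis=-1) + eps
    s = num / den                       # neighbor attentions s_k^i (multi-perspective cosine)
    y_att = (s[:, :, None] * Y[:, None, :]).sum(axis=0).reshape(-1)      # (I*c,)
    h_att = (s[:, :, None] * H[:, None, :]).sum(axis=0).reshape(-1)      # (I*l,)
    return np.concatenate([h, y_att, h_att])  # fed to the fully-connected softmax layer
```

The returned vector corresponds to the three concatenated feature sources used for the final prediction, described next.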
Concatenating all features to make the prediction As the final procedure, we concatenate three sources of features (or vectors): the input text embedding h, the attentive kNN label distribution ŷ, and the attentive kNN text embedding ĥ. Then, a fully-connected layer with the softmax function is applied to make the final prediction.

§ EXPERIMENTS

Datasets We evaluate our model on eight publicly available datasets from <cit.>. Brief descriptions of all datasets follow.

* AG's News: This is a news categorization dataset. All news articles are obtained from the AG's corpus of news articles on the web. Each news article belongs to one of the four labels {World, Sports, Business, Sci/Tech}.
* Sogou News: This is a Chinese news categorization dataset. All news articles are collected from the SogouCA and SogouCS news corpora <cit.>. Each news article belongs to one of the five categories {sports, finance, entertainment, automobile, technology}. All Chinese characters have been transformed into pinyin (a phonetic romanization of Chinese).
* DBPedia: This dataset is designed for classifying Wikipedia articles into 14 ontology classes from DBpedia. Each instance contains the title and the abstract of a Wikipedia article.
* Yelp Review Polarity/Full: This is a sentiment analysis dataset. All reviews are obtained from the Yelp Dataset Challenge in 2015. Two classification tasks are constructed from this dataset. The first one predicts the number of stars the user has given, and the second one predicts a polarity label by considering stars 1 and 2 as negative, and 3 and 4 as positive.
* Yahoo! Answers dataset: This is a topic classification dataset, obtained from the Yahoo! Answers Comprehensive Questions and Answers version 1.0 dataset. Each instance includes the question title, the question content and the best answer, and belongs to one of 10 topics.
* Amazon Reviews Polarity/Full: This is another sentiment analysis dataset. All reviews are obtained from the Stanford Network Analysis Project (SNAP). Similar to the Yelp Review dataset, a Full version and a Polarity version of the dataset are constructed.

The original datasets do not provide development sets. To avoid tuning model parameters on the test sets, for each dataset we build a devset by randomly holding out 500 instances per class from the original training set, and take the remaining instances as our new training set. Table <ref> shows the statistics of all the datasets.

Experiment Settings We initialize word embeddings with the 300-dimensional GloVe word vectors pre-trained on the 840B-token Common Crawl corpus <cit.>. For out-of-vocabulary (OOV) words, we initialize the word embeddings randomly. For the character-composed embeddings, we represent each character with a 20-dimensional randomly-initialized vector, and feed the characters of each word into an LSTM layer to produce a 50-dimensional vector. We set the hidden size to 100 for our BiLSTM Text Encoder. We train the entire model end-to-end, minimizing the cross entropy on the training set. We use the ADAM optimizer <cit.> to update parameters, with the learning rate set to 0.0001. During training, we do not update the pre-trained word embeddings. For all experiments, we iterate over the training set 15 times, and evaluate the model on the devset at the end of each iteration.
Then, we pick the model which works best on the devset as the final model, and all the results on the test sets are obtained with these final models.

§.§ Properties of our Model

There are several hyper-parameters in our model, and their choices may affect the final performance. In this subsection, we conduct some experiments to demonstrate the properties of our model, and select a group of proper hyper-parameters for the subsequent experiments. All experiments in this subsection are conducted on the AG's News dataset, and evaluated on the devset.

First, we study the effectiveness of the three sources of features: the input text embedding (text-embedding), the attentive kNN label distribution (attentive-kNN-label), and the attentive kNN text embedding (attentive-kNN-text). We create seven models according to different feature configurations. For all models, we set K=20 for the number of nearest neighbors, and I=20 for the number of perspectives in our multi-perspective cosine matching function (Equation (<ref>)). Table <ref> shows the corresponding results on the devset, where M1 is our BiLSTM model without the kNN memory, M2/M3/M4 are the models utilizing only the kNN memory, and M5/M6/M7 are the models leveraging both the text encoder and the kNN memory. From this experiment, we get several interesting observations: (1) when comparing M2 with M1, we find that utilizing only the label information from the kNN achieves a performance competitive with the typical BiLSTM model; (2) the performance of M3 is on par with M1, which indicates that features extracted solely from the kNN (but without label information) can represent the input text very well; (3) the performance of M4 shows that considering both the label and the text information from the kNN achieves better performance than the typical BiLSTM model, which demonstrates the effectiveness of our kNN memory; (4) from the results of M5/M6/M7, we can see that combining the kNN memory with our BiLSTM text encoder achieves even better accuracies. The best accuracy is obtained by M7. Therefore, we will use this configuration for the subsequent experiments.

Second, to test the influence of the kNN, we vary K from 1 to 20. Figure <ref> shows the accuracy curve, where K=0 corresponds to the performance of our BiLSTM without the kNN memory. We can see that even with only one neighbor (K=1), our model obtains a significant improvement over the BiLSTM model. When increasing the number of neighbors, the performance improves at the beginning, and then drops once K exceeds 8. One possible reason is that the neighbors become noisier as K increases. In the subsequent experiments, we fix K=5.

Third, we investigate the influence of the hyper-parameter I in our multi-perspective cosine matching function (Equation (<ref>)) by varying I from 1 to 20. We build a baseline by replacing Equation (<ref>) with the vanilla cosine similarity function. Both the baseline and our model with I=1 calculate a single attention weight for each neighbor, but the difference is that our model assigns trainable parameters to each dimension of the embedding space. Figure <ref> shows the accuracy curve, where I=0 corresponds to the performance of this baseline. We find that even when utilizing only one perspective (I=1), our model achieves a significant improvement over the baseline. When increasing the number of perspectives, the accuracy improves at the beginning, and then decreases once I exceeds 5. Therefore, we fix I=5 in the subsequent experiments.
§.§ Comparison with the State-of-the-art Models

We construct two models to evaluate on all of the test sets. The first model is the baseline: our BiLSTM model without the kNN memory (M1 in Table <ref>). The second model is the BiLSTM model with the kNN memory (M7 in Table <ref>). Table <ref> gives the experimental results. We find that, by utilizing the kNN memory, our BiLSTM with kNN model outperforms the baseline on all datasets. Among all the state-of-the-art models, the VDCNN model <cit.> is a very deep network with up to 29 convolutional layers. Our model even beats the VDCNN model on the AG's News, DBPedia and Yahoo! Answers datasets, which shows the effectiveness of our method. Moreover, our kNN memory can easily be incorporated into these more complex neural network models.

§.§ Evaluation in Other Training Setups

To study the behavior of our model, we further evaluate it in some other common training setups. All the experiments in this subsection are conducted on the AG's News dataset, and the accuracies are measured on the devset.

Low-Resource Training Setup In the introduction section, we claimed that the kNN memory captures instance-level information from the training set. To verify this claim, we evaluate our model in a low-resource training setup. We construct a low-resource training set by randomly selecting 10% of all instances for each category from the original training set. Then, we train our "BiLSTM" and "BiLSTM with kNN" models, with the same configurations as before, on this low-resource training set. In Table <ref>, the third column, with the title "Low_Resource Setup", shows the accuracies of our two models. Compared with the models trained on the full training set ("Full Setup"), our "BiLSTM with kNN" model drops only 4.4 percent, which is lower than the 6.7 percent drop of our "BiLSTM" model. This result shows that our kNN memory can capture instance-level information to remedy the shortage of training data in the low-resource setting.

Unbalanced Training Setup In this subsection, we evaluate our model on an unbalanced training set, a scenario not uncommon in text classification. To severely skew the label distribution, we construct an unbalanced training set by randomly selecting 2,000 instances for the World category, 4,000 instances for the Sports category, 8,000 instances for the Business category and 16,000 instances for the Sci/Tech category. We train both our BiLSTM and proposed models (with the same configuration as before) on this unbalanced training set. The last column of Table <ref> shows the accuracies. The BiLSTM model shows a severe (43.3%) degradation of accuracy in the unbalanced setting. On the other hand, our BiLSTM with kNN model drops only 4.0% when trained on the unbalanced training set. We believe this is because our model can capture instance-level information from the unbalanced training set.

Semi-supervised Training and Transfer Learning So far, we have verified the effectiveness of kNN retrieved from the same training set.
What if we search for the kNN in a dataset from a completely different task? If we consider only the text (without the label) information of the kNN from a different task, then this becomes a semi-supervised training setup. If we utilize the label information of the kNN from a different task, where the definition of the labels is quite different from the task at hand, then this becomes a transfer learning setup. To reveal the behavior of our model in these two setups, we take the training set of the DBPedia dataset as the corpus from which to find the kNN for each text in the AG's News dataset. We construct four models: "BiLSTM" using the "M1" configuration in Table <ref>, "Semi-supervised Training" using the "M6" configuration, "Transfer Learning (M5)" using the "M5" configuration, and "Transfer Learning (M7)" using the "M7" configuration. Note that here the "attentive-kNN-text" and "attentive-kNN-label" features are extracted from kNN drawn from the DBPedia corpus, which belongs to a different classification task. Table <ref> gives the results. We can see that our model achieves improvements over the baseline (BiLSTM) in both the Semi-supervised Training and the Transfer Learning setups.

§.§ Qualitative Analysis

We perform qualitative analysis by looking at some instances incorrectly predicted by the baseline (BiLSTM) but corrected by adding the kNN memory (BiLSTM with kNN). First, for the error illustrated in the introduction section, our model corrected it as expected. Another example is "President George W. Bush 's campaign website was inaccessible from outside the United States ." with the correct label Sci/Tech. In the training set, the appearances of the phrase "George W. Bush" in each category number 1,659 (World), 410 (Business), 67 (Sports) and 299 (Sci/Tech), and the frequencies of the word "President" in each category are 4,486 (World), 1,021 (Business), 233 (Sports) and 554 (Sci/Tech). Based on these strong signals, the BiLSTM baseline assigned it the incorrect label World. On the other hand, our BiLSTM with kNN corrected it, because there is a very similar neighbor in the training set: "President George W. Bush 's official re-election website was down and inaccessible for hours , in what campaign officials said could be the work of hackers ." with the label Sci/Tech.

§ RELATED WORK

In recent years, there have been several studies trying to augment neural network models with external memories.
Generally, these models utilize the attention mechanism to access useful information outside the model itself (the so-called external memory). For the machine translation task, <cit.> introduced the attention mechanism to access source-side encoding information while generating the target sequence. For the question answering task, <cit.> proposed the memory network to access all supporting sentences before generating the correct answer word. <cit.> designed a more complicated "computer-like" external memory to simulate the Turing Machine. Our model also belongs to this group, but with the distinction that we construct the external memory from the K nearest neighbors of the input text, and utilize a multi-perspective attention model.

<cit.> proposed a matching network for the one-shot learning task. Their model also classifies the input instance by utilizing a labeled support set, as our model does. However, our model differs from theirs in two ways. First, they assume the labeled support set is given beforehand, whereas our model searches for the kNN independently based on the input instance. Second, their model only utilizes the label information of the support set for prediction, whereas our model makes use of information from both the input text and the kNN.

Our model follows an old idea of combining model-based (or parametric) learning and instance-based (or non-parametric) learning <cit.>. We infuse this old idea with modern neural networks and the attention mechanism.

§ CONCLUSION

In this work, we enhanced neural networks with a kNN memory for text classification. Our model employs a neural network encoder to abstract information from the entire training set, and utilizes the kNN memory to capture instance-level information. The final prediction is made based on features from both the input text and the kNN. Experimental results on several standard benchmark datasets show that our model outperforms the baseline model on all the datasets, and it even beats a very deep neural network model (with 29 layers) on several datasets. Our model also shows superior performance when training instances are scarce, and when the training data is severely unbalanced. Our model also leverages techniques such as semi-supervised training and transfer learning quite well.
http://arxiv.org/abs/1708.07863v1
{ "authors": [ "Zhiguo Wang", "Wael Hamza", "Linfeng Song" ], "categories": [ "cs.CL", "cs.AI" ], "primary_category": "cs.CL", "published": "20170825190425", "title": "$k$-Nearest Neighbor Augmented Neural Networks for Text Classification" }
[email protected] Address: Physics Department, Gustavus Adolphus College, 800 College Ave, St Peter, MN 56082 [email protected] Department of Physics, Purdue University, West Lafayette, IN 47907

In a previous work [Dillon and Nakanishi, Eur. Phys. J. B 87, 286 (2014)], we calculated the transmission coefficient of the two-dimensional quantum percolation model and found there to be three regimes, namely, exponentially localized, power-law localized, and delocalized. However, the existence of these phase transitions remains controversial, with many other works divided between those which claim that quantum percolation in 2D is always localized, and those which assert there is a transition to a less localized or delocalized state. It stood out that many works based on highly anisotropic two-dimensional strips fall in the first group, whereas our previous work and most others in the second group were based on an isotropic square geometry. To better understand the difference between our results and those based on strip geometry, we apply our direct calculation of the transmission coefficient to highly anisotropic strips of varying widths at three energies and a wide range of dilutions. We find that the localization length of the strips does not converge at low dilution as the strip width increases toward the isotropic limit, indicating the presence of a delocalized state for small disorder. We additionally calculate the inverse participation ratio of the lattices and find that it too signals a phase transition from delocalized to localized near the same dilutions.

Two-dimensional quantum percolation on anisotropic lattices
Hisao Nakanishi
December 30, 2023
===========================================================

§ INTRODUCTION

Quantum percolation (QP) is one of several models used to study transport in disordered systems. It is an extension of classical percolation, in which sites/bonds are removed with some probability q. In the classical percolation model, transmission on a disordered lattice depends solely on the connectivity of the lattice: the lattice is transmitting only if there exists some connected path across it. As the dilution increases (but remains below threshold), the probability of finding a connected path between any two arbitrarily chosen points decreases, even though a spanning path across the lattice is still guaranteed; at some critical dilution (q ≈ 41% in site percolation and q = 50% in bond percolation, on the square lattice) there is zero chance of finding any connected path across the lattice and the system is insulating. In the quantum percolation model, the particle traveling through the lattice is a quantum one, and thus quantum mechanical interference also influences transmission; in fact, even in a perfectly connected lattice (q = 0) one may have partial transmission, depending on the energy of the particle and the boundary conditions of the lattice. Because of these interference effects, localization will occur at a lower dilution than in the classical model, if at all.

The question of whether disorder prevents conduction, especially in two dimensions, is one which spans decades, beginning with the introduction of the Anderson model in 1958.<cit.> A few decades later, Abrahams et al used one-parameter scaling theory to show that in the Anderson model, any amount of disorder prevents conduction for d ≤ 2.
<cit.>This result was confirmed through various studies in the years following, with some exceptionsfor special cases (see for example Refs lee:1981 and eilmes:2001 andreferences therein).Due to its similarities with the Anderson model, it was initially believed that the quantumpercolation model would likewise lack a transition to a delocalized state in two dimensions.However, this proved to not necessarily be the case, and some controversy arose over whatphase transitions may exist for the 2D QP model. Some found there to be only localizedstates <cit.>, while others found a transitionbetween strongly and weakly localized states but no delocalizedstates. <cit.> More recent work,including previous work by this group, has found that a localized-delocalized phase transitiondoes in fact exist for the quantum percolation model at finite disorder in twodimensions. <cit.> Most recently, present authors determined a detailed phase diagram showing a delocalized phaseat low dilution for all energies 0 < E ≤ 1.6, with a weak power-law localized stateat higher dilutions and an exponentially localized state at still higherdilutions.<cit.> Among the various calculations employed to study the quantum percolation model, it stands outthat most works based on two-dimensional, highly anisotropic strips yield results supportingone-parameter scaling's prediction of only localized states, whereas our calculationsin Ref. dillon:2014 and most others finding a delocalized state were based onan isotropic square geometry. One of the studies in the first group was by Soukoulis and Grest,who used the transfer matrix method to determine the localization length λ_M oflong, thin, quasi-one-dimensional strips of varying width M, after which they used finite sizescaling to determine the localization length λ in the two-dimensional limit and thusthe phase of the system (Ref. soukoulis:1991, see Ref. soukoulis:1982for more detail on the transfer matrix method in two dimensions). It is worth noting thatDaboul et al., who found at most weakly localized states on isotropic lattices, also founddisagreement with Soukoulis and Grest's results; they noted that the extreme anisotropy of thestrips used in the transfer-matrix method might overly influence its results toward a more 1-Dgeometry than 2-D.<cit.> Aside from geometry, Soukoulis and Grest also differ fromour previous work in that they only examined dilutions within the range 0.15 ≤ q ≤ 0.50,the lower limit of which is very close to the delocalization phase boundary we foundin Ref. dillon:2014, which could explain why they did not find any delocalizedstates.To better understand the differences between our results and theirs, in this work weapply our direct calculation of the transmission coefficient to the quasi-1D scaling geometryused by Soukoulis and Grest over the same energies E and widths M they used, but over alarger range of dilutions extending into lower dilutions than those they examined. Weadditionally examine the inverse participation ratio of the lattices, which, when extrapolated tothe thermodynamic limit, provides another indicator of localization.We start from the approach used by Dillon and Nakanishi <cit.> using the quantumpercolation Hamiltonian with off-diagonal disorder and zero on-site energyH = ∑_<ij> V_ij|i⟩⟨ j| + h.c where |i> and |j> are tight binding basis functions and V_ij is a binaryhopping matrix element between sites i and j which equals a finite constant V_0 ifboth sites are available and nearest neighbors and equals 0 otherwise. 
As in previous works, we normalize the system energy and use V_0 = 1. We realize this model on an anisotropic square lattice of varying widths M and lengths N, to which we attach semi-infinite input and output leads at diagonally opposite corners, and which we randomly dilute by removing some fraction q of the sites, thus setting their corresponding V_ij to zero. N is chosen such that N = 10M at minimum, to obtain a quasi-one-dimensional geometry, and such that the diagonally opposite sites have the same parity in order to maintain the symmetry of the input and output leads as N is varied for the same M (i.e., the leads are always on the same sublattice). The wavefunction for the entire lattice plus input and output leads can be calculated by solving the Schrödinger equation

Hψ = Eψ , with ψ = [ ψ⃗_in ; ψ⃗_cluster ; ψ⃗_out ] ,

where ψ⃗_in = {ψ_-(n+1)} and ψ⃗_out = {ψ_+(n+1)}, n = 0, 1, 2, …, are the input and output lead parts of the wave function, respectively. Using an ansatz by Daboul et al <cit.>, we assume that the input and output parts of the wavefunction are plane waves:

ψ_in → ψ_-(n+1) = e^{-inκ} + r e^{inκ} ,
ψ_out → ψ_+(n+1) = t e^{inκ} ,

where r is the amplitude of the reflected wave and t is the amplitude of the transmitted wave. This ansatz reduces the infinite-sized problem to a finite one including only the main M × N lattice and the nearest input and output lead sites, for the wavevectors κ that are related to the energy E by:

E = e^{-iκ} + e^{iκ} .

Note that the plane-wave energies are thereby restricted to the one-dimensional range of -2 ≤ E ≤ 2 rather than the full two-dimensional energy range -4 ≤ E ≤ 4; nonetheless this energy range has been sufficient for us to observe the localization behavior of the wavefunction in prior work.

The reduced Schrödinger equation after applying the ansatz can be written as an (M·N + 2) × (M·N + 2) matrix equation of the form

[ -E + e^{iκ}   c⃗_1^t         0           ] [ 1 + r    ]   [ e^{iκ} - e^{-iκ} ]
[ c⃗_1          A              c⃗_2         ] [ ψ⃗_clust ] = [ 0⃗               ]
[ 0             c⃗_2^t         -E + e^{iκ} ] [ t        ]   [ 0                ]

where A is an (M·N) × (M·N) matrix representing the connectivity of the cluster (with -E as its diagonal components), c⃗_i is the (M·N)-component vector representing the coupling of the leads to the cluster sites, and ψ⃗_clust and 0⃗ are also (M·N)-component vectors, the former representing the wavefunction solutions (e.g. on sites a-h in Fig. <ref>). The cluster connectivity in A is represented with V_ij = 1 in positions A_ij and A_ji if i and j are connected, otherwise 0.

Eq. <ref> is the exact expression for a 2D system connected to semi-infinite chains with continuous eigenvalues within the range -2 ≤ E ≤ 2, as discussed above. The transmission (T) and reflection (R) coefficients are determined by T = |t|^2 and R = |r|^2.

From the wavefunction solutions of Eq. <ref> we also calculate the Inverse Participation Ratio (IPR), which measures the fractional size of the particle wavefunction across the lattice and gives a picture of the transport complementary to that provided by the transmission coefficient alone. The IPR is defined here by:

IPR = 1 / ( M·N · ∑_i |ψ_i|^4 ) ,

where ψ_i is the amplitude of the normalized wavefunction on site i of the main-cluster portion of the lattice, and M·N is the size of the lattice. (For our model, we have chosen to normalize the IPR by the lattice size rather than the connected-cluster size, as is sometimes done, since the lattice size is the fixed parameter, and doing so allows better comparison between different sizes when extrapolating to the thermodynamic limit.)
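As an illustration of the method, the following self-contained sketch (all names ours) builds the reduced matrix equation for one random dilution realization, solves it for t, and evaluates T = |t|^2 and the IPR. The corner-to-corner lead attachment is implemented in a simplified way, and the even/odd parity bookkeeping of N is left to the caller:

```python
import numpy as np

def transmission_and_ipr(M, N, q, E, seed=0):
    """Solve the (M*N+2)x(M*N+2) reduced system for (1+r, psi_cluster, t); |E| < 2, E != 0."""
    rng = np.random.default_rng(seed)
    kappa = np.arccos(E / 2.0)            # from E = e^{i kappa} + e^{-i kappa} = 2 cos(kappa)
    occ = rng.random((M, N)) >= q         # randomly dilute a fraction q of the sites
    n = M * N
    idx = lambda i, j: 1 + i * N + j      # cluster site -> matrix index (0 and n+1 are the leads)
    mat = np.diag(np.full(n + 2, -E, dtype=complex))
    mat[0, 0] = mat[n + 1, n + 1] = -E + np.exp(1j * kappa)
    for i in range(M):                    # V_ij = 1 between occupied nearest neighbors
        for j in range(N):
            if occ[i, j]:
                if j + 1 < N and occ[i, j + 1]:
                    mat[idx(i, j), idx(i, j + 1)] = mat[idx(i, j + 1), idx(i, j)] = 1.0
                if i + 1 < M and occ[i + 1, j]:
                    mat[idx(i, j), idx(i + 1, j)] = mat[idx(i + 1, j), idx(i, j)] = 1.0
    mat[0, idx(0, 0)] = mat[idx(0, 0), 0] = 1.0                          # input lead corner
    mat[n + 1, idx(M - 1, N - 1)] = mat[idx(M - 1, N - 1), n + 1] = 1.0  # output lead corner
    rhs = np.zeros(n + 2, dtype=complex)
    rhs[0] = np.exp(1j * kappa) - np.exp(-1j * kappa)
    sol = np.linalg.solve(mat, rhs)
    t, psi = sol[-1], sol[1:-1]
    psi = psi / (np.linalg.norm(psi) + 1e-300)   # normalize the cluster wavefunction
    return abs(t) ** 2, 1.0 / (np.sum(np.abs(psi) ** 4) * n)
```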
It should be noted that our ψ⃗ for a given E is a continuum eigenstate of the system containing the 1D lead chains, and ψ⃗_clust is expected to correspond to a mixed state consisting of eigenstates of the middle square portion of the lattice. We see that, given two lattices of the same size, the one with the smaller IPR has the particle wavefunction residing on a smaller number of sites, though the precise geometric distribution cannot be known from the IPR alone. The IPR is often used to assess localization by extrapolating it to the thermodynamic limit; if the IPR approaches a constant fraction of the entire lattice, there are extended states, whereas if it decays to zero the states are localized.

The remainder of this paper is organized as follows. In Section II, we study the transmission coefficient curves on the highly anisotropic lattices, scaling first to the quasi-one-dimensional limit and then to the two-dimensional limit, and find a delocalization-localization transition consistent with our previous results. In Section III, we examine the inverse participation ratio as a function of lattice size, and find that the IPR's behavior also indicates that delocalized states are possible. In Section IV, we summarize and analyze our results.

§ TRANSMISSION COEFFICIENT FITS

We calculated the transmission T at the energies E = 0.05, 0.25, and 1.05 for anisotropic square lattices of width M = 8, 16, 32, 64, and 128, lengths N within the range 10M ≤ N ≤ 200M, and dilutions 2% ≤ q ≤ 50%. The energies, widths, and dilutions were chosen to match those studied by Ref. soukoulis:1991, though additional dilutions q < 15% were incorporated since our previous work showed a delocalized region at low dilution. The values of N were chosen as a·M, where a is an arbitrary/odd integer for even/odd M, so that N always has the same parity as M and the input and output leads connected to the corners are always on the same sublattice of the bipartite lattice as N is increased. This ensures that the transport symmetry is not disturbed as N is changed for a given M. In our case only even widths M were examined. The upper limit of N was determined by computational limitations, since for large N and q the transmission was small enough to underflow to 0. For most dilutions this occurred for N ≥ 200M, though for larger energies the cut-off was lower. Despite these computational limitations in calculating T, we found that the transmission dropped off with N sufficiently smoothly and quickly that an accurate fit of the transmission could be found for all but the highest dilutions (q ≥ 30%), all of which fall well above the delocalization-localization phase boundary found in Ref. dillon:2014.

From the transmission coefficients for lattices with the above parameters we are able to determine the localization length λ_M of the various strips of width M, and extrapolate the results in the isotropic limit to find the 2D localization length λ. First, we plot transmission T vs lattice length N for each width M and energy E, for each dilution q that had a sufficient number of points to establish a fit for the transmission (2% ≤ q ≤ 25% for smaller M and larger E). All dilution curves decay exponentially (T = a·exp(-bN)), as is to be expected given the highly anisotropic quasi-1D lattices. An example of the fitted T vs N curves for E = 1.05 and M = 64 at selected dilutions is given in Fig. <ref>. The other (E, M) pairs have similar transmission curves.
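A minimal sketch of these exponential fits, implemented as a linear fit of ln T vs N and already organized as the successive-fitting procedure described next (names ours):

```python
import numpy as np

def successive_exp_fits(Ns, Ts, n_min=6):
    """Fit ln T = ln a - b*N over the first n points for n = n_min..len(Ns);
    the fitted b vs N_max should stabilize to b_M as described below."""
    Ns, logTs = np.asarray(Ns, float), np.log(np.asarray(Ts, float))
    out = []
    for n in range(n_min, len(Ns) + 1):
        slope, _ = np.polyfit(Ns[:n], logTs[:n], 1)
        out.append((Ns[n - 1], -slope))   # b = -slope of ln T vs N
    return np.array(out)
```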
To determine the inverse localization length b_M ∝ 1/λ_M in the quasi-one-dimensional thermodynamic limit of a strip of width M and length N → ∞, we use the successive fitting procedure described in Ref. dillon:2014, Section III. For each T vs N curve, we fit just the first 6 points, then the first 7 points, etc., until all points have been added, saving the parameter b from each exponential fit. We then plot the saved b vs N_max, where N_max is the maximum length included in the fit resulting in that value of b, and fit this new curve to find the non-zero value b_M that the curve stabilizes to as N_max → ∞.

Extrapolating b_M to the 2D isotropic limit determines the localization of the system: if b_M → 0 (λ_M → ∞) as M → ∞, the system is delocalized, but if b_M → b_∞ (λ_M → λ), where b_∞ is some finite constant, then the system is localized. We fit b_M vs M for each E and q and find that for E = 1.05 and q ≤ 12%, E = 0.25 and q ≤ 15%, and E = 0.05 and q ≤ 8%, a fit which decays to zero is the best fit, whereas above these dilutions a fit with a constant offset b_∞ fits the b_M vs M curves better, as shown for example at E = 1.05 in Fig. <ref>. Thus there is in fact a delocalized phase at each energy for these low values of disorder. Moreover, while our prior work did not study these energies specifically, the upper bounds for delocalization found here correspond roughly to those found in Ref. dillon:2014; an exact match is not expected due to the different geometry. At the dilutions and energies for which b_M stabilizes to a non-zero value b_∞, we calculate the localization length by λ = 1/b_∞.

To most easily compare our results with those of Soukoulis and Grest, we plot λ_M/M vs λ/M (see Fig. <ref>a) as in their Fig. 1 from Ref. soukoulis:1991. There are two noticeable differences and one important similarity between their results and ours. First, our figure has no points in the lower left for the smaller values of λ and λ_M. This area is where the small localization lengths at high dilution should be, and their absence is simply a result of the computational limitations of our transmission fit technique, described at the beginning of this section. Secondly, the localization lengths we do have do not exactly overlap in their values with those of Soukoulis and Grest, nor do they all collapse neatly onto one curve. These might partially be explained by a missing factor in our transmission fits; when determining the localization length, we assumed that in the fit T = a·exp(-b_M N), b_M = 1/λ_M, when it may be that b_M = c/λ_M, where c is another constant which may depend on M. If there is such a constant, the vertical position of our localization points may be shifted from their true values. Additionally, the λ determined by Soukoulis and Grest were a fitting parameter chosen to induce the points to collapse onto one curve, following the scaling procedure outlined in Refs. mackinnon:1981 and soukoulis:1982, whereas our values of λ were determined independently. Our λ do have error bars (omitted from Fig. <ref> to avoid cluttering the figure), and choosing different λ within the bounds of our fit estimates at E = 1.05 and E = 0.25, for instance, yields a somewhat better collapse of those energies' localization lengths onto one curve (see Fig.
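As a hedged illustration of this successive-fit bookkeeping (a sketch, not the analysis script actually used), the exponential fit T = a·exp(-bN) over a growing window can be done linearly in log space; the plateau of the resulting b vs N_max curve is then fit separately to extract b_M:

```python
import numpy as np

def successive_b(N, T, n_min=6):
    """Fit T = a*exp(-b N) to the first n_min, n_min+1, ... points of a
    transmission curve; returns (N_max, b) pairs for the b vs N_max plot."""
    N, logT = np.asarray(N, float), np.log(np.asarray(T, float))
    pairs = []
    for m in range(n_min, len(N) + 1):
        slope, _ = np.polyfit(N[:m], logT[:m], 1)   # log T = log a - b N
        pairs.append((N[m - 1], -slope))
    return np.array(pairs)
```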
<ref>b). Despite the differences in our two figures, however, there is one significant similarity. That is, in the dilution range 15% ≤ q ≤ 20%, for which our work does have overlap with the dilutions studied by Soukoulis and Grest, our localization lengths fall within the same order of magnitude as those found by Soukoulis and Grest. They are not precisely the same, but they are not wildly different, either. This gives us confidence that our technique is yielding the same results as theirs, leading us to believe that they simply did not look at small enough dilutions to see a transition, relying instead on an extrapolation that may not be justified.

§ INVERSE PARTICIPATION RATIO CALCULATIONS

To corroborate our finding of delocalized states at small disorder on the anisotropic quasi-1D strips, we also examined the Inverse Participation Ratio (IPR) as the system size increases. We observe that even at the largest and most anisotropic lattice studied, M = 128, N = 200M, the IPR distribution of all realizations at small dilution has a distinct peak at IPR ≠ 0, and the average value is greater than 1/M, which is the minimum fraction required to span the lattice, whereas at large dilution the peak is near zero (Fig. <ref>). This seems to hint at the transition we observed in the previous section. When we plot the average IPR vs N for fixed width (as in Fig. <ref>), we also see that the average IPR decreases much less rapidly than the corresponding transmission T (compare Fig. <ref>), with the IPR at large N remaining well above zero (in fact, the average IPR does not have the problem with computational underflow that the transmission does at large dilution). This illustrates the fact that while the transmission and IPR are related, they do represent different ways of examining localization. It is entirely possible to have many realizations with clusters spanning the lattice that are connected to the input site and to an edge site on the opposite end that is not the output site, as illustrated in the hypothetical example in Fig. <ref>. If this is the case, the average IPR (which is measured over the entire lattice irrespective of the input) would be nonzero while the transmission (which is measured only corner-to-corner) would be very close to zero.

When we fit the average IPR vs N curves at fixed M, we observe an interesting trend in the curves as dilution increases. Surprisingly, the IPR vs N at low dilution can be fit very well to a curve with a nonzero offset that we call IPR_M. An example at E = 1.05 and M = 64 is shown in Fig. <ref>. This is unexpected, since in the 1D limit we know the states are localized, but it can be explained as a result of our including lengths only up to N = 200M due to the computational limitations in T. However, as q increases to large disorder, we find that IPR vs N is best fit by a curve decreasing smoothly to zero. This change hints at the phase transition we observed in the 2D isotropic limit in Section II: if we included still longer lengths N toward the 1D limit, we should see the IPR → 0 since the 1D limit is purely localized, but if we extrapolate toward an isotropic case, we should see the average IPR approach a finite value for those dilutions at which we found delocalization. To capture the latter situation, we plot the offset terms IPR_M from the lower dilution fits vs M (see Fig. <ref> for an example at E = 1.05).
Again, we find that at small dilution the IPR_M vs M curve is best fit by a curve with an offset (in this case, a power law with offset), meaning the IPR grows in proportion to the width and stabilizes to a nonzero fraction of the lattice as we scale toward the isotropic 2D limit. On the other hand, as we increase the dilution, the IPR_M vs M curves eventually are better fit by a pure power law, meaning that although the anisotropic lattices may have had spanning clusters, these clusters do not grow proportionally with M and eventually become disconnected from the output edge, resulting in localization. For E = 1.05 this shift to a pure power-law fit occurs at q ≥ 15%, for E = 0.25 at q ≥ 15%, and for E = 0.05 at q ≥ 8%. The results of the IPR study demonstrate that there are spanning clusters in the isotropic limit at low dilution, meaning there are indeed delocalized states at these dilutions, with a transition to a localized state (isolated clusters) at sufficiently high disorder. Moreover, the transition to localized states as disorder increases occurs at or very near the same dilutions at which we found a transition using the transmission calculations.

§ SUMMARY AND CONCLUSIONS

We have studied the quantum percolation model on highly anisotropic two-dimensional lattices, scaling toward the isotropic two-dimensional case (studied in previous works) to determine the localization state and localization length in the thermodynamic limit. We determined the localization length by a two-step process in which we first determined the inverse localization length b_M = λ_M^{-1} of the anisotropic strips by extrapolating N → ∞, then extrapolated λ_M to the localization length λ of the isotropic system from the trend of the b_M as M → ∞. Although the transmission calculations only allow us to study a limited range of dilutions effectively due to computational limitations, we nonetheless were able to detect a phase transition at specific dilutions, above which the M-width strip inverse localization lengths b_M converged to a finite value, and below which they decayed to zero, indicating an infinite, lattice-spanning extended state. The locations of the phase transitions are consistent with the phase boundaries found in our previous work (Ref. dillon:2014), but their existence is in contradiction to the results predicted, e.g., by Soukoulis and Grest in their transfer-matrix studies of quantum percolation. <cit.> This contradiction can be resolved by observing that they only studied dilutions above q = 15%, which is above the delocalization-localization phase boundary found in this work and our prior work. The localization lengths found in this work for dilutions within the localized region fall within the same order of magnitude as those found by Soukoulis and Grest at the lower end of the range of dilutions they studied, leading us to believe that they simply did not look at small enough dilutions, thus missing the phase transition.

We additionally checked the localization state of the anisotropic strips by studying the inverse participation ratio of the lattices, which tells us what fraction of sites sustain the particle wavefunction. We find that even on narrow anisotropic strips, at small dilution the average IPR shows a distinct peak away from zero at a value large enough to span the lattice, while at large dilutions the peak is near zero.
When we scale toward the isotropic limit, we find that the IPR vanishes for large dilution, indicative of localization, while it approaches a finite value for low dilution, indicative of a delocalized state. Furthermore, the dilutions above which the inverse participation ratio vanishes in the isotropic limit match the phase boundaries found in the first part of the paper.

The results of our work in this paper serve two purposes. First, by using the same basic technique (transmission coefficient and inverse participation ratio measurements) as our previous work on a different geometry – that is, highly anisotropic lattices scaled to the 2D thermodynamic limit – we obtain the same delocalization-localization phase boundary results, showing that the phase transition found previously was not dependent on using isotropic geometry. Secondly, by using the same geometry as Soukoulis and Grest (and other works that use the transfer-matrix approach), we found overlap between our localization length results and theirs at higher dilutions, but also examined smaller dilutions and found a delocalized state, leading us to suspect that their extrapolation of the results from 15% ≤ q ≤ 50% toward even smaller dilutions was not warranted. Had our localization lengths within their range dramatically differed from theirs, we would perhaps conclude that the differing techniques used led to the difference in whether a delocalized state was found, but as we have shown, this seems not to be the case.

§ ACKNOWLEDGEMENTS

We thank the Purdue Research Foundation and the Purdue University Department of Physics for financial support, and the latter for generously providing computing resources.

anderson:1958 P. W. Anderson, Phys. Rev. 109, 1492 (1958).
abrahams:1979 E. Abrahams, P. W. Anderson, D. C. Licciardello and T. V. Ramakrishnan, Phys. Rev. Lett. 42, 673 (1979).
lee:1981 P. A. Lee and D. S. Fisher, Phys. Rev. Lett. 47, 882 (1981).
eilmes:2001 A. Eilmes, R. A. Römer, and M. Schreiber, Physica B 296, 46 (2001).
soukoulis:1991 C. M. Soukoulis and G. S. Grest, Phys. Rev. B 44, 4685 (1991).
mookerjee:1995 A. Mookerjee, I. Dasgupta, and T. Saha, Int. J. Mod. Phys. B 9 (23), 2989 (1995).
haldas:2002 G. Hałdaś, A. Kolek, and A. W. Stadler, Phys. Status Solidi B 230, 249 (2002).
odagaki:1984 T. Odagaki and K. C. Chang, Phys. Rev. B 30, 1612 (1984).
srivastava:1984 V. Srivastava and M. Chaturvedi, Phys. Rev. B 30, 2238 (1984).
koslowski:1990 Th. Koslowski and W. von Niessen, Phys. Rev. B 42, 10342 (1990).
daboul:2000 D. Daboul, I. Chang, and A. Aharony, Eur. Phys. J. B 16, 303 (2000).
islam:2008 M. F. Islam and H. Nakanishi, Phys. Rev. E 77, 061109 (2008).
schubert:2008 G. Schubert and H. Fehske, Phys. Rev. B 77, 245130 (2008).
schubert:2009 G. Schubert and H. Fehske, in Quantum and Semi-classical Percolation and Breakdown in Disordered Solids, edited by A. K. Sen, K. K. Bardhan, and B. K. Chakrabarti (Springer, New York, 2009), pp. 163–189.
gong:2009 L. Gong and P. Tong, Phys. Rev. B 80, 174205 (2009).
nazareno:2002 H. N. Nazareno, P. E. de Brito, and E. S. Rodrigues, Phys. Rev. B 66, 012205 (2002).
dillon:2014 B. S. Dillon and H. Nakanishi, Eur. Phys. J. B 87, 286 (2014).
soukoulis:1982 C. M. Soukoulis, I. Webman, G. S. Grest, and E. N. Economou, Phys. Rev. B 26, 1838 (1982).
mackinnon:1981 A. MacKinnon and B. Kramer, Phys. Rev. Lett. 47, 1546 (1981).
In this paper, we derive the sharp lower and upper bounds of nodal lengths of Laplacian eigenfunctions in the disc. Furthermore, we observe a geometric property of the eigenfunctions whose nodal curves maximize the nodal length.

§ INTRODUCTION

On an n-dimensional smooth and compact Riemannian manifold (M, g), let Δ = Δ_g be the Laplacian and u be an eigenfunction with eigenvalue λ, i.e. -Δu = λu. If M has smooth boundary, we impose Dirichlet or Neumann boundary conditions. Yau <cit.> conjectured that

c_1√λ ≤ ℋ^{n-1}(Z(u)) ≤ c_2√λ

for some constants 0 < c_1, c_2 < ∞ depending on (M, g) and independent of the eigenvalues λ → ∞. Here, ℋ^{n-1} denotes the (n-1)-dimensional Hausdorff measure and Z(u) = {x ∈ M : u(x) = 0} denotes the nodal set of the function u. If the metric g is analytic, then (<ref>) was proved by Donnelly-Fefferman <cit.> (see also Lin <cit.> for the upper bound); if g is smooth, then Logunov <cit.> recently showed that

c_1√λ ≤ ℋ^{n-1}(Z(u)) ≤ c_2λ^α,

in which α > 1/2 depends only on n = dim M. There are partial results in this direction <cit.> etc.; cf. the survey by Zelditch <cit.>.

In this paper, we are concerned with the precise and sharp dependence of the constants c_1 and c_2 in (<ref>) on the geometry (M, g). That is, write

H_1(M) = lim inf_{λ→∞} ℋ^{n-1}(Z(u))/√λ and H_2(M) = lim sup_{λ→∞} ℋ^{n-1}(Z(u))/√λ.

Thanks to <cit.>, we know that 0 < H_1 ≤ H_2 < ∞ on analytic manifolds and H_1 > 0 on smooth manifolds. Regarding the two limits in (<ref>), we also pursue the categorization of the eigenfunctions (in relation to the geometry of the manifold) which saturate the lim inf and lim sup of ℋ^{n-1}(Z(u))/√λ as λ → ∞. That is, what geometric properties do the nodal sets of these eigenfunctions achieving the limits in (<ref>) have? Our primary interest is to categorize the sequences of eigenfunctions whose nodal curves are geodesics in the manifold, if these eigenfunctions exist. See Corollary <ref> and Problem <ref>.

We begin from the easiest case, an interval 𝕀_L = [0, L]. The j-th Dirichlet eigenfunction is u_j(x) = sin(jπx/L) with eigenvalue λ_j = (jπ/L)^2, j = 1, 2, 3, ... The nodal set (in the interior of the domain) of u_j is a collection of nodal points

Z(u_j) = {(L/j)·l : l = 1, ..., j-1}.

Hence, the size of the nodal set of the j-th Dirichlet eigenfunction u_j on [0, L], i.e. the number of nodal points, is

ℋ^0(Z(u_j)) = j - 1 = L√λ_j/π - 1, j = 1, 2, 3, ...

Therefore, in (<ref>) we have that

H_1(𝕀_L) = H_2(𝕀_L) = lim_{λ→∞} ℋ^0(Z(u))/√λ = L/π.

One can similarly show the same results for Neumann eigenfunctions in [0, L].

If n ≥ 2, then very little is known about (<ref>). To the authors' knowledge, the first result in this direction is due to Brüning-Gromes <cit.>: They remarked that in an irrational rectangle R with side-lengths a and b for which a^2/b^2 is irrational,

H_1(R) = Area(R)/π and H_2(R) = √2·Area(R)/π.

However, H_1 and H_2 are not known in more general rectangles. In <ref>, we discuss finding H_1 and H_2 in the rectangles and the tori. Gichev <cit.> proved that on the n-dimensional unit sphere S^n,

H_2(S^n) = lim sup_{λ→∞} ℋ^{n-1}(Z(u))/√λ = Vol(S^{n-1}) = [Γ((n+1)/2)/(Γ(n/2)√π)]·Vol(S^n),

in which Vol(S^{n-1}) is the volume of S^{n-1}. For example, H_2(S^2) = 2π = Vol(S^2)/2.
In fact, the eigenfunctions on the sphere are spherical harmonics (i.e. homogeneous harmonic polynomials restricted to the sphere) and Gichev proved the stronger result that

ℋ^{n-1}(Z(u)) ≤ k·Vol(S^{n-1}),

in which k is the homogeneous degree of u. Moreover, equality in the above inequality is attained by the Gaussian beams (i.e. highest weight spherical harmonics). One then deduces (<ref>) by observing that λ = k(k+n-1). In the same paper <cit.>, Gichev conjectured that on S^2,

H_1(S^2) = lim inf_{λ→∞} ℋ^1(Z(u))/√λ = 4 = Vol(S^2)/π,

and the limit is achieved by the zonal harmonics.

Our main result is to provide the case of the disc, for which both of the sharp constants H_1 and H_2 in (<ref>) are explicitly proved; see Theorem <ref>. Moreover, we observe the geometric properties of the eigenfunctions which achieve lim sup ℋ^1(Z(u))/√λ as λ → ∞; see Corollary <ref>.

Let D_1 = {(x, y) ∈ ℝ^2 : x^2 + y^2 < 1} be the unit disc. Consider the eigenfunctions with Dirichlet boundary condition. In polar coordinates {(r, θ) : 0 ≤ r ≤ 1, 0 ≤ θ < 2π}, the real-valued Dirichlet eigenfunctions are

u_k,s(r, θ) = J_k(√λ_k,s · r) sin(kθ + θ_0),

where k = 0, 1, 2, 3, ... and s = 1, 2, 3, ... Here, J_k is the k-th Bessel function and λ_k,s = j_k,s^2, where j_k,s is the s-th positive zero of J_k, and θ_0 ∈ [0, 2π). So all the Dirichlet eigenvalues are the squares of zeros of Bessel functions. In particular, the eigenvalues j^2_k,s are distinct for different values of k and s (cf. <cit.>), and the multiplicity of j^2_k,s is two, with eigenspace spanned by J_k(√λ_k,s · r) sin(kθ) and J_k(√λ_k,s · r) cos(kθ). The nodal set (in the interior of the disc) of the eigenfunction with eigenvalue λ_k,s = j_k,s^2 is a collection of 2k radii (i.e. k diameters) and s-1 concentric circles with radii j_k,l/j_k,s, l = 1, ..., s-1.

Our main theorem states that: In the disc D_1, let u be a Dirichlet eigenfunction with eigenvalue λ. Then

H_1(D_1) = lim inf_{λ→∞} ℋ^1(Z(u))/√λ = 1,

in which the limit is achieved by the eigenfunctions u_0,s as s → ∞; and

H_2(D_1) = lim sup_{λ→∞} ℋ^1(Z(u))/√λ = 2,

in which the limit is achieved by the eigenfunctions u_k,1 as k → ∞. The same results as in Theorem <ref> hold for Neumann eigenfunctions in the disc; see the discussion in <ref>. A simple dilation gives: In the disc D_R = {(x, y) ∈ ℝ^2 : x^2 + y^2 < R^2}, let u be a Dirichlet eigenfunction with eigenvalue λ. Then

H_1(D_R) = lim inf_{λ→∞} ℋ^1(Z(u))/√λ = R^2 = Area(D_R)/π

and

H_2(D_R) = lim sup_{λ→∞} ℋ^1(Z(u))/√λ = 2R^2 = 2·Area(D_R)/π.

Some bounds on c_1 in (<ref>) on Riemannian surfaces were previously known, with which one can obtain some non-sharp estimates of H_1. In a Euclidean domain Ω ⊂ ℝ^2, Brüning-Gromes <cit.> proved that

H_1(Ω) ≥ Area(Ω)/(2 j_{0,1}),

where, as before, j_{0,1} ≈ 2.4048 is the first zero of the Bessel function J_0. On a smooth Riemannian surface M, Savo <cit.> proved that

H_1(M) ≥ Area(M)/11.

So our calculation of H_1 in Corollary <ref> can be regarded as the sharp improvement of these results applied to the disc.

The other problem in question is to characterize the (possible) geometric properties of the eigenfunctions that achieve the lim inf or lim sup of ℋ^1(Z(u))/√λ as λ → ∞. Here, we make the observation that the nodal set of u_k,1 = J_k(√λ_k,1 · r) sin(kθ + θ_0) is a collection of k diameters that pass through the origin. Hence: In the disc D_R, H_2(D_R) = lim sup_{λ→∞} ℋ^1(Z(u))/√λ is saturated by a sequence of eigenfunctions whose nodal curves in the interior are geodesics, i.e. pieces of straight lines. Recall that on S^n, as proved by Gichev <cit.>, the lim sup of ℋ^{n-1}(Z(u))/√λ is saturated by the Gaussian beams, whose nodal sets are totally geodesic.
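The two limits in Theorem <ref> are easy to check numerically from the explicit description of the nodal set above; a small Python script, assuming SciPy's jn_zeros for the Bessel zeros, is the following (an illustration, not part of the proof):

```python
import numpy as np
from scipy.special import jn_zeros

def nodal_ratio(k, s):
    """H^1(Z(u_{k,s})) / sqrt(lambda_{k,s}): k diameters of total length 2k
    plus s-1 circles of radii j_{k,l}/j_{k,s}, divided by j_{k,s}."""
    z = jn_zeros(k, s)                                  # j_{k,1}, ..., j_{k,s}
    return (2 * k + 2 * np.pi * np.sum(z[:-1] / z[-1])) / z[-1]

print(nodal_ratio(0, 2000))   # circles only: tends to 1 (u_{0,s}, s -> infinity)
print(nodal_ratio(200, 1))    # diameters only: tends to 2 (u_{k,1}, k -> infinity)
```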
Based on this evidence, we propose the following problem. In what manifold (M, g) does one have

H_2(M) = lim sup_{λ→∞} ℋ^{n-1}(Z(u))/√λ = lim_{k→∞} ℋ^{n-1}(Z(u_k))/√λ_k

for a sequence of eigenfunctions {u_k}_{k=1}^∞ with eigenvalues λ_k such that the nodal sets of u_k are totally geodesic in the interior of M?

The answer to Problem <ref> is positive on the spheres by Gichev <cit.> and in the disc by Corollary <ref>. In the irrational rectangles (see <ref>), the nodal curves of all eigenfunctions are geodesics, so the answer to Problem <ref> is trivially positive. It would be interesting to see if Problem <ref> holds in other rectangles (or tori). On a general manifold, the answer to Problem <ref> is not known, and in fact it is not even known whether there exists a sequence of eigenfunctions whose nodal sets are totally geodesic.

In all the manifolds that we consider in this paper (irrational rectangles and tori, spheres, and discs), we have that H_1(M) < H_2(M). So a natural question follows: In what manifold (M, g) is H_1(M) = H_2(M)?

In the case when H_1(M) = H_2(M), there is a unique limit of ℋ^{n-1}(Z(u))/√λ as λ → ∞ for all the eigenfunctions. This is not known to be positive on any manifold with dimension higher than one. On an analytic manifold, one can extend the Laplacian eigenfunctions to a complex neighborhood of the manifold. In <cit.>, Zelditch showed that on an analytic manifold with ergodic geodesic flow, there is a full density subsequence of eigenfunctions for which ℋ[Z(u^ℂ)]/√λ has a unique limit as λ → ∞. Here, u^ℂ denotes the complex extension of the eigenfunction u and ℋ[Z(u^ℂ)] denotes the complex hypersurface measure of the nodal set of u^ℂ. Even though this result is for the complex extensions of a full density subsequence of eigenfunctions, it suggests that on manifolds with ergodic geodesic flow (e.g. negatively curved manifolds), the answer to Problem <ref> might be positive.

§ PROOF OF THEOREM <REF>

§.§ The irrational rectangles and tori

Before proving Theorem <ref>, we discuss the proof of (<ref>) in an irrational rectangle R = {(x, y) ∈ ℝ^2 : 0 ≤ x ≤ a, 0 ≤ y ≤ b}, where a^2/b^2 is irrational. (This is observed in <cit.>.) We then make some remarks about finding H_1 and H_2 in more general rectangles and tori. The Dirichlet eigenfunctions in R have the form

u_k,j(x, y) = sin(πkx/a) sin(πjy/b)

with the eigenvalues

λ_k,j = (πk/a)^2 + (πj/b)^2, where k, j = 1, 2, 3, ...

Notice that if a^2/b^2 is irrational, then all the eigenvalues are simple. Indeed, if λ_k̃,j̃ = λ_k,j for another pair (k̃, j̃), k̃, j̃ = 1, 2, 3, ..., then

(k^2 - k̃^2)/a^2 = (j̃^2 - j^2)/b^2,

which forces (k̃, j̃) = (k, j) since a^2/b^2 is irrational. The nodal set Z(u_k,j) (in the interior of R) consists of (k-1) line segments of length b and (j-1) line segments of length a. So the nodal length of u_k,j is

ℋ^1(Z(u_k,j)) = (k-1)b + (j-1)a.

Hence,

ℋ^1(Z(u_k,j))/√λ_k,j = [Area(R)/π] × [(k-1)b + (j-1)a]/√((kb)^2 + (ja)^2).

One then sees from √(p^2+q^2) ≤ p + q ≤ √2·√(p^2+q^2) for p, q ≥ 0 that

H_1(R) = lim inf_{λ→∞} ℋ^1(Z(u_k,j))/√λ_k,j = Area(R)/π,

in which the limit is achieved by u_1,j as j → ∞ and by u_k,1 as k → ∞, and

H_2(R) = lim sup_{λ→∞} ℋ^1(Z(u_k,j))/√λ_k,j = √2·Area(R)/π,

in which the limit is achieved by u_k,j such that k/j → a/b as k, j → ∞.
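These two limits admit a direct numerical check (a sketch for illustration only; the choice b = 2^{1/4}, which makes a^2/b^2 = 1/√2 irrational, is ours):

```python
import numpy as np

def rect_ratio(k, j, a, b):
    """H^1(Z(u_{k,j})) / sqrt(lambda_{k,j}) for the Dirichlet rectangle R."""
    length = (k - 1) * b + (j - 1) * a
    return length / np.hypot(k * np.pi / a, j * np.pi / b)

a, b = 1.0, 2 ** 0.25                       # a^2/b^2 irrational
area = a * b
print(rect_ratio(1, 10_000, a, b) * np.pi / area)                     # -> 1
print(rect_ratio(10_000, round(10_000 * b / a), a, b) * np.pi / area) # -> sqrt(2)
```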
In the case of high multiplicity, one has to estimate the precise nodal lengths of linear combinations of eigenfunctions of the form (<ref>). These linear combinations have complex nodal portraits and the problem of finding H_1 and H_2 becomes challenging.* In the disc, the eigenvalues have multiplicity two. Therefore, in the following subsection, we can use explicit formulae to deduce H_1 and H_2.If we identify the two opposing sides of the rectangle R=[0,a]×[0,b] and define the torus(i.e. without boundary), then the real-valued eigenfunctions are spanned bysin(2π k/ax±2π j/by)andcos(2π k/ax±2π j/by)with eigenvalue(2π k/a)^2+(2π j/b)^2,where k,j=0,1,2,3...Given that a^2/b^2 is irrational, the eigenvalues are not simple (except when k=j=0) but their multiplicity is uniformly bounded by 4. A similar argument as in the corresponding irrational rectangle shows that the same results of H_1 and H_2 in (<ref>) and (<ref>) hold on the torus. However, H_1 and H_2 remain unknown on other tori, for the same reason as described in the above remark. §.§ Proof of Theorem <ref>Now we prove Theorem <ref>. Recall that the Dirichlet eigenfunction with eigenvalue λ_k,s=j_k,s^2 has the formu_k,s=J_k(√(λ_k,s)· r)sin(kθ+θ_0),where k=0,1,2,3,... and s=1,2,3,...Since the nodal length ^1((u_k,s)) is independent of θ_0 here, we assume θ_0=0 without loss of generality. The nodal set of u_k,s is a collection of k diameters and s-1 concentric circles with radii j_k,l/j_k,s, l=1,...,s-1. In particular, (u_0,s) consists of circles only and (u_k,1) consists of diameters only. Here, we provide the graphs of the nodal curves of some eigenfunctions with different nodal portrait. From left to right: u_0,5, u_4,3, and u_10,1. Their eigenvalues are approximately 14^2 and one can check that ^1((u_0,5))<^1((u_4,3))<^1((u_10,1)), which is a reflection of Theorem <ref>.To estimate H_1(_1)=lim inf_λ→∞^1((u))/√(λ)and H_2(_2)=lim sup_λ→∞^1((u))/√(λ),we pick any subsequence of {u_k,s} such thatlim_λ→∞^1((u))/√(λ)exists,and divide into three cases.* Case 1: k tends to infinity and s is bounded;* Case 2: s tends to infinity and k is bounded;* Case 3: k and s both tend to infinity. Case 1. As λ=j_k,s^2→∞, s is bounded so k→∞. First set s=1. Then the nodal set of u_k,1 is the union of 2k radials, that is,^1((u_k,1))=2k,andlim_k→∞^1((u_k,1))/√(λ_k,1)=lim_k→∞2k/j_k,1=2.Here, we use the fact that from <cit.>,j_k,1=k+O(k^1/3)as k→∞.This argument works for all the subsequences of {u_k,s} for which s is bounded and k→∞. Indeed, If s is bounded by M, then2k≤^1((u_k,s))=2π∑_l=1^s-1j_k,l/j_k,s+2k≤2π M+2k.So by squeezing,lim_k→∞, s bounded^1((u_k,s))/√(λ_k,s)=2.Here, we need to use a formula <cit.> that if s is bounded, thenj_k,s=k+o(k)as k→∞.Case 2. As λ=j_k,s^2→∞, k is bounded so s→∞. First set k=0. Then the nodal set of u_0,s is the union of s-1 concentric circles with radii j_0,l/j_0,s, l=1,...,s-1, that is,^1((u_0,s))=2π∑_l=1^s-1j_0,l/j_0,s,andlim_s→∞^1((u_0,s))/√(λ_0,s)=2πlim_s→∞∑_l=1^s-1j_0,l/j_0,s^2=2πlim_s→∞∑_l=1^s(l-1/4)π/[(s-1/4)π]^2=1.Here, we use the fact that from <cit.>: If k≪ s, thenj_k,s=(s+k/2-1/4)π+O(s^-1)as s→∞.This argument works for all the subsequences of {u_k,s} for which k is bounded and s→∞. Indeed, If k is bounded by M, then (u_k,s) contains s-1 circles and at most M diameters. Hence,2π∑_l=1^s-1j_k,l/j_k,s≤^1((u_k,s))=2π∑_l=1^s-1j_k,l/j_k,s+2k≤2π∑_l=1^sj_k,l/j_k,s+2M.So by squeezing, using (<ref>) again, we have thatlim_s→∞, k bounded^1((u_k,s))/√(λ_k,s)=1. Case 3. As λ=j_k,s^2→∞, k,s→∞. 
Suppose that in such a subsequence

lim_{λ→∞} ℋ^1(Z(u_k,s))/√λ_k,s = p.

Our goal is then to prove that 1 ≤ p ≤ 2. We need uniform bounds on the zeros j_k,s of the Bessel functions as k, s → ∞. By <cit.>, we have that

j_k,s > k + (2/3)|a_{s-1}|^{3/2} for k = 0, 1, 2, 3, ... and s = 1, 2, 3, ...

Here, a_s is the s-th negative zero of the Airy function. By <cit.>, we have that as s → ∞,

a_s = -[3π(4s-1)/8]^{2/3}[1 + O(s^{-2})].

Hence,

j_k,s > k + (2/3)|a_{s-1}|^{3/2} = k + πs + O(s^{-1}).

We now proceed to prove the upper bound, p ≤ 2. Using (<ref>),

ℋ^1(Z(u_k,s))/√λ_k,s = (2π∑_{l=1}^{s-1} j_k,l/j_k,s + 2k)/j_k,s < (2πs + 2k)/(k + πs + O(s^{-1})) ≤ 2 as k, s → ∞.

We then prove the lower bound, p ≥ 1. By <cit.>, we have that

j_k,s < (π/2)k + (2/3)|a_s|^{3/2} = (π/2)k + πs + O(s^{-1}) for k = 0, 1, 2, 3, ... and s = 1, 2, 3, ...

Using (<ref>) and (<ref>), we compute that

∑_{l=1}^{s-1} j_k,l/j_k,s ≥ [∑_{l=1}^{s-1} (k + πl + O(l^{-1}))]/[(π/2)k + πs + O(s^{-1})] ≥ [(π/2)(s-1)s + (s-1)k + O(s)]/[(π/2)k + πs + O(s^{-1})] ≥ s/2,

if s and k are large enough. Notice that j_k,l/j_k,s, l = 1, ..., s-1, are fractions which are distributed in the interval [0, 1]. If k ≪ s, then by the asymptotic formula (<ref>), these fractions are rather equidistributed, so the above inequality is natural in this case. If s ≪ k, then by (<ref>) and (<ref>), one sees that j_k,1/j_k,s ≳ k/((π/2)k + πs) > 1/2 and therefore j_k,l/j_k,s > 1/2 for all l = 1, ..., s-1, so the above inequality is natural in this case as well. The above inequality in fact shows that it is true for all sufficiently large k and s. Now by (<ref>) again,

ℋ^1(Z(u_k,s))/√λ_k,s = (2π∑_{l=1}^{s-1} j_k,l/j_k,s + 2k)/j_k,s ≥ (πs + 2k)/((π/2)k + πs + O(s^{-1})) ≥ 1 as k, s → ∞.

Hence, the lower bound is proved.

§.§ Neumann eigenfunctions

The Neumann eigenfunctions in D_1 can be written as

v_k,s(r, θ) = J_k(√μ_k,s · r) sin(kθ + θ_0),

where k = 0, 1, 2, 3, ... and s = 1, 2, 3, ... Here, J_k is the k-th Bessel function and μ_k,s = (j'_k,s)^2, where j'_k,s is the s-th nonnegative zero of J'_k. So all the Neumann eigenvalues are the squares of zeros of the derivatives of Bessel functions. Here, for the Neumann eigenfunctions, we could repeat the argument in the previous subsection, using instead the estimates of j'_k,s. However, notice that a Neumann eigenfunction in D_1 extends to ℝ^2 and in fact defines a Dirichlet eigenfunction in a slightly larger disc. So we can estimate the nodal set of Neumann eigenfunctions by Corollary <ref> for Dirichlet eigenfunctions.

Indeed, the zeros j'_k,s and j_k,s interlace according to

k ≤ j'_k,s < j_k,s < j'_k,s+1.

See <cit.>. Using this relation, we see that v_k,s extends from D_1 to D_R as a Dirichlet eigenfunction u_k,s with

R = j_k,s/j'_k,s → 1 as k or s → ∞.

Now the nodal set of v_k,s in D_1 and the nodal set of u_k,s in D_R differ by the 2k radii in D_R ∖ D_1. That is,

ℋ^1(Z(v_k,s)) = ℋ^1(Z(u_k,s)) - 2k(R - 1).

Hence,

ℋ^1(Z(v_k,s))/j'_k,s = R·[ℋ^1(Z(u_k,s)) - 2k(R - 1)]/j_k,s.

By (<ref>) and (<ref>), we see that 2k(R - 1)/j_k,s → 0 as j_k,s → ∞. Then applying Corollary <ref> and again (<ref>), we have that

lim inf_{λ→∞} ℋ^1(Z(v))/√μ = 1 and lim sup_{λ→∞} ℋ^1(Z(v))/√μ = 2.

§ ACKNOWLEDGMENTS

XH wants to thank Stephen Breen, Andrew Hassell, Hamid Hezari, and Steve Zelditch for all the discussions related to this article, in particular Problem <ref>; XH also wants to thank Zeév Rudnick for informing him of the results in Gichev <cit.> and Werner Horn for his translation of Brüning-Gromes <cit.>.

[AS] M. Abramowitz and I. Stegun, Handbook of mathematical functions with formulas, graphs, and mathematical tables. National Bureau of Standards. U.S. Government Printing Office, Washington, DC, 1964.
[Bre] S. Breen, Uniform upper and lower bounds on the zeros of Bessel functions of the first kind. J. Math. Anal. Appl. 196 (1995), no.
1, 1–17.
[Bru] J. Brüning, Über Knoten von Eigenfunktionen des Laplace-Beltrami-Operators. Math. Z. 158 (1978), no. 1, 15–21.
[BG] J. Brüning and D. Gromes, Über die Länge der Knotenlinien schwingender Membranen. Math. Z. 124 (1972), 79–82.
[ChMu] S. Chanillo and B. Muckenhoupt, Nodal geometry on Riemannian manifolds. J. Differential Geom. 34 (1991), no. 1, 85–91.
[CoMi] T. Colding and W. Minicozzi, Lower bounds for nodal sets of eigenfunctions. Comm. Math. Phys. 306 (2011), no. 3, 777–784.
[D] R.-T. Dong, Nodal sets of eigenfunctions on Riemann surfaces. J. Differential Geom. 36 (1992), no. 2, 493–506.
[DF1] H. Donnelly and C. Fefferman, Nodal sets of eigenfunctions on Riemannian manifolds. Invent. Math. 93 (1988), no. 1, 161–183.
[DF2] H. Donnelly and C. Fefferman, Nodal sets for eigenfunctions of the Laplacian on surfaces. J. Amer. Math. Soc. 3 (1990), no. 2, 333–353.
[DF3] H. Donnelly and C. Fefferman, Growth and geometry of eigenfunctions of the Laplacian. Analysis and partial differential equations, 635–655, Lecture Notes in Pure and Appl. Math., 122, Dekker, New York, 1990.
[HL] X. Han and G. Lu, A geometric covering lemma and nodal sets of eigenfunctions. Math. Res. Lett. 18 (2011), no. 2, 337–352.
[HaSi] R. Hardt and L. Simon, Nodal sets for solutions of elliptic equations. J. Differential Geom. 30 (1989), no. 2, 505–522.
[HeSo] H. Hezari and C. Sogge, A natural lower bound for the size of nodal sets. Anal. PDE 5 (2012), no. 5, 1133–1137.
[HW] H. Hezari and Z. Wang, Lower bounds for volumes of nodal sets: an improvement of a result of Sogge-Zelditch. Spectral geometry, 229–235, Proc. Sympos. Pure Math., 84, Amer. Math. Soc., Providence, RI, 2012.
[G] V. M. Gichev, Some remarks on spherical harmonics. St. Petersburg Math. J. 20 (2009), no. 4, 553–567.
[Li] F.-H. Lin, Nodal sets of solutions of elliptic and parabolic equations. Comm. Pure Appl. Math. 44 (1991), no. 3, 287–308.
[Lo1] A. Logunov, Nodal sets of Laplace eigenfunctions: polynomial upper estimates of the Hausdorff measure. arXiv:1605.02587.
[Lo2] A. Logunov, Nodal sets of Laplace eigenfunctions: proof of Nadirashvili's conjecture and of the lower bound in Yau's conjecture. arXiv:1605.02589.
[S] A. Savo, Lower bounds for the nodal length of eigenfunctions of the Laplacian. Ann. Global Anal. Geom. 19 (2001), no. 2, 133–151.
[SZ1] C. Sogge and S. Zelditch, Lower bounds on the Hausdorff measure of nodal sets. Math. Res. Lett. 18 (2011), no. 1, 25–37.
[SZ2] C. Sogge and S. Zelditch, Lower bounds on the Hausdorff measure of nodal sets II. Math. Res. Lett. 19 (2012), no. 6, 1361–1364.
[W] G. N. Watson, A treatise on the theory of Bessel functions. Cambridge University Press, Cambridge, England; The Macmillan Company, New York, 1944.
[Y] S.-T. Yau, Open problems in geometry. Proc. Sympos. Pure Math. Vol. 54, Part 1, Providence, RI: Amer. Math. Soc., 1993, pp. 1–28.
[Z1] S. Zelditch, Complex zeros of real ergodic eigenfunctions. Invent. Math. 167 (2007), no. 2, 419–443.
[Z2] S. Zelditch, Eigenfunctions and nodal sets. Surveys in differential geometry. Geometry and topology, 237–308, Surv. Differ. Geom., 18, Int. Press, Somerville, MA, 2013.
Daniele Giofré^1 (corresponding author: [email protected]), Till Junge^1, W. A. Curtin^2, Michele Ceriotti^1
^1 Laboratory of Computational Science and Modeling, Institute of Materials, École Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland
^2 Laboratory for Multiscale Mechanics Modeling, Institute of Mechanical Engineering, EPFL, 1015 Lausanne, Switzerland

Age hardening induced by the formation of (semi-)coherent precipitate phases is crucial for the processing and final properties of the widely used Al-6000 alloys. The early stages of precipitation are particularly important from the fundamental and technological side, but are still far from being fully understood. Here, an analysis of the energetics of nanometric precipitates of the meta-stable β” phases is performed, identifying the bulk, elastic strain, and interface energies that contribute to the stability of a nucleating cluster. Results show that needle-shaped precipitates are unstable to growth even at the smallest size of one β” formula unit, i.e. there is no energy barrier to growth. The small differences between different compositions point toward the need for the study of possible precipitate/matrix interface reconstruction. A classical semi-quantitative nucleation theory approach including elastic strain energy captures the trends in precipitate energy versus size and composition. This validates the use of mesoscale models to assess stability and interactions of precipitates. Studies of smaller 3D clusters also show stability relative to the solid solution state, indicating that the early stages of precipitation may be diffusion-limited. Overall, these results demonstrate the important interplay among composition-dependent bulk, interface, and elastic strain energies in determining nanoscale precipitate stability and growth.

Keywords: ab initio simulations; aluminum alloys; precipitation; nucleation

§ INTRODUCTION

Pure aluminum is a lightweight metal that has little strength or resistance to plastic deformation. Alloying aluminum introduces either solutes or the formation of nanometric precipitates that hinder the motion of dislocations, thereby dramatically improving the mechanical properties <cit.>. A major alloy class used in the automotive industry is the Al-6000 series, which contains silicon and magnesium in the range of 0.4–1 wt% with a Si/Mg ratio larger than one. In the initial stages of processing at elevated temperatures, the alloy is a supersaturated solid solution (SSSS), with the solutes randomly dispersed in the Al matrix. After quenching to lower temperatures, the solutes aggregate to form nanometer-sized precipitates (e.g. Guinier-Preston (GP) zones, metastable phases, or stable phases, depending on the thermal history). The time evolution of precipitate nucleation and growth is accompanied by a concomitant mechanical strengthening, referred to as age-hardening. Furthermore, precipitation proceeds through a sequence of competing phases that differ in composition, morphology, thermodynamic stability, and kinetics of growth and dissolution, as well as in the contributions to the mechanical properties <cit.>.
Control of the kinetics of age-hardening is crucial for the optimization of the final mechanical properties. In commercial 6000-series Al alloys, precipitation commences at room temperature shortly after quenching, and this “natural aging” is undesirable. Subsequent “artificial aging” at elevated temperature is then used to achieve the desired precipitate type(s) and sizes. The most effective hardening conditions are obtained in the early stages of precipitation, where fully-coherent GP zones coexist with the semi-coherent β” phase <cit.>, which forms needle-shaped precipitates 200-1000 Å in length and ≈ 60 Å in diameter <cit.>. High-resolution electron microscopy and quantitative electron diffraction <cit.> studies have revealed that the β” phase is characterized by a Mg/Si ratio close to 1 but with different possible stoichiometries that include Mg5Si6, Mg4Al3Si4, and Mg5Al2Si4. Recent first-principles calculations have predicted that the latter composition is the most stable <cit.>. While considerable progress has been made in understanding the structure of the β” phase, and the behavior of the SSSS <cit.>, little is known about the early stages of the aging mechanism, and in particular about the thermodynamics of the initial clustering of solutes to form the precipitate <cit.>. Such knowledge is crucial to gain better control over the balance between natural and artificial aging.

In the present work we study the energetics of nanoscale precipitates using ab initio electronic structure methods so as to identify the different contributions to the thermodynamic in-situ precipitation energetics. We compute the energy contributions due to the precipitate formation energy, the precipitate/matrix interface energies, and the elastic energy due to lattice and elastic mismatch between precipitate and matrix. We show that these contributions semi-quantitatively capture the total energy of in-situ precipitates as a function of precipitate size. Our results demonstrate that – down to the size of a single formula unit of the β” phase, fully encapsulated in the Al matrix – the precipitate growth process can proceed without energetic barriers. Since the nucleation process of the β” phase has nearly zero barrier, control of precipitation kinetics should focus on aggregates of atoms of even smaller size.

The remainder of this paper is organized as follows. In Section <ref> we describe the details of our ab initio simulations. In Section <ref> we report a few benchmarks on the bulk properties of the different stoichiometries proposed for the β” phases. In Section <ref> we discuss a classical-nucleation-theory (CNT) model of precipitate stability, including surface energies and the continuum elasticity model of lattice mismatch relaxation, and compare with DFT results for needle-like precipitates. In Section <ref> we present ab initio simulations of fully-encapsulated clusters. We finally draw conclusions.

§ COMPUTATIONAL DETAILS

Density functional theory (DFT) has been shown to provide reliable energetics for aluminum and its alloys <cit.>. We have used self-consistent DFT as implemented in the Quantum ESPRESSO (QE) package <cit.>. We used a gradient-corrected exchange and correlation energy functional (PBE) <cit.>, together with a plane-waves expansion of Kohn-Sham orbitals and electronic density, using ultra-soft pseudopotentials for all the elements involved <cit.>.
All calculations were performed with a k-point sampling of the Brillouin zone using a grid density of ≈ 5·10^-6 Å^-3 and a Monkhorst-Pack mesh <cit.>. The plane-wave cut-off energy was chosen to be 35 (280) Ry for the wavefunction (the charge density) when evaluating the energetics of defects (i.e. for computing formation, surface, and precipitation energies). Test calculations performed at larger cutoffs showed that these parameters are sufficient to converge the atomization energy of Al at a level of 0.3 meV/atom. Cutoffs were increased to 50 (400) Ry so as to converge the value of the elastic constants to an error below 1 GPa. Comparison with previous literature results, where available, will be presented below.

§ BULK PROPERTIES OF MATRIX AND PRECIPITATE PHASES

Bulk properties (lattice structure, lattice constants, elastic constants) of Al and the various β” precipitates studied here have been previously computed in the literature. Here, we present our results as a means of benchmarking our methods, verifying literature results, and most importantly obtaining reference values that are fully consistent with our computational details – which is crucial to evaluate the energy differences that determine surface and defect energies. For bulk fcc Al, we computed the lattice parameter to be 4.057 Å, in excellent agreement with the experimental value and with previous modelling using the same functional <cit.>. These lattice parameters are used throughout our study to build supercells representing the Al matrix. All of the β” phases we consider can be described by a monoclinic cell containing two formula units (f.u.). We consider three compositions, Mg5Si6, Mg5Al2Si4 and Mg4Al3Si4, as shown in Figure <ref>. We computed the crystal structures of these β” precipitates starting from the geometries proposed in previous works <cit.>. The equilibrium lattice parameters and monoclinic angles are shown in Table <ref>, and agree well with existing literature <cit.>. Inside the Al matrix, the main crystallographic directions (lattice vectors) of the precipitate are aligned with those in the fcc lattice of aluminum as follows:

[100]_β” ∥ [203]_Al, [010]_β” ∥ [010]_Al, [001]_β” ∥ [3̅01]_Al.

The ideal monoclinic unit cell can be deformed, relative to the fully relaxed structures, to substitute for 22 Al atoms. The corresponding lattice vectors and lattice constants of the 22-atom Al cell are shown in Table <ref>. The difference between the ideal monoclinic unit cell and the 22-atom Al unit cell uniquely determines the misfit strain tensor of the precipitate in the Al lattice, which will be used below to determine the corresponding elastic energy of precipitates in the matrix.

We computed the elastic constants of all bulk phases by evaluating the stresses generated by small displacements of the unit cell around the equilibrium structure. A suitable set of displacements was used, and the stresses were then modelled as a linear function of the displacements to obtain the elastic constants <cit.>. The elastic constants for bulk Al and for the three β” phases studied here are shown in Table <ref>, and were computed according to a reference system consistent with the Al matrix, as shown in Fig. <ref>.

In order to define a reference state for the thermodynamics of the precipitates we define the solid solution energies as

E^ss_Al = E^tot_{Al_M}/M,
E^ss_x = E^tot_{Al_{M-1}(x)} - (M-1)·E^ss_Al, for x = Si, Mg.
Here, E^tot_{Al_M} and E^tot_{Al_{M-1}(x)} are the total energies of a bulk-Al supercell containing M Al atoms, and of a supercell containing (M-1) Al atoms and 1 atom of x = Si, Mg, respectively. The energy E^tot_{Al_{M-1}(x)} is computed using a single solute in a 4×4×4 unit periodic cell with the cell volume held fixed. The cell develops a small pressure due to the misfit volume of the solute, but this contribution to the energy is negligible for the large cell size used.

The formation energy for a precipitate can then be defined as the total energy of a precipitate formula unit relative to the total energies of the precipitate atoms in the solid solution state. Thus, the formation energy is

E_form = (1/2)·E^tot_β” - ∑_{x=Al,Si,Mg} n_x·E^ss_x,

where E^tot_β” is the (DFT) total energy of a fully-relaxed unit cell of the β” phase containing 22 atoms (2 formula units), n_x is the number of atoms of element x in one formula unit, and E^ss_x is the energy of solute x in the (dilute) solid solution state. Knowing all the terms in Eq. <ref>, we can compute the formation energies of the three proposed β”-phase compositions, as shown in Table <ref>. The precipitates are strongly favorable, with negative formation energies in excess of -2 eV/f.u., or greater than -0.2 eV/atom on average. Precipitate formation is thus thermodynamically highly preferable relative to the solid solution state.

§ IN-SITU PRECIPITATES

Bulk properties provide important information on the thermodynamic driving forces for precipitation, but are incomplete for understanding in-situ precipitate nucleation and growth. The system of precipitate plus matrix has additional energetic contributions from the precipitate/matrix interfaces, from the precipitate/matrix lattice and elastic constant mismatches that give rise to elastic energies when the precipitate is coherent, and from precipitate/matrix edge and corner energies. All of these additional contributions determine the total thermodynamic driving force for precipitate growth as a function of precipitate size, shape, and density. While not addressed here, the elastic interactions between precipitates at finite densities also influence their spatial arrangement and orientation <cit.>. We thus need to predict the size, shape, and energy of a critical precipitate nucleus. At some critical precipitate size, the precipitate becomes thermodynamically unstable to further growth, i.e. increasing size leads to decreasing total energy. Below the critical precipitate size, the precipitate is unstable and should re-dissolve in the solid solution.

Here, we take a model based on classical nucleation theory (CNT) to assess precipitate stability as a function of size, shape, and density (which influences the elastic energy). In this analysis, we ignore edge and corner energies. Also assuming, for the moment, a low density of precipitates, the total energy of a precipitate containing N formula units, relative to the SSSS, can be written as

E_prec(N) = N·E_form + N·E_strain + E_surf(N).

There are two new terms in Eq. <ref>. First, there is the elastic strain energy E_strain due to the lattice and elastic mismatch between the precipitate and the Al matrix, per β” formula unit, for a single precipitate in an infinite matrix (the dilute limit). Second, there is the surface (interface) energy E_surf of the precipitate, which depends on both the size and the shape of the nucleus. In order to evaluate the precipitation energy, we first obtain quantitative values for the strain and interface energies.
Then, we will make predictions for the thermodynamics in the dilute limit. Finally, we will perform DFT studies of in-situ precipitates and compare the DFT energies against the CNT model, adapted to the geometry of the DFT supercells.

§.§ Interface energies for β” precipitates

Based on TEM analyses <cit.>, and on the correspondence between the β” structure and the closely-related 22-atom Al unit that accommodates one precipitate unit cell, we study three interface orientations as shown in Figure <ref>. The orientations are denoted A ≡ (103)_Al ≡ (100)_β”, B ≡ (010)_Al ≡ (010)_β”, and C ≡ (3̅02)_Al ≡ (001)_β”. Given the relatively complex structure of the β” phase, there are many possible ways to terminate the precipitate. Previous computational studies of the β”-Mg5Si6/α-Al interface have found that the associated surface energies can change significantly between different choices <cit.>. To compare with previous studies of finite-size precipitates, we chose the interfaces used in Ref. . Figure <ref> shows only one monoclinic unit cell of the precipitate and one for the matrix for Mg5Al2Si4, but all three compositions were studied, and simulations were performed with much larger supercells of sizes 4 β” + 4 Al unit cells for the A orientation, 6 β” + 6 Al unit cells for the B orientation, and 6 β” + 6 Al unit cells for the C orientation.

Since the precipitate and matrix have a structural mismatch, the total energy computed in a given simulation cell includes an elastic deformation energy. This energy must be computed independently and subtracted from the total energy obtained in the interface simulation to estimate the specific interface energy γ_Λ, Λ = A, B, C. First, we compute the energy per formula unit of the partially-relaxed β” phase. For each interface orientation, we define E_Λ^β” as the energy per formula unit of a β” cell that is fully coherent with the Al matrix in the Λ plane, and relaxed in the orthogonal direction. We then prepared an interface between the Al matrix and the β” phase, once again fixing the dimensions parallel to the interface to be fully coherent with the matrix, and relaxing it in the orthogonal direction. The interface energy can then be obtained from the total energy of this supercell, E^sc_Λ, as

γ_Λ = (E^sc_Λ - n_Al·E^ss_Al - n_β”·E_Λ^β”)/(2·S^Λ_supercell), Λ = A, B, C,

where S^Λ_supercell is the cross-section of the simulation supercell corresponding to the orientation of the interface, n_Al is the number of Al atoms in the matrix, and n_β” is the number of β” formula units inside the supercell.

The computed surface energies for each orientation are shown in Table <ref>. As previously noted <cit.>, the B surface energy is relatively large, but the anisotropy is not sufficient to fully explain the observed needle-shaped habit of the precipitates. Given the large range of values observed for different terminations <cit.>, a change in composition or some degree of interface reconstruction may significantly lower the energies of the A and C interfaces, leading to larger anisotropy. For instance, we obtain a considerably lower surface energy for the C interface in Mg5Si6 than any of the values reported in Ref. . For this specific case – which is associated with a relatively large mismatch in the unit cells between the β” phase and the matrix – we observe significant relaxation of atoms at the interface, extending for several layers into the bulk, that was probably not captured fully in the smaller supercells used in Ref. . (Calculations in Ref. used 44+44-atom supercells, while our calculations for the C interface contained 132+132 atoms. We verified that when using a supercell with 66+66 atoms the surface energy for Mg5Si6(C) increased to 63 mJ/m^2, getting closer to previous results.) The issue of interface energies of β” phases in Al thus merits further study.

§.§ Elastic strain energies of needle-like β” precipitates

During the aging process, β” precipitates show a strongly anisotropic habit, extending along the b ≡ [010] direction and forming needle-like semi-coherent particles. The lattice mismatch between Al and β” along the crystallographic b direction is also quite small. For this reason, two-dimensional slices along the a, c axes of the precipitate capture the main contributions to the energetics of large precipitates, and have already been studied to characterize both the energetics and the elastic deformation of the matrix in this regime <cit.>. To compute the magnitude of the elastic strain energy contribution for such a two-dimensional slice, we use anisotropic continuum elasticity. The boundary value problem is formulated to correspond to the direct DFT studies below. We study a periodic two-dimensional plane-strain problem with a fully three-dimensional eigenstrain within the precipitate due to the misfit between the precipitate and the matrix. Figure <ref> shows a schematic of the geometry with the relevant coordinate axes. The Al matrix Ω_matrix is modeled as linearly elastic,

σ = C_matrix ϵ in Ω_matrix,

where σ and ϵ are the Cauchy stress and strain tensors and C_matrix is the anisotropic fourth-order stiffness tensor of the matrix expressed in the global frame of reference ê_x-ê_y-ê_z aligned with the cubic lattice vectors of the pure aluminum matrix. The precipitate Ω_prec is also linearly elastic, but with an additional eigenstrain ϵ̄ relative to the reference Al lattice that accounts for the size and shape misfit of the precipitate,

σ = C_prec(ϵ - ϵ̄) in Ω_prec.

Determination of the eigenstrain ϵ̄ and the rotation of the stiffness tensor C_prec into the global frame of reference are described in <ref>. As a plane-strain problem, there is zero out-of-plane displacement, u_z = 0. Therefore the total strain tensor has ϵ_xz = ϵ_yz = ϵ_zz = 0. The eigenstrain ϵ̄ retains these components, however, so that the effects of the mismatch in the z direction are included. We impose periodic Dirichlet boundary conditions on the displacement u⃗ in the horizontal and vertical directions,

u⃗(x⃗) = u⃗(x⃗ + n·l⃗_x + m·l⃗_y), n, m ∈ ℤ, ∀ x⃗ ∈ ∂Ω,

where l⃗_x and l⃗_y are the vectors linking the bottom left corner to the bottom right and the top left, respectively. We fix an arbitrary point, u⃗(x⃗_p) = 0, to exclude solid body motion. The static equilibrium stress and strain fields throughout the body are then determined by solving the standard equilibrium equation ∇·σ = 0⃗. With the computed stress field, the strain fields are obtained from the constitutive models above, and the elastic strain energy (per unit length in the out-of-plane direction) E_strain is then computed as

E_strain = (1/2)·( ∫_{Ω_matrix} ϵ C_matrix ϵ dΩ + ∫_{Ω_prec} (ϵ - ϵ̄) C_prec (ϵ - ϵ̄) dΩ ).

Note that the energy per unit length is independent of the absolute model size, and so the energy depends only on the size of the precipitate relative to the size of the computational cell, or equivalently on the area fraction (equal to the volume fraction) of the precipitate. The boundary value problem is solved using the finite-element method (see <ref>).
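For reference, the misfit eigenstrain ϵ̄ can be obtained from the two sets of lattice vectors (the relaxed β” cell and the 22-atom Al reference cell of Table <ref>). The small-strain construction below is one standard choice and is meant only as a sketch; the actual procedure used here is described in <ref> and may differ in detail.

```python
import numpy as np

def misfit_eigenstrain(cell_Al, cell_beta):
    """Small-strain misfit tensor mapping the 22-atom Al reference cell onto
    the beta'' cell; rows of cell_* are lattice vectors in the global Al frame."""
    A = np.asarray(cell_Al, float).T    # columns = reference lattice vectors
    B = np.asarray(cell_beta, float).T  # columns = precipitate lattice vectors
    F = B @ np.linalg.inv(A)            # deformation gradient: F a_i = b_i
    return 0.5 * (F + F.T) - np.eye(3)  # symmetrized small-strain eigenstrain
```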
Note that, although the problem is nominally two-dimensional (plane-strain), the evaluation of the elastic strain energy remains fully three-dimensional due to the eigenstrain ϵ̄. Using the above implementation, we first computed the elastic strain energy per formula unit in the dilute limit, where interactions among precipitates are negligible. This is done by using one formula unit in a cell of 96 × 96 fcc unit cells, and the results are shown as the “dilute” limit in Table <ref>. The elastic energies are small compared to the chemical energies, but are not small compared to the differences in energies among precipitate compositions.

§.§ In-situ energetics of dilute needle-like β” precipitates

Having evaluated separately the bulk, surface, and elastic relaxation energies for a needle-like precipitate of the β” phases, we can then proceed to estimate the overall energetics of a nucleus. Assuming for simplicity the surface area of the interfaces to be that of the matrix-coherent unit cell (that is, 26.02 Å^2 for each formula unit along the A facets, and 29.7 Å^2 for each formula unit along the C facets), we find that a needle-like precipitate with a cross-section of a single formula unit already has a negative formation energy. Considering the elastic energy associated with the infinite-dilution limit, one obtains E = -1.872 eV for 1 f.u. of Mg5Si6, E = -1.486 eV/f.u. for Mg5Al2Si4, and E = -1.278 eV/f.u. for Mg4Al3Si4. The formation of the β” phases starting from the SSSS is so exoenergetic that needle-like precipitates can form without overcoming a free energy barrier. Due to the much lower surface energy of the C interface, in the small-precipitate limit Mg5Si6 forms the most stable precipitate. In the limit of macroscopic precipitates, the energy per f.u. tends to the precipitation energy plus the dilute-limit elastic contribution, given as E_∞ = -2.467 eV/f.u. for Mg5Si6, E_∞ = -2.651 eV/f.u. for Mg5Al2Si4, and E_∞ = -2.292 eV/f.u. for Mg4Al3Si4. Thus, Mg5Al2Si4 is predicted to be the most stable form in the large-precipitate limit. The elastic strain energy does not change the order of stability, but does narrow the energy difference between the most and least stable compositions down to ≈ 0.35 eV/f.u., or 0.032 eV/atom.

§.§ DFT of needle-shaped precipitates and comparison to CNT model

The CNT model of precipitate energetics that we have introduced in Eq. <ref>, including self-consistent elasticity terms, could be very useful to examine the interaction between growing precipitates. In order to assess its accuracy, we use the same needle-like geometry to evaluate the energetics of precipitates using DFT, and perform a comparison with the results of the model. To be consistent with the definition of formation energies used above, we define the precipitation energy using the SSSS as reference, i.e.

E(N) = E_sys^tot(N) - M·E^ss_Al - N·∑_{x=Si,Mg,Al} n_x·E^ss_x,

where M is the number of Al atoms in the matrix for a given simulation supercell, and n_x and E^ss_x indicate the β” composition and the solid-solution energy for Al, Si and Mg, as in Eq. (<ref>). To benchmark the model across different precipitate sizes, we study three systems whose cross-sections contain 1, 4, and 16 formula units of precipitate in an equiaxed geometry. These precipitates are embedded in Al matrix supercells of sizes (a × b × c) 5×1×5, 7×1×7, and 12×1×12 fcc unit cells, respectively, as shown in Fig. <ref>(a) for the supercell containing 16 f.u. of the β” phase. As noted above, the elastic energy depends on the precipitate density or cell geometry.
The DFT cells are not in the dilute limit. Therefore, for comparison to the DFT energies, the CNT model is modified to account for the elastic energy changes in the non-dilute limit as

E(N) = N E_form + N E_strain(N,V) + E_surf(N),

where E_strain(N,V) is the elastic strain energy per formula unit in a supercell of volume V containing a precipitate of size N formula units. Elasticity calculations have been performed using the method described earlier for precisely the geometries studied in DFT, and the strain energies E_strain(N,V) are shown in Table <ref>. These values are generally larger than in the dilute limit, and increase with increasing N due to the larger fraction of β” precipitate included in the supercell. Figure <ref> compares the DFT precipitate energies, per formula unit, versus precipitate size with predictions obtained using (i) surface energy terms only (CNT(γ)) and (ii) surface energy terms plus the elastic strain energy in the DFT simulation cell (CNT(γ+ϵ)). The results generally follow the expected trend, in that larger precipitates are thermodynamically more stable due to the reduction in relative importance of the interface, edge, and corner energies with increasing size, and the energies approach the (size-independent) formation energies plus dilute-limit elastic energies for each of the three stoichiometries (Table <ref>). A CNT model that uses only the surface energies captures qualitatively the asymptotic behavior for different β” compositions, as well as the relative ordering. However, it underestimates the energy of the precipitates in the large-precipitate-size limit, due to the absence of the positive contribution of the elastic energy. The CNT(γ+ϵ) model predicts quite accurately the energetics of the larger precipitates. However, it significantly overestimates the energy at the smaller sizes: the full DFT energies are up to 0.4 eV/f.u. lower than predicted by Eq. (<ref>). One would normally expect edge and corner terms to destabilize the nucleus (increase the energy) at the smaller sizes. Thus, the fact that the self-consistent energetics leads to stronger stabilization suggests that the surface energies computed assuming ideal interfaces provide only an upper bound to the actual γ^A,B,C. Further relaxation (which is hindered for the larger precipitates, and for periodic surface calculations) could significantly lower the interface energy. Searching for reconstructions of the β”∥Al interfaces with a top-down approach and using electronic structure calculations constitutes a formidable challenge. We expect that the development of machine-learning models <cit.> for classical inter-atomic potentials, together with Monte Carlo sampling techniques, might help elucidate this important contribution to the stability and morphology of precipitates in the Al-6000 series. Comparison between the calculations we report here and those presented in Ref.  underscores the importance of accounting for elastic relaxation in this kind of simulation. While part of the discrepancy could be attributed to minor differences in the computational details, we note a general trend where the energies for the 4 × 4 precipitates reported in Ref.  are considerably lower than those for the smaller 2 × 2 precipitates and, in all cases except Mg5Si6, lower than our values. As shown in the Appendix, this trend can be understood in terms of the boundary conditions chosen for the DFT calculations. Simulations in Ref. 
allowed the supercell dimensions to relax, which underestimates the energy of the encapsulated precipitate relative to the dilute limit. In our calculations, instead, we fixed the cell parameters to match the Al bulk lattice parameter, which, conversely, overestimates the energy. Use of a fixed supercell simplifies the comparison between calculations and the definition of consistent surface energies. However, only a multi-scale analysis that includes a FE model makes it possible to compute the elastic corrections to the “dilute” limit and to interpret DFT results quantitatively in terms of the physical contributions to the precipitate energy.

§ NUCLEATION OF A PRECIPITATE IN 3D

The analyses in the previous section show that there is no barrier for the growth of needle-like precipitates starting at the smallest size N=1 for the in-plane precipitate structure. The inclusion of interface and elastic energies was essential in this analysis to verify that nanoscopic precipitates are stable despite the high interface and elastic energy contributions. We note that possible lower-energy interfaces will only enhance the stabilization of the smallest precipitates. Therefore, nucleation of all three β” phases studied here occurs at the in-plane unit cell level or below. However, the in-plane analysis neglects the additional energy cost of the high-energy B [010]_β” interface. We thus investigate here the formation energy of 3D precipitates, to better understand the precipitate nucleation process and possible nucleation barriers. We simulated 3D precipitates composed of a single formula unit fully embedded in the Al matrix. As shown in Table <ref>, the fully relaxed DFT energy is negative for all compositions. This confirms that precipitation is barrierless down to a single 3D formula unit even when considering the high-γ B interfaces. At this scale, the CNT(γ) model is very inaccurate, predicting a positive formation energy for all the stoichiometries except Mg5Si6. The elastic strain energy computation requires a full 3D analysis and is not performed here; we note only that the elastic term would further increase the energy relative to the CNT(γ) estimate. It is not surprising that a mesoscopic model cannot capture the energetics of a precipitate that consists of just eleven atoms. It is, however, interesting that – just as for the needle-like geometry – the mesoscale model overestimates the energy cost associated with the precipitate-matrix interfaces, indicating that local relaxations can significantly lower the interface excess energy as compared to the ideal unreconstructed interfaces.

§ CONCLUSION

By clearly identifying the chemical, surface, and elastic strain energies that contribute to the total precipitation energy versus size and composition, and by demonstrating that the overall trends are consistent with a thermodynamic classical-nucleation-theory-like model, we have provided new insights into the early stages of the formation of β” precipitates in Al-6000 alloys. The in-situ needle-like β” precipitates are found to be stable relative to the solid solution down to the smallest in-plane formula unit, indicating barrier-less growth at and above this size. The composition dependence of the total energies is subtle, with two compositions being quite close in energy. Thus, the inclusion of surface energies and elastic energies due to the different precipitate structures and compositions is essential for interpreting the DFT results and for then determining the energetics in the more-dilute limit of real materials.
The benchmarking of the CNT-type model also provides a validation for the use of such mesoscopic models in other systems. The largest discrepancy between the thermodynamic CNT model and the DFT calculations is seen for the smallest precipitates, with the ab initio energies being consistently much lower than those predicted based on surface energies computed for a coherent interface between the precipitate and the matrix. Together with the fact that the anisotropy of γ is not sufficient to justify the aspect ratio of needle-like β” precipitates, this observation hints strongly at the need for more complex models of the precipitate interfaces – including variable composition and a significant degree of reconstruction – that may help reduce the interface and elastic energies and further stabilize the small precipitates. We further show that, down to a single formula unit fully encapsulated in the Al matrix, the DFT energy of a nanoscale precipitate is lower than that of the reference supersaturated solid solution. This underscores the fact that precipitation kinetics is likely to be diffusion-limited. Aggregates of a few solute atoms that can act as vacancy traps <cit.> would thus slow the vacancy-mediated solute diffusion that is necessary to form larger precipitates, greatly affecting the aging times. This conclusion of dominance of diffusion-controlled aging is also consistent with recent findings that the addition of 100 ppm of Sn to Al-6061 can significantly delay aging, attributed to trapping of the quenched-in vacancies by the Sn atoms <cit.>. Our results thus point toward the need for a systematic study of the energetics of aggregates in the GP-zone regime, and of the interactions between those aggregates and vacancies and/or trace elements in the alloy, to understand and fine-tune the behavior of Al-6000 alloys in the early stages of precipitation.

Acknowledgements. The authors acknowledge insightful discussions with Dr. Christophe Sigli and Dr. Timothy Warner. DG and MC acknowledge support for this work by an Industrial Research Grant funded by Constellium. TJ and WC acknowledge support for this work through a European Research Council Advanced Grant, “Predictive Computational Metallurgy”, ERC Grant agreement No. 339081 - PreCoMet.

§ CALCULATION OF EIGENSTRAIN AND STIFFNESS TENSORS

The eigenstrain ϵ̅ is the strain required to compensate for the misfit between the matrix and precipitate lattices, i.e., the strain that deforms a formula unit of precipitate into the shape of a formula unit of undeformed matrix. Below, we show how to compute ϵ̅ in the global frame of reference ê_x-ê_y-ê_z described in Figure <ref>. The formula unit geometries of the matrix and the precipitates are monoclinic cells whose c⃗ and b⃗ directions coincide but which differ in the angle β and the edge lengths a, b, c.
We start by determining the material frame of reference ê_α-ê_β-ê_z, as it simplifies the expression of the edge vectors a⃗, b⃗, c⃗ and, since the elastic constants reported in Table <ref> are computed in that frame, is required to compute the stiffness tensors in the global frame. The basis vectors ê_β and ê_z are collinear with the formula unit cell edge vectors defined in (<ref>), c⃗ and b⃗, respectively, and the third basis vector ê_α is chosen to complete a right-handed orthonormal basis:

ê_β = c⃗/c = (1/√10)(-3, 1, 0)^T, ê_z = b⃗/b = (0, 0, 1)^T, ê_α = ê_β × ê_z = (1/√10)(1, 3, 0)^T.

We use the basis vectors to express the edge vectors in the global frame of reference using Table <ref>:

c⃗ = c ê_β, b⃗ = b ê_z, a⃗ = a (sin β ê_α + cos β ê_β).

The eigenstrain ϵ̅ corresponds to a displacement gradient ∇u⃗ that transforms the precipitate edge vectors into the matrix edge vectors; see Figure <ref> (left). After defining matrices composed of the edge vectors of the precipitate, V_prec = (a⃗_prec, b⃗_prec, c⃗_prec), and of the matrix, V_matrix = (a⃗_matrix, b⃗_matrix, c⃗_matrix), the displacement gradient ∇u⃗ can be expressed as

V_matrix = ∇u⃗ V_prec + V_prec ⇒ ∇u⃗ = V_matrix V_prec^-1 - I,

where I is the identity matrix. The eigenstrain ϵ̅ is the symmetric part of ∇u⃗,

ϵ̅ = (1/2)(∇u⃗ + ∇u⃗^T).

The elastic constants of the precipitates have been calculated in the material frame of reference ê_α-ê_β-ê_z, and the corresponding stiffness tensor has to be rotated into the global frame of reference for the finite-element analysis. The stress σ and strain ϵ in the global frame of reference are related to the material-frame stress σ' and strain ϵ' by the rotation R = (ê_α, ê_β, ê_z),

ϵ' = R ϵ R^T, σ' = R σ R^T,

and the relationship between σ' and ϵ' is governed by elasticity,

σ' = C' ϵ',

where C' is the stiffness tensor in the material frame of reference. The stiffness tensor in the global frame of reference, C, can be obtained by combining (<ref>) and (<ref>) in index notation (Einstein summation applies to repeated indices):

R_ij σ_jk R_lk = C'_ilmn R_mo ϵ_op R_np,
R_ia R_ij σ_jk R_lk R_lb = R_ia C'_ilmn R_mo ϵ_op R_np R_lb (using R_ia R_ij = δ_aj and R_lk R_lb = δ_kb),
σ_ab = R_ia R_lb C'_ilmn R_mo R_np ϵ_op,
C_abop = R_ia R_lb R_mo R_np C'_ilmn.

§ ELASTIC CALCULATIONS

The elastic calculations use the finite element method <cit.> and have been performed using a modified version of the open-source finite-element code Akantu <cit.>. This section explains the chosen procedure. We modeled the elastic problem using a structured, quadrilateral, and periodic two-dimensional mesh of bi-quadratic serendipity elements with eight nodes <cit.>. This element type was chosen over linear elements for its high accuracy in static problems. In order to enforce periodic boundary conditions, we define the boundary nodes i_s of the upper and right boundaries as slave nodes to their counterparts on the bottom and left boundaries (master nodes i_m). During the evaluation of nodal forces on master nodes f⃗_i_m, the forces acting on their slave nodes are also assembled on the master, f⃗_i_m^tot = f⃗_i_m + f⃗_i_s, and the slave node displacement is set equal to the displacement of the master, u⃗_i_s = u⃗_i_m. In order to preclude solid body motion (and, thus, a singular stiffness matrix K), the center node in the precipitate is fully blocked, u⃗_c = 0. Figure <ref> (center and right) shows such a mesh in its original and deformed states, where the displacements have been amplified by a factor of five for better visibility. The structured mesh follows the boundary of the precipitate, such that any element is either of matrix material (blue) or precipitate material (red).
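The tensor algebra of this appendix reduces to a few lines of linear algebra. A minimal sketch follows, with hypothetical monoclinic cell parameters (of the order reported for β” phases, but chosen here only for illustration) and the material-frame stiffness left as a placeholder to be filled from Table <ref>:

    import numpy as np

    e_beta = np.array([-3.0, 1.0, 0.0]) / np.sqrt(10)
    e_z = np.array([0.0, 0.0, 1.0])
    e_alpha = np.cross(e_beta, e_z)            # = (1, 3, 0)/sqrt(10)

    def edge_matrix(a, b, c, beta):
        # Columns are the monoclinic edge vectors a, b, c defined above.
        av = a * (np.sin(beta) * e_alpha + np.cos(beta) * e_beta)
        return np.column_stack([av, b * e_z, c * e_beta])

    # Hypothetical cell parameters (Angstrom, degrees), for illustration only.
    V_m = edge_matrix(15.16, 4.05, 6.74, np.radians(105.3))   # matrix-coherent
    V_p = edge_matrix(15.52, 4.05, 6.72, np.radians(105.9))   # relaxed precipitate

    grad_u = V_m @ np.linalg.inv(V_p) - np.eye(3)   # V_m = (grad u) V_p + V_p
    eps_eig = 0.5 * (grad_u + grad_u.T)             # symmetric part

    # Stiffness rotation C_abop = R_ia R_lb R_mo R_np C'_ilmn.
    R = np.column_stack([e_alpha, e_beta, e_z])
    C_prime = np.zeros((3, 3, 3, 3))                # material-frame stiffness here
    C_global = np.einsum("ia,lb,mo,np,ilmn->abop", R, R, R, R, C_prime)

The einsum contraction is a direct transcription of the last index-notation line above; the resulting eps_eig and C_global are the quantities assigned to the precipitate elements in the finite-element model.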
Note the periodic deformation of the simulation cell. The precipitate is preloaded with the eigenstrain ϵ̅ as described in Section <ref>, and the stiffness tensors of the matrix, C_matrix, and of the precipitate, C_prec, are assigned to the blue and red elements, respectively. In the absence of external loads, the assembled system of equations to solve is

K U⃗ = 0⃗,

where K is the assembled stiffness matrix and U⃗ is the vector of all displacement degrees of freedom. We solve this system using the direct solver MUMPS <cit.>. The calculation of the strain energy exploits the quadrature routines of Akantu, using the shape functions of the elements to evaluate the integrals in (<ref>). Figure <ref> shows the distribution of strain energy density e_strain for the geometries considered, using the example of Mg4Al3Si4. A mesh eight times finer than the one represented in Figure <ref> was used for smooth visualization.

§.§ Relaxation of boundary conditions

In order to compare our results more readily to those presented in <cit.>, we have additionally performed elastic calculations with fully relaxed periodic boundary conditions, in which the simulation box was allowed to expand and tilt as needed to have no average stress. This was done by following the procedure described in Appendix <ref>, but with an additional uniform eigenstrain added to all elements. This additional eigenstrain was used as a degree of freedom in a minimization of the total strain energy. Table <ref> compares the strain energies per formula unit obtained with fixed periodic boundary conditions, like the ones used in all DFT calculations in this work, to the energies obtained using the relaxed boundary conditions used in <cit.>. One can see that the relaxed conditions lead to a consistent underestimation of the strain energy, while the fixed periodic conditions lead to overestimated energies.

§ REFERENCES

[murayama_pre-precipitate_1999] M. Murayama, K. Hono, Pre-precipitate clusters and precipitation processes in Al–Mg–Si alloys, Acta Materialia 47 (1999) 1537–1548. doi:10.1016/S1359-6454(99)00033-6.
[edwards_precipitation_1998] G. Edwards, K. Stiller, G. Dunlop, M. Couper, The precipitation sequence in Al–Mg–Si alloys, Acta Materialia 46 (1998) 3893–3904. doi:10.1016/S1359-6454(98)00059-7.
[ringer_microstructural_2000] S. Ringer, K. Hono, Microstructural Evolution and Age Hardening in Aluminium Alloys, Materials Characterization 44 (2000) 101–131. doi:10.1016/S1044-5803(99)00051-0.
[ravi-wolv04am] C. Ravi, C. Wolverton, First-principles study of crystal structure and stability of Al–Mg–Si–(Cu) precipitates, Acta Materialia 52 (2004) 4213–4227.
[marioara_influence_2005] C. D. Marioara, S. J. Andersen, H. W. Zandbergen, R. Holmestad, The influence of alloy composition on precipitates of the Al-Mg-Si system, Metallurgical and Materials Transactions A 36 (2005) 691–702.
[takeda_stability_1998] M.
Takeda, F. Ohkubo, T. Shirai, K. Fukui, Stability of metastable phases and microstructures in the ageing process of Al–Mg–Si ternary alloys, Journal of Materials Science 33 (1998) 2385–2390.
[andersen_crystal_1998] S. J. Andersen, H. W. Zandbergen, J. Jansen, C. Traeholt, U. Tundal, O. Reiso, The crystal structure of the β″ phase in Al–Mg–Si alloys, Acta Materialia 46 (1998) 3283–3298.
[zandbergen_data_2015] M. W. Zandbergen, Q. Xu, A. Cerezo, G. D. W. Smith, Data analysis and other considerations concerning the study of precipitation in Al–Mg–Si alloys by Atom Probe Tomography, Data in Brief 5 (2015) 626–641. doi:10.1016/j.dib.2015.09.045.
[marioara_influence_2003] C. Marioara, S. Andersen, J. Jansen, H. Zandbergen, The influence of temperature and storage time at RT on nucleation of the β″ phase in a 6082 Al–Mg–Si alloy, Acta Materialia 51 (2003) 789–796. doi:10.1016/S1359-6454(02)00470-6.
[niniveIpaperthesis2014] P. H. Ninive, A. Strandlie, S. Gulbrandsen-Dahl, W. Lefebvre, C. D. Marioara, S. J. Andersen, J. Friis, R. Holmestad, O. M. Løvvik, Detailed atomistic insight into the β″ phase in Al–Mg–Si alloys, Acta Materialia 69 (2014) 126–134. doi:10.1016/j.actamat.2014.01.052.
[poga+14prl] S. Pogatscher, H. Antrekowitsch, M. Werinos, F. Moszner, S. S. A. Gerstl, M. F. Francis, W. A. Curtin, J. F. Löffler, P. J. Uggowitzer, Diffusion on Demand to Control Precipitation Aging: Application to Al-Mg-Si Alloys, Phys. Rev. Lett. 112 (2014) 225701.
[marioara_atomic_2001] C. D. Marioara, S. J. Andersen, J. Jansen, H. W. Zandbergen, Atomic model for GP-zones in a 6082 Al–Mg–Si system, Acta Materialia 49 (2001) 321–328.
[derlet_first-principles_2002] P. M. Derlet, S. J. Andersen, C. D. Marioara, A. Frøseth, A first-principles study of the β″-phase in Al-Mg-Si alloys, Journal of Physics: Condensed Matter 14 (2002) 4011.
[hasting_composition_2009] H. S. Hasting, A. G. Frøseth, S. J. Andersen, R. Vissers, J. C. Walmsley, C. D. Marioara, F. Danoix, W. Lefebvre, R. Holmestad, Composition of β″ precipitates in Al–Mg–Si alloys by atom probe tomography and first principles calculations, Journal of Applied Physics 106 (2009) 123527. doi:10.1063/1.3269714.
[gian+09jpcm] P. Giannozzi, S. Baroni, N. Bonini, M. Calandra, R. Car, C. Cavazzoni, D. Ceresoli, G. L. Chiarotti, M. Cococcioni, I. Dabo, A. D. Corso, S. de Gironcoli, S. Fabris, G. Fratesi, R. Gebauer, U. Gerstmann, C. Gougoussis, A. Kokalj, M. Lazzeri, L.
Martin-Samos, N. Marzari, F. Mauri, R. Mazzarello, S. Paolini, A. Pasquarello, L. Paulatto, C. Sbraccia, S. Scandolo, G. Sclauzero, A. P. Seitsonen, A. Smogunov, P. Umari, R. M. Wentzcovitch, QUANTUM ESPRESSO: a modular and open-source software project for quantum simulations of materials, J. Phys. Condens. Matter 21 (2009) 395502–395519.
[perd+96prl] J. P. Perdew, K. Burke, M. Ernzerhof, Generalized Gradient Approximation made simple, Phys. Rev. Lett. 77 (1996) 3865.
[vand90prb] D. Vanderbilt, Soft self-consistent pseudopotentials in a generalized eigenvalue formalism, Phys. Rev. B 41 (1990) 7892–7895.
[kresse_ultrasoft_1999] G. Kresse, D. Joubert, From ultrasoft pseudopotentials to the projector augmented-wave method, Phys. Rev. B 59 (1999) 1758.
[materialcloud] I. Castelli, N. Marzari, Standard solid state pseudopotentials (2015). http://materialscloud.org/
[monk-pack76prb] H. J. Monkhorst, J. D. Pack, Special points for Brillouin-zone integrations, Phys. Rev. B 13 (1976) 5188–5192.
[davey_precision_1925] W. P. Davey, Precision measurements of the lattice constants of twelve common metals, Phys. Rev. 25 (1925) 753.
[tambe_bulk_2008] M. J. Tambe, N. Bonini, N. Marzari, Bulk aluminum at high pressure: A first-principles study, Phys. Rev. B 77 (2008) 172102. doi:10.1103/PhysRevB.77.172102.
[nielsen_first-principles_1983] O. H. Nielsen, R. M. Martin, First-principles calculation of stress, Phys. Rev. Lett. 50 (1983) 697.
[elasticonstant2002] G. V. Sin'ko, N. A. Smirnov, Ab initio calculations of elastic constants and thermodynamic properties of bcc, fcc, and hcp Al crystals under pressure, Journal of Physics: Condensed Matter 14 (2002) 6989.
[bercegeay_first-principles_2005] C. Bercegeay, S. Bernard, First-principles equations of state and elastic properties of seven metals, Phys. Rev. B 72 (2005) 214101. doi:10.1103/PhysRevB.72.214101.
[yu_calculations_2010] R. Yu, J. Zhu, H. Ye, Calculations of single-crystal elastic constants made simple, Computer Physics Communications 181 (2010) 671–675. doi:10.1016/j.cpc.2009.11.017.
[li_computer_1998] D. Li, L. Chen, Computer simulation of stress-oriented nucleation and growth of θ′ precipitates in Al–Cu alloys, Acta Materialia 46 (1998) 2573–2585. doi:10.1016/S1359-6454(97)00478-3.
[luo_stress/strain_2014] K. Luo, B. Zang, S. Fu, Y. Jiang, D.-q.
Yi, Stress/strain aging mechanisms in Al alloys from first principles, Transactions of Nonferrous Metals Society of China 24 (2014) 2130–2137. doi:10.1016/S1003-6326(14)63323-9.
[fu_effects_2014] S. Fu, D.-q. Yi, H.-q. Liu, Y. Jiang, B. Wang, Z. Hu, Effects of external stress aging on morphology and precipitation behavior of θ″ phase in Al-Cu alloy, Transactions of Nonferrous Metals Society of China 24 (2014) 2282–2288. doi:10.1016/S1003-6326(14)63345-8.
[yao_tem_2001] J.-Y. Yao, D. A. Graham, B. Rinderer, M. J. Couper, A TEM study of precipitation in Al–Mg–Si alloys, Micron 32 (2001) 865–870.
[wang_first-principles_2007] Y. Wang, Z.-K. Liu, L.-Q. Chen, C. Wolverton, First-principles calculations of β″-Mg5Si6/α-Al interfaces, Acta Materialia 55 (2007) 5934–5947. doi:10.1016/j.actamat.2007.06.045.
[niniveIIpaperthesis2014] P. H. Ninive, O. M. Løvvik, A. Strandlie, Density Functional Study of the β″ Phase in Al-Mg-Si Alloys, Metallurgical and Materials Transactions A 45 (2014) 2916–2924. doi:10.1007/s11661-014-2214-4.
[Ryo] R. Kobayashi, D. Giofré, T. Junge, M. Ceriotti, W. A. Curtin, private communication.
[Pogatscher2011] S. Pogatscher, H. Antrekowitsch, H. Leitner, T. Ebner, P. Uggowitzer, Mechanisms controlling the artificial aging of Al–Mg–Si alloys, Acta Materialia 59 (2011) 3352–3363. doi:10.1016/j.actamat.2011.02.010.
[Francis2016] M. Francis, W. Curtin, Microalloying for the controllable delay of precipitate formation in metal alloys, Acta Materialia 106 (2016) 117–128. doi:10.1016/j.actamat.2016.01.014.
[zienkiewicz_finite_1977] O. C. Zienkiewicz, The Finite Element Method, 3rd Edition, McGraw-Hill, London - New York, 1977.
[richart2015implementation] N. Richart, J.-F. Molinari, Implementation of a parallel finite-element library: test case on a non-local continuum damage model, Finite Elements in Analysis and Design 100 (2015) 41–46.
[ergatoudis1968isoparametric] J. G. Ergatoudis, Isoparametric finite elements in two and three dimensional stress analysis, Ph.D. thesis, University College of Swansea (1968).
[amestoy98mumpsmultifrontal] P. Amestoy, I. Duff, J.-Y. L'Excellent, MUMPS multifrontal massively parallel solver version 2.0 (1998).
{ "authors": [ "Daniele Giofré", "Till Junge", "W. A. Curtin", "Michele Ceriotti" ], "categories": [ "cond-mat.mtrl-sci" ], "primary_category": "cond-mat.mtrl-sci", "published": "20170825234833", "title": "Ab initio Modelling of the Early Stages of Precipitation in Al-6000 Alloys" }
Multiconfiguration calculations of electronic isotope shift factors in Zn i

Chimie Quantique et Photophysique, Université libre de Bruxelles, B-1050 Brussels, Belgium
Instytut Fizyki imienia Mariana Smoluchowskiego, Uniwersytet Jagielloński, PL-30-348 Kraków, Poland
Institute of Theoretical Physics and Astronomy, Vilnius University, LT-10222 Vilnius, Lithuania
Chimie Quantique et Photophysique, Université libre de Bruxelles, B-1050 Brussels, Belgium
Group for Materials Science and Applied Mathematics, Malmö University, S-20506 Malmö, Sweden

Per Jönsson

December 30, 2023
===========================================================================

The present work reports results from systematic multiconfiguration Dirac-Hartree-Fock calculations of electronic isotope shift factors for a set of transitions between low-lying states in neutral zinc. These electronic quantities, together with observed isotope shifts between different pairs of isotopes, provide the changes in mean-square charge radii of the atomic nuclei. Within this computational approach, different models for electron correlation are explored in a systematic way to determine a reliable computational strategy and to estimate theoretical error bars of the isotope shift factors.

PACS: 31.30.Gs, 31.30.jc

§ INTRODUCTION

When the effects of the finite mass and the extended spatial charge distribution of the nucleus are taken into account in a Hamiltonian describing an atomic system, the electronic energy levels undergo a small, isotope-dependent shift <cit.>. The isotope shift (IS) of spectral lines, which consists of the mass shift (MS) and the field shift (FS), plays a key role in extracting the changes in mean-square charge radii of the atomic nuclei <cit.>. For a given atomic transition k with frequency ν_k, it is assumed that the electronic response of the atom to variations of the nuclear mass and charge distribution can be described by only two factors: the mass-shift factor Δ K_k,MS and the field-shift factor F_k. The observed IS δν_k^A,A' between any pair of isotopes with mass numbers A and A' is related to the difference in nuclear masses and in mean-square charge radii, δ⟨ r^2 ⟩^A,A' <cit.>. This work focuses on two transitions between low-lying levels of neutral zinc (Zn i), the lightest element of group 12 (IIB), that have been investigated in laser spectroscopy experiments along the Zn isotopic chain. Campbell et al. <cit.> measured the isotope shifts between stable isotopes (^64,66-68,70Zn) for the 4s^2 ^1S_0 → 4s4p ^3P^o_1 (307.6 nm) transition using a crossed atomic-laser beam experiment. Specific mass shifts (SMSs) were extracted, and a large value was assigned to the ground state, emphasizing the substantial 3d core-valence polarization. Recently, Yang et al. <cit.> investigated the 4s4p ^3P^o_2 → 4s5s ^3S_1 (481.2 nm) transition in a bunched-beam collinear laser spectroscopy experiment to determine nuclear properties of the ^79Zn isotope. The isomer shift between the nuclear ground state and the long-lived 1/2^+ isomeric state was measured, and the change of the mean-square charge radii of ^79,79mZn was extracted via the MS and FS electronic factors. The latter were obtained from a King-plot process <cit.> using the root-mean-square charge radii of isotopes from Refs.
<cit.>.There are many theoretical studies of properties such as oscillator strengths, lifetimes, polarizabilities and hyperfine structure constants in Zn i and Zn-like ions <cit.>. By contrast, to the best of our knowledge, no recent paper reporting on theoretical IS electronic factors in Zn i has been published since the pioneer works led by Bauche and Crubellier <cit.> reporting only on SMS factors, and by Blundell et al. <cit.> only on FS factors. Hence, we reinvestigate the two above-cited transitions in Zn i by performing ab initio calculations of IS electronic factors using the multiconfiguration Dirac-Hartree-Fock (MCDHF) method implemented in the ris3/grasp2k program package <cit.>. Using the MCDHF method, the computational scheme is based on the estimation of the expectation values of the one- and two-body recoil Hamiltonian for a given isotope, including relativistic corrections derived by Shabaev <cit.>, combined with the calculation of the total electron densities at the origin.This approach has recently been performed on neutral copper (Cu i) <cit.> to determine a set of δ⟨ r^2 ⟩^65,A' values from the corresponding observed IS. Later on, it has been applied to neutral magnesium (Mg i) <cit.> and neutral aluminium (Al i) <cit.>, where IS factors have been computed for transitions between low-lying states. In the present work, different electron correlation models are applied to Zn i to estimate theoretical error bars of the IS factors.In Sec. <ref>, the principles of the MCDHF method are summarized. In Sec. <ref>, the relativistic expressions of the MS and FS factors are recalled. Section <ref> presents the active space expansion strategies adopted for the electron correlation models. In Sec. <ref>, numerical results of the MS and FS factors are reported for each of the two studied transitions in Zn i. Section <ref> reports conclusions. § NUMERICAL METHODThe MCDHF method <cit.>, as implemented in the grasp 2k program package <cit.>, is the fully relativistic counterpart of the non-relativistic multiconfiguration Hartree-Fock (MCHF) method <cit.>. The MCDHF method is employed to obtain wave functions that are referred to as atomic state functions (ASF), i.e., approximate eigenfunctions of the Dirac-Coulomb Hamiltonian given by ℋ_DC = ∑_i=1^N [cα_i ·p_i + (β_i - 1)c^2 + V_nuc(r_i)] + ∑_i<j^N 1/r_ij, eq_DC_Hamiltonian where V_nuc(r_i) is the nuclear potential corresponding to an extended nuclear charge distribution function, c is the speed of light and α and β are the (4 × 4) Dirac matrices. An ASF, Ψ(γ Π JM_J), is given as an expansion over N_CSFsjj-coupled configuration state functions (CSFs), Φ(γ_νΠ JM_J), with the same parity Π, total angular momentum J and its projection on the z-axis, M_J: |Ψ(γ Π JM_J) ⟩ = ∑_ν=1^N_CSFs c_ν |Φ(γ_ν Π JM_J) ⟩. eq_ASFIn the MCDHF method, the one-electron radial functions used to construct the CSFs and the expansion coefficients c_ν are determined variationally so as to leave the energy functionalE = ∑_μ,ν^N_CSFs c_μ c_ν⟨Φ(γ_μ Π JM_J) |ℋ_DC|Φ(γ_ν Π JM_J) ⟩eq_energy_functional and additional terms for preserving the orthonormality of the radial orbitals stationary with respect to their variations. The resulting coupled radial equations are solved iteratively in the self-consistent field (SCF) procedure. 
Once the radial functions have been determined, a configuration-interaction (CI) diagonalization of Hamiltonian (1) is performed over the set of configuration states, providing the expansion coefficients for building the potentials for the next iteration. The coupled SCF and CI processes are repeated until convergence of the total wave function (2) and energy (3) is reached.

§ ISOTOPE SHIFT THEORY

The finite mass of the nucleus gives rise to a recoil effect that shifts the level energies slightly, called the mass shift (MS). Since the IS differs between the upper and lower levels, the transition IS arises as the difference between the IS of the two levels. The transition frequency MS between two isotopes, A and A', with nuclear masses M and M', is written as the sum of the normal mass shift (NMS) and the specific mass shift (SMS),

δν_k,MS^A,A' ≡ ν_k,MS^A - ν_k,MS^A' = δν_k,NMS^A,A' + δν_k,SMS^A,A',    (4)

and can be expressed in terms of a single parameter,

δν_k,MS^A,A' = (1/M - 1/M') ΔK_k,MS/h = (1/M - 1/M') ΔK̃_k,MS.    (5)

Here, the mass-shift factor ΔK_k,MS = K_MS^u - K_MS^l is the difference of the K_MS = K_NMS + K_SMS factors of the upper (u) and lower (l) levels involved in the transition k. For the ΔK̃ factors, the unit GHz u is often used in the literature. As far as conversion factors are concerned, we use ΔK_k,MS [m_e E_h] = 3609.4824 ΔK̃_k,MS [GHz u]. Neglecting terms of higher order than δ⟨ r^2 ⟩ in the Seltzer moment (or nuclear factor) <cit.>,

λ^A,A' = δ⟨ r^2 ⟩^A,A' + b_1 δ⟨ r^4 ⟩^A,A' + b_2 δ⟨ r^6 ⟩^A,A' + ⋯,    (6)

the line frequency shift in the transition k arising from the difference in nuclear charge distributions between two isotopes, A and A', can be written as <cit.>

δν_k,FS^A,A' ≡ ν_k,FS^A - ν_k,FS^A' = F_k δ⟨ r^2 ⟩^A,A'.    (7)

In the expression above, δ⟨ r^2 ⟩^A,A' ≡ ⟨ r^2 ⟩^A - ⟨ r^2 ⟩^A' and F_k is the electronic factor. Although not used in the current work, it should be mentioned that there are computationally tractable methods to include higher-order Seltzer moments in the expression for the transition frequency shift <cit.>. The total transition frequency shift is obtained by adding the MS, (4), and FS, (7), contributions:

δν_k^A,A' = δν_k,NMS^A,A' + δν_k,SMS^A,A' + δν_k,FS^A,A' = (1/M - 1/M') ΔK̃_k,MS + F_k δ⟨ r^2 ⟩^A,A'.    (8)

In this approximation, it is sufficient to describe the total frequency shift between the two isotopes A and A' with only the two electronic parameters given by the mass-shift factor ΔK̃_k,MS and the field-shift factor F_k. Furthermore, they relate line frequency shifts to nuclear properties given by the change in mass and mean-square charge radius. Both factors can be calculated from atomic theory, which is the subject of this work. The main ideas of the method applied to compute these quantities are outlined here. More details can be found in the works by Shabaev <cit.> and Palmer <cit.>, who pioneered the theory of the relativistic mass shift used in the present work. Gaidamauskas et al. <cit.> derived the tensorial form of the relativistic recoil operator implemented in ris3 <cit.> and its extension <cit.>. The nuclear recoil corrections within the (α Z)^4 m_e^2/M approximation <cit.> are obtained by evaluating the expectation values of the one- and two-body recoil Hamiltonian for a given isotope,

ℋ_MS = 1/(2M) ∑_i,j^N ( p_i · p_j - (α Z)/r_i ( α_i + (α_i · r_i) r_i / r_i^2 ) · p_j ).    (9)
Separating the one-body (i=j) and two-body (i≠j) terms that, respectively, constitute the NMS and SMS contributions, the Hamiltonian (9) can be written as

ℋ_MS = ℋ_NMS + ℋ_SMS.    (10)

The NMS and SMS mass-independent K factors are defined by the following expressions:

K_NMS ≡ M ⟨Ψ|ℋ_NMS|Ψ⟩,    (11)
K_SMS ≡ M ⟨Ψ|ℋ_SMS|Ψ⟩.    (12)

Within this approach, the electronic factor F_k for the transition k is estimated by

F_k = Z/(3ħ) (e^2/4πϵ_0) Δ|Ψ(0)|_k^2,    (13)

which is proportional to the change of the total electron probability density at the origin between levels l and u,

Δ|Ψ(0)|_k^2 = Δρ_k^e(0) = ρ_u^e(0) - ρ_l^e(0).    (14)

As the potential V_nuc(r_i) of (1) is isotope-dependent, the radial functions vary from one isotope to another, which defines isotopic relaxation. The latter is, however, very small and hence neglected along the isotopic chain; the wave function Ψ is thus optimized for one specific isotope.

§ ACTIVE SPACE EXPANSION

To effectively capture electron correlation, CSFs of a particular symmetry J and parity Π are generated through substitutions within an active space (AS) of orbitals, consisting of the orbitals occupied in the reference configurations and of correlation orbitals. Owing to hardware and software limitations, it is impossible to use complete AS wave functions that would include all CSFs with appropriate J and Π for a given orbital AS. Hence the CSF expansions have to be constrained, while ensuring that the major correlation substitutions are accounted for <cit.>. Single (S) and double (D) (and triple (T); see Sec. <ref>) substitutions are performed on either a single-reference (SR) set or a multireference (MR) set, the latter containing the CSFs that have large expansion coefficients and account for the major correlation effects. These substitutions take into account valence-valence (VV) and core-valence (CV) correlations. While the VV correlation model only allows SD substitutions from valence orbitals, the VV+CV correlation model considers restricted substitutions from core and valence orbitals. The restriction is applied to double (and triple) substitutions, denoted SrD(T), in such a way that only one electron may be substituted from the core shells; the other one (or two) has (have) to be substituted from the valence shells. Zn i has two valence electrons (n=4) outside an [Ar]3d^10 core. The MR sets (see Sec. <ref>) are obtained by performing SrDT substitutions from the 3d and the occupied valence orbitals to the n=4 valence orbitals + 5s/{5s,5p}/{5s,6s}, depending on the targeted state 4s^2 ^1S_0/4s4p ^3P^o_1,2/4s5s ^3S_1 (with a maximum of one hole in the 3d orbital). An SCF procedure is then applied to the resulting CSFs, providing the orbital set and the expansion coefficients. Due to limited computer resources, such an MR set would be too large for subsequent calculations. Hence, only the CSFs whose expansion coefficients are, in absolute value, larger than a given MR cutoff are kept, i.e., |c_ν| > ε_MR. The ε_MR values and the resulting MR sets are listed in Table <ref> for both transitions. The 1s orbital is kept closed in all calculations, i.e., no substitutions from this orbital are allowed. Tests show that opening the 1s orbital does not affect the MS and FS factors within the accuracy attainable in the present calculations. Only orbitals occupied in the single-configuration DHF approximation are treated as spectroscopic, i.e., are required to have a node structure similar to that of the corresponding hydrogenic orbitals <cit.>.
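The SrD restriction described above is easy to state programmatically. The following schematic sketch classifies substitution pairs by the number of core holes they create; the shell labels are illustrative, and this is not the actual grasp2k input machinery:

    from itertools import combinations_with_replacement

    core = {"3s", "3p", "3d"}        # opened core shells (1s kept closed)
    valence = {"4s", "4p"}           # occupied valence shells of the reference

    def allowed_srd(shells):
        # S and restricted-D substitutions: at most one electron from the core.
        singles = [(s,) for s in shells]
        doubles = [pair
                   for pair in combinations_with_replacement(sorted(shells), 2)
                   if sum(s in core for s in pair) <= 1]
        return singles + doubles

    for sub in allowed_srd(core | valence):
        print(sub)   # e.g. ('3d', '4s') is kept, ('3d', '3d') is rejected

Pairs with two core holes, which would correspond to the core-core substitutions deliberately omitted in this work, are filtered out by the condition on the number of core shells appearing in the pair.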
The occupied reference orbitals are frozen in all subsequent calculations. A layer is defined as a subset of virtual orbitals with different angular symmetries, optimized simultaneously in one step and frozen in all subsequent ones <cit.>. One layer of {s, p, d, f, g} symmetries and four of {s, p, d, f, g, h} are successively generated. At each generation step, only the orbitals of the last layer are variational in the SCF procedure, all previously generated layers being kept frozen. The effect of adding the Breit interaction to the Dirac-Coulomb Hamiltonian (1) is found to be much smaller than the uncertainty in the transition IS factors with respect to the correlation model. This interaction has therefore been neglected in the procedure. Within the three following correlation models, separate orbital basis sets are optimized for the lower state and the upper state of each studied transition. For each state, the optimization procedures are summarized as follows:

§.§ SrD-SR model

(1) Perform a calculation using an SR set consisting of CSF(s) of the form 2s^22p^63s^23p^63d^10 nln'l' J^Π, with nln'l' = 4s^2/4s4p/4s5s (following the considered state).
(2) Keeping the orbitals fixed from step (1), optimize an orbital basis layer by layer up to nl = 9h, described by CSFs with the J^Π symmetry of the state. These CSFs are obtained by SrD substitutions (at most one from the 2s^22p^63s^23p^63d^10 core) on the SR set from step (1).

§.§ SrD-MR model

(1) Perform a calculation using an MR set consisting of CSFs of two forms: 2s^22p^63s^23p^63d^10 nln'l' J^Π with nl, n'l' = 4s, 4p, 4d, 4f + 5s/{5s,5p}/{5s,6s}, and 2s^22p^63s^23p^63d^9 nln'l'n''l'' J^Π with nl, n'l', n''l'' = 4s, 4p, 4d, 4f + 5s/{5s,5p}/{5s,6s} (following the considered state). These CSFs account for a fair amount of the VV correlation, and for CV correlation between the 3d core orbital and the valence orbitals.
(2) Keeping the orbitals fixed from step (1), optimize an orbital basis layer by layer up to nl = 9h, described by CSFs with the J^Π symmetry of the state. These CSFs are obtained by SrD substitutions (at most one from the 2s^22p^63s^23p^63d^10 core) on the MR set from step (1).

§.§ SrDT-SS model

(1) Perform a calculation using a set consisting of CSFs of two forms: 2s^22p^63s^23p^63d^9 nln'l'n''l'' J^Π and 2s^22p^63s^23p^53d^10 nln'l'n''l'' J^Π with nl, n'l', n''l'' = 4s, 4p, 4d, 4f + 5s/{5s,5p}/{5s,6s} (following the considered state). These CSFs also account for a fair amount of the VV correlation, and for CV correlation between the 3p and 3d core orbitals and the valence orbitals. Add single s-substitutions (SS) by including the following CSFs: 2s^22p^63s3p^63d^10 nln'l'n''l'' J^Π and 2s2p^63s^23p^63d^10 nln'l'n''l'' J^Π, with nln'l'n''l'' = 4s^25s/4s4p5s/4s5s6s.
(2) Keeping the orbitals fixed from step (1), optimize an orbital basis layer by layer up to nl = 9h, described by CSFs with the J^Π symmetry of the state. These CSFs are obtained by SrDT-SS substitutions (at most one from the 2s^22p^63s^23p^63d^10 core) in the same way as in step (1). Although this model does not include all CV effects deep down in the core, it includes those that are important for obtaining accurate electron densities.

It is important to mention that core-core (CC) contributions, i.e., unrestricted SD substitutions from core orbitals, are not accounted for, contrary to the strategy adopted in the papers on Mg i <cit.> and Al i <cit.>.
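Before discussing the omitted CC contributions further, the layer-by-layer generation common to the three procedures can be summarized in a short schematic driver. The callback below is a hypothetical wrapper around the MCDHF program, not a real grasp2k API, and the orbital labels are illustrative:

    def build_orbital_basis(optimize_layer, layers):
        # Only the newest layer is variational; everything generated
        # earlier stays frozen (hypothetical `optimize_layer` callback).
        frozen = []
        for layer in layers:
            optimize_layer(active=layer, frozen=list(frozen))
            frozen.extend(layer)
        return frozen

    # One {s,p,d,f,g} layer followed by four {s,p,d,f,g,h} layers, up to 9h
    # (illustrative labels for the correlation orbitals).
    layers = [["5s", "5p", "5d", "5f", "5g"]] + [
        [f"{n}{l}" for l in "spdfgh"] for n in range(6, 10)]
    build_orbital_basis(lambda active, frozen: None, layers)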
Returning to the core-core contributions: the CSF expansions in Zn i become too large when CC correlations within the complete 2s^22p^63s^23p^63d^10 core are added to the nl = 9h AS, amounting to more than 10^8 CSFs for the J^Π = 2^- state in the SrD-MR and SrDT-SS models. Such expansions exceed the capacity of our current computer resources by an order of magnitude. Restricting the CC correlations to those within the 3d core orbital alone leads to around 10^7 CSFs. Applying an SCF procedure then takes too much computing time, but the use of the CI method would be feasible by means of a Brillouin-Wigner perturbative zero- and first-order partition of the CSF space <cit.>. However, the computational task of estimating the IS factors with ris3 would exceed our current CPU time resources for such large expansions. The CC correlation effects are known to be more balanced when a common orbital basis is used to describe both the upper and lower states, resulting in more accurate transition energies, as mentioned in Refs. <cit.>. Hence, neglecting CC contributions enables us to use separate orbital basis sets, in which orbital relaxation is allowed.

§ NUMERICAL RESULTS

Let us first study the convergence of the level MS factors, K_NMS and K_SMS (in m_e E_h), and the electronic probability density at the origin, ρ^e(0) (in a_0^-3), of a given transition as a function of the increasing AS. Tables <ref> and <ref> display the SrD-SR, SrD-MR and SrDT-SS values for the 4s^2 ^1S_0 → 4s4p ^3P^o_1 and 4s4p ^3P^o_2 → 4s5s ^3S_1 transitions, respectively. The AS is extended until convergence of the differential results Δ^u_l is achieved, which requires the nl = 9h correlation layer (“CV 9h”). Let us start the analysis with the 4s^2 ^1S_0 → 4s4p ^3P^o_1 transition. Satisfactory convergence is found for the three correlation models. The relative difference between the “CV 8h” and “CV 9h” values is 0.3-2.2% for ΔK_NMS, 0.8-2.5% for ΔK_SMS and 0.4-0.5% for Δρ^e(0), depending on the model. The analysis is similar for the 4s4p ^3P^o_2 → 4s5s ^3S_1 transition, where the CV 8h-CV 9h relative differences reach 0.3-1.1% for ΔK_NMS, 1.4-2.4% for ΔK_SMS and 0.5-1.6% for Δρ^e(0). For both transitions, the relative differences are larger for ΔK_SMS, as expected from the two-body nature of the SMS operator, which makes it more sensitive to electron correlation than the one-body NMS and density operators. The convergence achieved for the SMS factors is nevertheless highly satisfactory, bearing in mind that small variations in the level values due to correlation effects can lead to a significant variation in the transition values. This illustrates the challenge of obtaining reliable SMS factors with such a computational approach. At this stage, convergence within the three correlation models has been established. Convergence does not, however, imply accuracy, since the models may simply be unsuitable for the studied properties. Hence, one also needs to compare the obtained transition energies and IS factors with reference values. Table <ref> displays the energies, ΔE (in cm^-1), of the two studied transitions in Zn i. The SrD-SR, SrD-MR and SrDT-SS CV 9h values are compared with experimental NIST data <cit.> and theoretical results <cit.>. Let us consider the 4s^2 ^1S_0 → 4s4p ^3P^o_1 transition. Głowacki and Migdałek <cit.> performed relativistic CI computations with Dirac-Fock wave functions using an ab initio model potential. Liu et al. <cit.> used the MCDHF method, adopting a strategy on which the SrDT-SS model is based. Froese Fischer et al.
<cit.> carried out MCHF and B-spline R-matrix calculations including Breit-Pauli corrections. Finally, Chen and Cheng <cit.> used B-spline basis functions for large-scale relativistic CI computations including QED corrections.  <ref> shows that the SrD-SR model provides a relative error of 1.9% in comparison with NIST data. Better agreement is found with the more elaborate SrD-MR (0.2%) and SrDT-SS models (0.1%). It is clear from the comparison with the four above-cited theoretical works that our SrD-MR and SrDT-SS results show better agreement with NIST data.In contrast to the 4s^2 ^1S_0→ 4s4p ^3P^o_1 transition, very few papers investigated the 4s4p ^3P^o_2→ 4s5s ^3S_1 transition. To our knowledge, the only existing theoretical works were led by Biémont and Godefroid <cit.> using the MCHF method and by Liu et al. <cit.> using the R-matrix method in the LS-coupling scheme. Both works are non-relativistic, and the transition energies must be compared with the J-averaged value Δ E=20 975.905 cm^-1 from NIST.  <ref> shows that the SrD-SR model provides a relative error of 0.06% in comparison with NIST data, while the SrD-MR and SrDT-SS models respectively provide 0.06% and 0.24%. Excellent agreement is thus found for all three models, and correlation beyond the SrD-SR model does not improve the accuracy on Δ E.Let us now compare the computed ab initio IS electronic factors with reference results from the literature. As pointed out in Sec. <ref>, most theoretical works report on properties in Zn i and Zn-like ions other than IS factors. The only existing papers discussing SMS factors in Zn i are seminal works in which low associated confidence is shown, compared with the accuracy to which IS measurements can be made <cit.>. In addition, high-precision study of ISs has been carried out in the Zn^+ ion (Zn ii). Kloch et al. <cit.> published measurements of optical ISs in the stable ^64,66-68,70Zn isotopes for the 3d^104p ^2P^o_1/2→ 3d^94s^2 ^2D_3/2 (589.4 nm) transition in Zn ii. Foot et al. <cit.> interpreted these measurements in terms of variations in the nuclear charge distribution. The measured ISs were separated into MS and FS contributions by combining the data with δ⟨ r^2 ⟩ results from electron scattering and muonic (μ-e) IS experiments performed by Wohlfahrt et al. <cit.>. Campbell et al. <cit.> measured ISs between the same stable isotopes for the 4s^2 ^1S_0→ 4s4p ^3P^o_1 (307.6 nm) transition. The ratio of FS factors, F_589.4/F_307.6=-3.06(16), was extracted from a King plot using the IS measurements from Refs. <cit.>. The F-factor calculations of Blundell et al. <cit.> enabled an estimate of F_307.6=-1260 MHz/fm^2 to be made. Note that the original value of -1510 MHz/fm^2 appearing in CBG97 is actually a misprint <cit.>.Finally, the separation of the MS contribution proceeded through a King plot using the correctedF_307.6 value together with δ⟨ r^2 ⟩_μ-e data from WSF80. Dividing the obtained MS between ^66Zn and ^64Zn isotopes, δν_MS^66,64=921(31) MHz, by (1/M_66-1/M_64) yields ΔK̃_MS=-1970(29) GHz u. The nuclear masses M_66 and M_64 are calculated by subtracting the mass of the electrons from the atomic masses, and by adding the binding energy <cit.>.Yang et al. <cit.> measured ISs between the same stable isotopes for the 4s4p ^3P^o_2→ 4s5s ^3S_1 (481.2 nm) transition. To calibrate the FS factor, a King plot was made using their set of ISs against the measured ISs from CBG97. 
This process enabled an estimate of F_481.2 = 301(51) MHz/fm^2 to be made, assuming an error of 10% on the erroneous F_307.6 value of -1510 MHz/fm^2. To calibrate the MS factor, another King plot involving their set of ISs, together with the calibrated F_481.2 value and δ⟨ r^2 ⟩_μ-e data from WSF80, enabled the extraction of ΔK̃_MS = -59(18) GHz u, adopting the sign conventions (4) and (7) of the present work. After correction of the F_307.6 value from -1510 MHz/fm^2 to -1260 MHz/fm^2, the FS and MS factors become F_481.2 = 251(42) MHz/fm^2 and ΔK̃_MS = -73(15) GHz u <cit.>. Experimentalists often split the total MS into the NMS and SMS contributions by estimating the NMS factor, ΔK̃_k,NMS, with the scaling-law approximation

ΔK̃_k,NMS ≈ -m_e ν_k^expt,    (15)

where m_e is the mass of the electron and ν_k^expt is the experimental energy of transition k, available in the NIST database <cit.>. Doing so, one obtains ΔK̃_NMS = -535 GHz u for the 4s^2 ^1S_0 → 4s4p ^3P^o_1 transition and ΔK̃_NMS = -342 GHz u for the 4s4p ^3P^o_2 → 4s5s ^3S_1 transition, yielding the SMS contributions ΔK̃_SMS = -1435(29) GHz u and ΔK̃_SMS = 269(15) GHz u, respectively. Table <ref> displays the SrD-SR, SrD-MR and SrDT-SS CV 9h MS factors, ΔK̃_NMS, ΔK̃_SMS, and ΔK̃_MS (in GHz u), and FS factors, F (in MHz/fm^2), of the two studied transitions in Zn i. The values of ΔK̃_NMS are compared with the results from (15) (“Scal.”), and those of ΔK̃_SMS and F with results from Refs. <cit.>. Equation (15) is only strictly valid in the non-relativistic framework, and the relativistic nuclear recoil corrections to ΔK̃_NMS can be computed with ris3 as the expectation values of the relativistic part of the one-body term in the nuclear recoil Hamiltonian (9), as shown in Table <ref>. Let us start the comparison of the IS factors with the 4s^2 ^1S_0 → 4s4p ^3P^o_1 transition. After correction, the FS factor from CBG97 is in better agreement with our values, the relative difference reaching 10%. Moreover, the three models provide values in the same range, as expected from the one-body nature of the density operator. Turning to the total MS factor, ΔK̃_MS is in excellent agreement with CBG97 for the SrDT-SS model, while it does not agree within the experimental error bars for the SrD-MR model, although the discrepancies are not large. By contrast, the SrD-SR model provides a value 200 GHz u higher, illustrating the sensitivity of the two-body SMS operator to electron correlation. Hence, correlation beyond the SrD-SR model improves the accuracy of ΔK̃_MS. Analysing the NMS and SMS factors separately, Table <ref> shows that the three models provide ΔK̃_NMS values in the same range, as for the FS factor. Moreover, these results totally disagree with the number from the scaling law. Table <ref> shows that the relativistic nuclear recoil corrections to ΔK̃_NMS are important (134 GHz u), representing around +33% of the NMS results obtained when neglecting them. The extracted ΔK̃_SMS value likewise disagrees with the results from the three models, as expected from the analysis of ΔK̃_NMS. Table <ref> shows that the relativistic corrections to ΔK̃_SMS are much less important (62 GHz u), representing around +3% of the SMS results obtained when neglecting them.
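The mass-shift bookkeeping quoted above is easy to reproduce numerically. A short cross-check, with nuclear masses approximated as atomic masses minus the electron masses (the small binding-energy correction mentioned above is neglected in this sketch):

    from scipy.constants import c, physical_constants

    m_e = physical_constants["electron mass in u"][0]   # 5.485799e-4 u

    # Scaling-law NMS estimate, Eq. (15).
    for nm in (307.6, 481.2):
        nu_GHz = c / (nm * 1e-9) / 1e9
        print(f"{nm} nm: {-m_e * nu_GHz:.0f} GHz u")    # -535 and -342 GHz u

    # King-plot-style extraction of the total MS factor between 66Zn and 64Zn.
    M64 = 63.9291422 - 30 * m_e
    M66 = 65.9260334 - 30 * m_e
    dnu_MS = 0.921                                      # GHz (CBG97)
    print(dnu_MS / (1 / M66 - 1 / M64))                 # ~ -1.94e3 GHz u

The scaling-law values come out at -535 and -342 GHz u, as quoted above, and the extracted total MS factor of about -1.94 × 10^3 GHz u agrees with the -1970(29) GHz u of CBG97 within its uncertainty, the residual difference being of the size of the neglected binding-energy correction.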
These corrections to both MS factors are, in addition, insensitive to electron correlation, staying constant along the increasing AS and being independent of the model. This analysis shows that only the sum of the NMS and SMS factors can be compared with observation, the total ℋ_MS being the only MS operator corresponding to an observable. The relativistic corrections partly cancel when summing ΔK̃_NMS and ΔK̃_SMS, leading to 196 GHz u for the three models, which represents 10-11% of the relativistic ΔK̃_MS values displayed in Table <ref>. Hence, neglecting these corrections would bring the SrD-MR and SrDT-SS values around 200 GHz u too low in comparison with the experimental number. The analysis is different for the 4s4p ^3P^o_2 → 4s5s ^3S_1 transition. None of the computed FS factors, whose average value is F = 346(3) MHz/fm^2, agrees with the number from Refs. <cit.>, although the three models provide values very close to each other. Turning to the total MS factor, an average of ΔK̃_MS = -14(7) GHz u can be deduced from the three computed values. An important discrepancy is thus found between this result and the number from Refs. <cit.>. Moreover, it is not clear that correlation beyond the SrD-SR model improves the accuracy of ΔK̃_MS for this transition. Indeed, in contrast to the previous transition, a strong cancellation is observed between the values of the NMS and SMS factors. Hence, small variations of the SMS factor due to correlation effects can significantly influence the total MS factor, leading to large theoretical error bars on the latter when comparing the models. Analysing the NMS and SMS factors separately, Table <ref> shows that the three models provide ΔK̃_NMS values in the same range, as for the previous transition. Again, these results totally disagree with the number from the scaling law. Table <ref> shows that the relativistic corrections to ΔK̃_NMS remain as important as in the previous transition (-34 GHz u), representing around -33% of the NMS results obtained when neglecting them. The extracted ΔK̃_SMS value is also in disagreement with our results. Table <ref> shows that the relativistic corrections to ΔK̃_SMS are half as large as those to ΔK̃_NMS (-23 GHz u), representing around -17% of the SMS results obtained when neglecting them. These two corrections are also insensitive to electron correlation. Again, one concludes from this analysis that only the total MS factor is likely to be comparable with observation, although the discrepancy between theory and experiment is much larger than for the first transition. When summing ΔK̃_NMS and ΔK̃_SMS, the relativistic corrections reach -57 GHz u for the three models, which represents more than twice the relativistic ΔK̃_MS values displayed in Table <ref>. Hence, neglecting these corrections would change the sign of all three values, and the agreement with experiment would be worse. Finally, fully non-relativistic MCHF computations of the SMS factors are carried out for the two transitions of interest, using the atsp2k program package <cit.> and following the computational strategy of the SrD-SR and SrDT-SS models. For the 4s^2 ^1S_0 → 4s4p ^3P^o_1 transition, the CV 9h values are ΔK̃_SMS = -1511 GHz u for SrD-SR and -1731 GHz u for SrDT-SS, in excellent agreement with the fully relativistic results displayed in Table <ref>. Hence, the relativistic corrections to the wave functions counterbalance the relativistic corrections to the ℋ_SMS operator for this transition.
For the 4s4p ^3P^o_2 → 4s5s ^3S_1 transition, the CV 9h values are ΔK̃_SMS=178 GHz u for SrD-SR and 175 GHz u for SrDT-SS, around 50-60 GHz u higher than the relativistic results from <ref>. Hence, the relativistic corrections to the wave functions add to those to the ℋ_SMS operator for this transition.

Attempts to solve the discrepancies highlighted in this work are ongoing <cit.>. Yang et al. are reinvestigating the extraction of the MS factor for the 4s4p ^3P^o_2 → 4s5s ^3S_1 transition, using the present computed average value of F_481.2=348(1) MHz/fm^2 (with an associated 10-15% error) in several King plots, together with their 481.2 nm ISs, the 589.4 nm ISs from FSS82 and δ⟨ r^2 ⟩_μ-e data from WSF80. Since it is shown that an inconsistency occurs in both the FS and MS factors when plotting the 481.2 nm ISs against the 307.6 nm ones, the coauthors of <cit.> are trying to calibrate the MS factor without using CBG97. Moreover, as the present ΔK̃_NMS values do not agree with the scaling law, the NMS factor will not be fixed in the fit processes, contrary to the procedure adopted in the previous calibration. Note that the actual aim of Ya17 is the determination of accurate δ⟨ r^2 ⟩ values between the ^64,66-68,70Zn stable isotopes, using the present computed F_481.2 factor and the new calibrated 481.2 nm MS factor.

§ CONCLUSIONS

This work describes ab initio relativistic calculations of IS electronic factors in many-electron atoms using the MCDHF approach. The adopted computational approach for the estimation of the MS and FS factors for two transitions between low-lying states in Zn I is based on the expectation values of the relativistic recoil Hamiltonian for a given isotope, together with the FS factors estimated from the total electron densities at the origin. Three different correlation models are explored in a systematic way to determine a reliable computational strategy and estimate theoretical error bars of the IS factors.

Within each correlation model, the convergence of the level MS factors and the electronic probability density at the origin, as a function of the increasing active space, is studied for the 4s^2 ^1S_0 → 4s4p ^3P^o_1 and 4s4p ^3P^o_2 → 4s5s ^3S_1 transitions. Satisfactory convergence is found within the three correlation models, and for both studied transitions. It is shown that small variations in the level values due to correlation effects can lead to more significant variations in the transition values, concerning mainly the SMS factors.

The accuracy of the results obtained from the different correlation models is investigated by comparison with reference values. The transition energies show good agreement with the observations available in the NIST database. Moreover, for both transitions the Δ E results are more accurate than the numbers provided by other theoretical works. Since most of these works report on properties other than IS factors, the results obtained in the present work are compared with numbers extracted from two experiments. Good agreement of the computed FS and total MS factors is found for the 4s^2 ^1S_0 → 4s4p ^3P^o_1 transition. By contrast, the results are not consistent with values extracted from two King-plot processes for the 4s4p ^3P^o_2 → 4s5s ^3S_1 transition.

Significant discrepancies between theory and experiment appear when using the scaling-law approximation (eq_sl1) to separate the NMS from the total MS.
Indeed, the ΔK̃_NMS results completely disagree with the numbers provided by this approximation, illustrating the rather fast breakdown of this law, based on non-relativistic theory, with respect to the atomic number Z. This breakdown has already been highlighted in heavier systems <cit.>. In consequence, the ΔK̃_SMS results also disagree with the extracted experimental values. To investigate these discrepancies, relativistic nuclear recoil corrections to ΔK̃_NMS and ΔK̃_SMS are discussed and quantified for both transitions. It is shown that neglecting them leads to larger discrepancies with observation for the total ΔK̃_MS values. Finally, fully non-relativistic calculations of the SMS factors are carried out with the MCHF method, considering the SrD-SR and SrDT-SS models. It is shown that the relativistic corrections to the wave functions counterbalance the relativistic nuclear recoil corrections for the 4s^2 ^1S_0 → 4s4p ^3P^o_1 transition, while they add for the 4s4p ^3P^o_2 → 4s5s ^3S_1 transition.

From the theoretical point of view, it would be worthwhile to study the effects of the omitted CC correlations within the 3d core orbital. Considerable code development is necessary in order to perform such large calculations in a reasonable time. A common optimization of the orbital sets is also required. Another possible way to improve the accuracy of the present results is the use of the partitioned correlation function interaction (PCFI) approach <cit.>. It is based on the idea of relaxing the orthonormality restriction on the orbital basis, and breaking down the very large calculations of the traditional multiconfiguration methods into a series of smaller parallel calculations. This method is very flexible for targeting different electron correlation effects. Additionally, electron correlation effects beyond the SrD-MR and SrDT-SS models (such as quadruple substitutions) can be included perturbatively. Work is being done in these directions.

This work has been partially supported by the Belgian F.R.S.-FNRS Fonds de la Recherche Scientifique (CDR J.0047.16), the BriX IAP Research Program No. P7/12 (Belgium). L.F. acknowledges the support from the FRIA. J.B. acknowledges financial support of the European Regional Development Fund in the framework of the Polish Innovation Economy Operational Program (contract no. POIG.02.01.00-12-023/08). P.J. acknowledges financial support from the Swedish Research Council (VR), under contract 2015-04842.

NGG13 C. Nazé, E. Gaidamauskas, G. Gaigalas, M. Godefroid, and P. Jönsson, Comput. Phys. Commun. 184, 2187 (2013).
Ki84 W. H. King, Isotope Shifts in Atomic Spectra (Plenum Press, New York, 1984).
CCF12 B. Cheal, T. E. Cocolios, and S. Fritzsche, Phys. Rev. A 86, 042501 (2012).
NLG15 C. Nazé, J. G. Li, and M. Godefroid, Phys. Rev. A 91, 032511 (2015).
CBG97 P. Campbell, J. Billowes, and I. S. Grant, J. Phys. B 30, 2351 (1997).
YWX16 X. F. Yang, C. Wraith, L. Xie, C. Babcock, J. Billowes, M. L. Bissell, K. Blaum, B. Cheal, K. T. Flanagan, R. F. Garcia Ruiz, W. Gins, C. Gorges, L. K. Grob, H. Heylen, S. Kaufmann, M. Kowalska, J. Kraemer, S. Malbrunot-Ettenauer, R. Neugart, G. Neyens, W. Nörtershäuser, J. Papuga, R. Sánchez, and D. T. Yordanov, Phys. Rev. Lett. 116, 182502 (2016).
FH04 G. Fricke and K. Heilig, Nuclear Charge Radii, 1st ed. (Springer-Verlag, Berlin, 2004).
FH78 C. Froese Fischer and J. E. Hansen, Phys. Rev. A 17, 1956 (1978).
FH79 C. Froese Fischer and J. E. Hansen, Phys. Rev. A 19, 1819 (1979).
BG80 E. Biémont and M. Godefroid, Phys. Scr. 22, 231 (1980).
Hi89 A. Hibbert, Phys. Scr. 39, 574 (1989).
BF92 T. Brage and C. Froese Fischer, Phys. Scr. 45, 43 (1992).
CCH94 H. S. Chou, H. C. Chi, and K. N. Huang, Phys. Rev. A 49, 2394 (1994).
FH95 J. Fleming and A. Hibbert, Phys. Scr. 51, 339 (1995).
GM03 L. Głowacki and J. Migdałek, J. Phys. B 36, 3629 (2003).
MH05 T. McElroy and A. Hibbert, Phys. Scr. 71, 479 (2005).
GM06 L. Głowacki and J. Migdałek, J. Phys. B 39, 1721 (2006).
JAS06 P. Jönsson, M. Andersson, H. Sabel, and T. Brage, J. Phys. B 39, 1813 (2006).
LHZ06 Y. Liu, R. Hutton, Y. Zou, M. Andersson, and T. Brage, J. Phys. B 39, 3147 (2006).
FZ07 C. Froese Fischer and O. Zatsarinny, Theor. Chem. Account. 118, 623 (2007).
BJS08 S. A. Blundell, W. R. Johnson, M. S. Safronova, and U. I. Safronova, Phys. Rev. A 77, 032507 (2008).
ALC08 M. Andersson, Y. Liu, C. Y. Chen, R. Hutton, Y. Zou, and T. Brage, Phys. Rev. A 78, 062505 (2008).
CC10 M. H. Chen and K. T. Cheng, J. Phys. B 43, 074019 (2010).
CC10b H. C. Chi and H. S. Chou, Phys. Rev. A 82, 032518 (2010).
SS10 U. I. Safronova and M. S. Safronova, J. Phys. B 43, 074025 (2010).
LGZ11 Y. P. Liu, C. Gao, J. L. Zeng, and J. R. Shi, A&A 536, A51 (2011).
CC14 H. C. Chi and H. S. Chou, J. Phys. B 47, 055002 (2014).
BC70 J. Bauche and A. Crubellier, J. Phys. France 31, 429 (1970).
BC74 J. Bauche and A. Crubellier, J. Phys. France 35, 19 (1974).
BBP85 S. A. Blundell, P. E. G. Baird, C. W. P. Palmer, D. N. Stacey, and G. K. Woodgate, Z. Phys. A 321, 31 (1985).
BBP87 S. A. Blundell, P. E. G. Baird, C. W. P. Palmer, D. N. Stacey, and G. K. Woodgate, J. Phys. B 20, 3663 (1987).
JGB13 P. Jönsson, G. Gaigalas, J. Bieroń, C. Froese Fischer, and I. P. Grant, Comput. Phys. Commun. 184, 2197 (2013).
Sh85 V. M. Shabaev, Theor. Math. Phys. 63, 588 (1985).
Sh88 V. M. Shabaev, Sov. J. Nucl. Phys. 47, 69 (1988).
BCF16 M. L. Bissell, T. Carette, K. T. Flanagan, P. Vingerhoets, J. Billowes, K. Blaum, B. Cheal, S. Fritzsche, M. Godefroid, M. Kowalska, J. Krämer, R. Neugart, G. Neyens, W. Nörtershäuser, and D. T. Yordanov, Phys. Rev. C 93, 064318 (2016).
CG16 T. Carette and M. Godefroid, arXiv:1602.06574 (2016).
FGE16 L. Filippin, M. Godefroid, J. Ekman, and P. Jönsson, Phys. Rev. A 93, 062512 (2016).
FBE16 L. Filippin, R. Beerwerth, J. Ekman, S. Fritzsche, M. Godefroid, and P. Jönsson, Phys. Rev. A 94, 062508 (2016).
Gr07 I. P. Grant, Relativistic Quantum Theory of Atoms and Molecules (Springer, New York, 2007).
JHF07 P. Jönsson, X. He, C. Froese Fischer, and I. P. Grant, Comput. Phys. Commun. 177, 597 (2007).
FTG07 C. Froese Fischer, G. Tachiev, G. Gaigalas, and M. R. Godefroid, Comput. Phys. Commun. 176, 559 (2007).
FGB16 C. Froese Fischer, M. Godefroid, T. Brage, P. Jönsson, and G. Gaigalas, J. Phys. B 49, 182004 (2016).
Se69 E. C. Seltzer, Phys. Rev. 188, 1916 (1969).
FBH95 G. Fricke, C. Bernhardt, K. Heilig, L. Schaller, L. Schellenberg, E. Shera, and C. De Jager, At. Data Nucl. Data Tables 60, 177 (1995).
TFR85 G. Torbohm, B. Fricke, and A. Rosén, Phys. Rev. A 31, 2038 (1985).
EJG16 J. Ekman, P. Jönsson, M. Godefroid, C. Nazé, and G. Gaigalas, to be submitted.
PCE16 A. Papoulia, B. G. Carlsson, and J. Ekman, Phys. Rev. A 94, 042502 (2016).
Pa88 C. W. P. Palmer, J. Phys. B 21, 1951 (1988).
GNR11 E. Gaidamauskas, C. Nazé, P. Rynkun, G. Gaigalas, and P. Jönsson, J. Phys. B 44, 175003 (2011).
KKT07 S. Kotochigova, K. P. Kirby, and I. Tupitsyn, Phys. Rev. A 76, 052513 (2007).
GJF17 S. Gustafsson, P. Jönsson, C. Froese Fischer, and I. P. Grant, Atoms 5, 3 (2017).
Ve87 L. Veseth, J. Phys. B 20, 235 (1987).
KRR15 A. Kramida, Y. Ralchenko, and J. Reader, NIST Atomic Spectra Database (Version 5.4) (National Institute of Standards and Technology, Gaithersburg, MD, 2016); available at http://physics.nist.gov/asd.
KLS82 R. Kloch, Z. Leś, D. N. Stacey, and V. Stacey, Acta Phys. Pol. A 61, 483 (1982).
FSS82 C. J. Foot, D. N. Stacey, V. Stacey, R. Kloch, and Z. Leś, Proc. R. Soc. Lond. A 384, 205 (1982).
WSF80 H. D. Wohlfahrt, O. Schwentker, G. Fricke, H. G. Andresen, and E. B. Shera, Phys. Rev. C 22, 264 (1980).
Ca16 P. Campbell, private communication (2016).
HAC76 K. N. Huang, M. Aoyagi, M. Chen, and B. Crasemann, At. Data Nucl. Data Tables 18, 243 (1976).
LPT03 D. Lunney, J. M. Pearson, and C. Thibault, Rev. Mod. Phys. 75, 1021 (2003).
CST12 J. S. Coursey, D. J. Schwab, J. J. Tsai, and R. A. Dragoset, Atomic Weights and Isotopic Compositions (Version 3.0) (National Institute of Standards and Technology, Gaithersburg, MD, 2012); available at http://physics.nist.gov/Comp.
Ya16 X. F. Yang, private communication (2016).
Ya17 X. F. Yang et al., in preparation.
LNG12 J. G. Li, C. Nazé, M. Godefroid, G. Gaigalas, and P. Jönsson, Eur. Phys. J. D 66, 290 (2012).
PQB16 P. Palmeri, P. Quinet, and S. Bouazza, J. Quant. Spectrosc. Radiat. Transfer 185, 70 (2016).
VRJ13 S. Verdebout, P. Rynkun, P. Jönsson, G. Gaigalas, C. Froese Fischer, and M. Godefroid, J. Phys. B 46, 085003 (2013).
http://arxiv.org/abs/1708.08347v1
{ "authors": [ "Livio Filippin", "Jacek Bieroń", "Gediminas Gaigalas", "Michel Godefroid", "Per Jönsson" ], "categories": [ "physics.atom-ph" ], "primary_category": "physics.atom-ph", "published": "20170825130501", "title": "Multiconfiguration calculations of electronic isotope shift factors in Zn I" }
Dynamo Action in a Quasi-Keplerian Taylor-Couette Flow

Anna Guseva^1, Rainer Hollerbach^2, Ashley P. Willis^3, Marc Avila^1

^1 University of Bremen, Center of Applied Space Technology and Microgravity (ZARM), 28359 Bremen, Germany. Email: [email protected]
^2 School of Mathematics, University of Leeds, Leeds LS2 9JT, UK. Email: [email protected]
^3 School of Mathematics and Statistics, University of Sheffield, Sheffield S3 7RH, UK

December 30, 2023
======================================================

We numerically compute the flow of an electrically conducting fluid in a Taylor-Couette geometry where the rotation rates of the inner and outer cylinders satisfy Ω_o/Ω_i=(r_o/r_i)^-3/2. In this quasi-Keplerian regime a non-magnetic system would be Rayleigh-stable for all Reynolds numbers Re, and the resulting purely azimuthal flow incapable of kinematic dynamo action for all magnetic Reynolds numbers Rm. For Re=10^4 and Rm=10^5 we demonstrate the existence of a finite-amplitude dynamo, whereby a suitable initial condition yields mutually sustaining turbulence and magnetic fields, even though neither could exist without the other. This dynamo solution results in significantly increased outward angular momentum transport, with the bulk of the transport being by Maxwell rather than Reynolds stresses.

The magnetic fields of planets, stars and entire galaxies are created by dynamo action, in which the motion of electrically conducting fluid stretches and thereby amplifies some original seed field. The details vary widely for different objects <cit.>, but it is generally believed that most sufficiently complicated fluid flows can act as dynamos, at least if the electrical conductivity is large enough. The onset of dynamo action then becomes a linear instability problem, with the electrical conductivity incorporated into the so-called magnetic Reynolds number Rm as the control parameter. Once Rm exceeds some critical value any infinitesimal seed field will grow exponentially in time. This process continues until the field is so strong that its associated Lorentz force alters the original flow and eventually stops further field amplification.

One astrophysical category where this process may not work quite so simply is accretion disks. The difficulty is that a Keplerian angular rotation profile, Ω(r)∼ r^-3/2, fails the requirement to be "sufficiently complicated". A flow consisting of only the single component U_ϕ=Ω r is so simple that it will never yield dynamo action, no matter how large Rm is taken to be. To explain the magnetic fields of accretion disks, the flow must therefore be more complicated than just the large-scale Keplerian rotation profile. The generally accepted explanation is that there is also small-scale turbulence <cit.>. This naturally raises the question regarding the origin of this turbulence, especially since the familiar Rayleigh criterion <cit.> states that flows with angular momentum Ω r^2 increasing outward are hydrodynamically stable. The Rayleigh criterion is admittedly a purely linear result, and thus does not exclude the possibility of a nonlinear, finite amplitude instability. Nevertheless, there are both experimental <cit.> and numerical <cit.> results which suggest that Keplerian rotation profiles are indeed stable even with respect to finite amplitude perturbations.

There is actually an easy way to bypass the Rayleigh criterion, namely by including magnetic fields. This leads to the magnetorotational instability (MRI), first discovered in the Taylor-Couette context in 1959 <cit.>, and applied to accretion disks in 1991 <cit.>.
The MRI uses the tension in magnetic field lines to transfer angular momentum between fluid parcels, thereby invalidating a key ingredient in the derivation of the Rayleigh criterion, namely that without magnetic fields angular momentum is conserved not only globally but also locally on individual fluid parcels. The result is that in the presence of magnetic fields it is only the angular velocity Ω, rather than the angular momentum Ω r^2, which needs to be outwardly decreasing for the flow to be unstable. Keplerian profiles Ω∼ r^-3/2 are thus Rayleigh-stable but MRI-unstable. Since its rediscovery in the astrophysical context, there has been enormous further interest in the MRI <cit.>, including also the possibility of obtaining it and variants of it experimentally <cit.>.

There is of course one remaining difficulty before the MRI can be invoked to explain the magnetic fields of accretion disks, in that it essentially leads to a 'chicken and egg' type situation. If a (sufficiently strong) magnetic field were present, the disk would almost invariably be turbulent, via the MRI, which would likely yield a sufficiently complicated flow to act as a dynamo. However, before the dynamo is operating, where does the initial magnetic field come from? One possibility is that the entire dynamo process is a finite amplitude rather than a linear instability (as encountered also in other contexts, e.g. <cit.>). That is, an infinitesimally small seed field would yield neither turbulence nor a dynamo, but some sufficiently strong initial field could yield a configuration that permanently maintains both the turbulence and an associated magnetic field. The possibility of a dynamo of this type has been explored very extensively in local shearing-box simulations <cit.>, but not in global calculations. In this Letter we provide numerical evidence for the existence of such a finite amplitude dynamo in a global Taylor-Couette geometry, with the inner and outer cylinders' rotation rates set to be in the Rayleigh-stable regime.

We start with a Taylor-Couette system having nondimensional inner and outer cylinder radii r_i=1 and r_o=2. Periodicity is imposed in the axial direction, with length L_z=1.4. This periodicity in the axial direction is the most obvious difference between Taylor-Couette flows and accretion disks. The rotation rates of the two cylinders are fixed to satisfy Ω_o/Ω_i=(r_o/r_i)^-3/2=0.35, thereby matching Ω∼ r^-3/2 at the boundaries to constitute what is known as a quasi-Keplerian system. In particular, the resulting basic state flow profile is Rayleigh-stable in the purely hydrodynamic regime. By contrast, previous numerical Taylor-Couette dynamos <cit.> have been in the regime where the flow is already hydrodynamically unstable, and is thus sufficiently complicated to work even as a kinematic dynamo. There have also been two liquid sodium dynamo experiments <cit.> in cylindrical geometry, but again in a regime that does not rely on finite amplitude instabilities.

The governing equations for the fluid flow U and the magnetic field B are

∂ U/∂ t + U·∇ U = -(1/ρ)∇ p + ν∇^2 U + (1/(μ_0 ρ))(∇× B)× B,

∂ B/∂ t = η∇^2 B + ∇×(U× B),

together with ∇· U = ∇· B = 0. The associated boundary conditions at both cylinders are no-slip for U and insulating for B. Here p is the pressure, ρ is the density, μ_0 the permeability, ν the viscosity, and η the magnetic diffusivity (inversely proportional to the electrical conductivity).

We nondimensionalize length by r_i, time by Ω_i^-1, U by Ω_i r_i, and B by Ω_i r_i √(ρμ_0). The relevant nondimensional parameters measuring the rotation rates are the ordinary and magnetic Reynolds numbers

Re = Ω_i r_i^2/ν,  Rm = Ω_i r_i^2/η.

The ratio Rm/Re = ν/η is a material property of the fluid, known as the magnetic Prandtl number Pm. Liquid metals all have Pm ≤ O(10^-5), but astrophysical plasmas can have a much greater range, including Pm ≥ O(1).
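Both the quoted rotation-rate ratio and the magnetic Prandtl numbers of the runs considered below follow directly from these definitions; a minimal check in Python, using only numbers stated in the text:

    # Quasi-Keplerian speed ratio for r_o/r_i = 2:
    # Omega_o/Omega_i = (r_o/r_i)**(-3/2)
    print(2.0 ** -1.5)        # 0.3536 ~ 0.35, as quoted

    # Magnetic Prandtl number Pm = Rm/Re for the two runs below
    print(1.0e4 / 1.0e4)      # Pm = 1  (Re = Rm = 10^4)
    print(1.0e5 / 1.0e4)      # Pm = 10 (Re = 10^4, Rm = 10^5)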
In previous work <cit.> we considered turbulent Taylor-Couette flows in the presence of an externally imposed azimuthal magnetic field B_0(r_i/r) ê_ϕ, which guarantees the existence of an instability, the so-called azimuthal magnetorotational instability <cit.>. As our finite amplitude initial condition here, we took a turbulent solution from this work, with Re=Rm=10^4. The external field was then switched off, and two separate runs were performed. For the first the two Reynolds numbers were both kept at 10^4; for the second the magnetic Reynolds number was increased to 10^5. The resolution for the first run was 280 points in the radial direction, 600 azimuthal and 400 axial Fourier modes. For the second run the resolution was 480 points, 1024 azimuthal and 512 axial modes. For a full description of the numerical code see <cit.>.

Figure <ref> shows how the magnetic and kinetic energies for the two runs evolve in time. The Rm=10^4 case is clearly not a dynamo: after some minor initial transients the magnetic energy starts to decay, while U relaxes back toward the basic quasi-Keplerian profile. By contrast, the Rm=10^5 case shows an immediate dramatic increase in the magnetic energy, followed by saturation to a statistically steady state. After some quite substantial transient adjustments, the flow also equilibrates to a statistically steady state.

Figure <ref> shows snapshots of U and B for the Rm=10^5 dynamo. Both are seen to exhibit strong small-scale structures in all three directions. The meridional slices indicate a certain concentration of structures toward the inner boundary, but otherwise no clearly discernible boundary layer structure. As one would expect based on these snapshots, the energy spectra are not concentrated at the largest scales, but instead contain substantial energy out to quite large wavenumbers. As shown in Figure <ref>, the spectra in both z and ϕ are almost flat over one or even two orders of magnitude in wavenumber before dropping off. Closely related to these spectra are the lengthscales

l_U^2 = ∫ U^2 dV / ∫ (∇× U)^2 dV,  l_B^2 = ∫ B^2 dV / ∫ (∇× B)^2 dV,

where the integrals are over the entire volume. The instantaneous values from the end of the run give l_U=4.5·10^-2 and l_B=9.8·10^-3, broadly consistent with the spectra in Figure <ref>, as well as the O(Rm^-1/2) lengthscale on which a small-scale dynamo would be expected to operate <cit.>. Note also that the diffusive timescales corresponding to these lengthscales, Re· l_U^2 ≈ 20 and Rm· l_B^2 ≈ 10, are both very short compared with the t=1500 integration time in Figure <ref>, providing further evidence that this is indeed a permanent dynamo and not just remaining transients; these order-of-magnitude estimates are easily reproduced, as sketched below.
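A minimal sketch, using only the numbers quoted in the text:

    Re, Rm = 1.0e4, 1.0e5          # Reynolds numbers of the dynamo run
    l_U, l_B = 4.5e-2, 9.8e-3      # instantaneous lengthscales from the run

    # Expected small-scale dynamo lengthscale, O(Rm**(-1/2))
    print(Rm ** -0.5)              # ~3.2e-3, same order as l_B

    # Diffusive timescales on these lengthscales, in units of Omega_i^(-1)
    print(Re * l_U**2)             # ~20
    print(Rm * l_B**2)             # ~10, both << the t = 1500 integration time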
A quantity of particular interest in the accretion disk context is the associated outward angular momentum transport, since it is this which determines the rate at which matter actually accretes onto the central object. In the Taylor-Couette context this angular momentum transport is very easily quantified simply by considering the torques on the inner and outer cylinders (where the time-averaged torques are necessarily equal and opposite in a statistically steady state). Figure <ref> shows how the normalized torque (that is, scaled by its value for the non-magnetic laminar U_ϕ-only profile) evolves in time. For Rm=10^4 it very quickly tends to one, consistent with the result in Figure <ref> that U simply relaxes back toward the basic state. By contrast, for Rm=10^5 there are again substantial initial transients, but eventually the torque settles to O(10) times the laminar value.

There are two other points worth mentioning in comparison with Figure <ref>. First, whereas in Figure <ref> the kinetic energy ultimately settled to a value quite similar to the initial condition with the imposed field, the torque settles to values around twice as large as for the initial condition. Second, after seemingly settling in, the torque exhibits fluctuations of similar intensity to either the kinetic or magnetic energies, if values are normalized about the maximum.

These torque results already indicate that the turbulent, magnetic state at Rm=10^5 is very effective at angular momentum transport. A way of further quantifying this is shown in Figure <ref>a, where we compute the Maxwell, Reynolds and viscous stresses. We see that throughout the bulk of the interior the Maxwell stress dominates, with the Reynolds stress accounting for only around 10%. This agrees with the point noted earlier that the magnetorotational instability works precisely by harnessing the tension in magnetic field lines to transport angular momentum.

Next, Figure <ref> shows how the presence of turbulence modifies the time-averaged U_ϕ profile, and how it compares with the original quasi-Keplerian profile. The presence of boundary layers is as expected; since the torque right at the boundaries is purely viscous, if the torques in Figure <ref> are ∼10 times greater than the laminar values, then the gradients dΩ/dr at the boundaries must also be greater by the same amount. Note also how the viscous stresses in Figure <ref>a dominate within the boundary layers. It is not entirely clear though why the solution adjusts to have U_ϕ so uniform in the interior, as opposed to the angular velocity Ω (or some intermediate quantity U_ϕ/r^q, with 0<q<1). See also <cit.>, who explore related issues in non-magnetic Taylor-Couette flows.

Returning finally to our original motivation in terms of astrophysical accretion disks, we note that at least some aspects of the nonlinear equilibration must be quite different there from what is seen in Figure <ref>. In particular, in accretion disks the angular velocity profile cannot possibly deviate as strongly from the basic Keplerian profile; if the mass of the central object is dominant, its gravity will always enforce a profile extremely close to Ω∼ r^-3/2. This motivated us to perform an additional calculation in which the azimuthally and axially averaged flow profile was forced to remain identical to the laminar quasi-Keplerian profile, by adding a suitable body force in the Navier-Stokes equation which always drives these components of U back to the desired flow profile. (Numerically this is done by simply not time-stepping those components of U.) The dynamo continued to exist in this calculation, and even has levels of Maxwell and Reynolds stresses broadly similar to the previous ones, as seen in Figure <ref>b.
(For such a forced flow there are also stresses arising from the additional body force, but these do not affect the relative Maxwell, Reynolds and viscous contributions.)These forced-flow results suggest that the dynamo presented here is not specific to Taylor-Couette flows, but is instead generic to any Rayleigh-stable differential rotation flows, including those in accretion disks. There are of course other important differences between Taylor-Couette geometry and accretion disks, including the boundaries in both radial and axial directions, stratification, compressibility, etc. <cit.>. Nevertheless, the generic existence of such a dynamo in Rayleigh-stable flows suggests that accretion disks can operate with self-sustained magnetic fields, without relying on fields emanating from the central object (contrary to the shearing-box results of <cit.>, who suggested that angular momentum transport is negligible in the absence of externally imposed axial fields). Further work will map out the full range in parameter space where this dynamo exists, and quantify how the angular momentum transport varies with Re and Rm, as well as the axial length L_z, which has been shown to be an important parameter in shearing-box simulations <cit.>.This work was supported by the German Research Foundation (DFG) grant AV 120/1-1, and the National Science Foundation grant NSF PHY-1125915. Computing time from the North-German Supercomputing Alliance (HLRN) is also gratefully acknowledged. Roberts P. H. Roberts and E. M. King, Rep. Prog. Phys. 76, 096801 (2013). Charbonneau P. Charbonneau, Ann. Rev. Astron. Astrophys. 52, 251 (2014). Widrow L. M. Widrow, Rev. Mod. Phys. 74, 775 (2002). Brandenburg1 A. Brandenburg and K. Subramanian, Phys. Rep. 417, 1 (2005). Balbus1 S. A. Balbus, Annu. Rev. Astron. Astrophys. 41, 555 (2003). Rayleigh Lord Rayleigh, Proc. R. Soc. A 93, 148 (1917). Ji1 H. Ji, M. Burin, E. Schartman and J. Goodman, Nature 444, 343 (2006). Edlund E. M. Edlund and H. Ji, Phys. Rev. E 89, 021004 (2014). Lesur G. Lesur and P. Y. Longaretti, Astron.Astrophys. 444, 25 (2005). Avila M. Avila, Phys. Rev. Lett. 108, 124501 (2012). Ostilla R. Ostilla-Mónico, R. Verzicco, S. Grossman, and D. Lohse, J. Fluid Mech. 748, R3 (2014). Lopez J. M. Lopez and M. Avila, J. Fluid Mech. 817, 21 (2017). Shi L. Shi, B. Hof, M. Rampp and M. Avila, Phys. Fluids 29, 044107 (2017). Velikhov E. P. Velikhov, Sov. Phys. JETP 36, 995 (1959). Balbus2 S. A. Balbus and J. F. Hawley, Astrophys. J. 376, 214 (1991). Rudiger G. Rüdiger and Y. Zhang, Astron. Astrophys. 378, 302 (2001). Ji2 H. T. Ji, J. Goodman, and A. Kageyama, Mon. Not. R. Astron. Soc. 325, L1 (2001) Hollerbach1 R. Hollerbach and G. Rüdiger, Phys. Rev. Lett. 95, 124501 (2005). Stefani F. Stefani, T. Gundrum, G. Gerbeth, G. Rüdiger, M. Schultz, J. Szklarski and R. Hollerbach, Phys. Rev. Lett. 97, 184502 (2006). Flanagan K. Flanagan, M. Clark, C. Collins, C. M. Cooper, I. V. Khalzov, J. Wallace and C. B. Forest, J. Plasma Phys. 81, 345810401 (2015). Christensen U. Christensen, P. Olson and G. A. Glatzmaier, Geophys. J. Int. 138, 393 (1999). Ponty Y. Ponty, J.-P. Laval, B. Dubrulle, F. Daviaud and J.-F. Pinton, Phys. Rev. Lett. 99, 224501 (2007). Krstulovic G. Krstulovic, G. Thorner, J.-P. Vest, S. Fauve and M. Brachet, Phys. Rev. E 84, 066318 (2011). Brandenburg2 A. Brandenburg, A. Nordlund, R. F. Stein and U. Torkelsson, Astrophys. J. 446, 741 (1995). Hawley J. F. Hawley, C. F. Gammie and S. A. Balbus, Astrophys. J. 464, 690 (1996). Fromang S. Fromang, J. Papaloizou, G. 
Lesur and T. Heinemann, Astron. Astrophys. 476, 1123 (2007). Yousef T. A. Yousef, T. Heinemann, F. Rincon, A. A. Schekochihin, N. Kleeorin, I. Rogachevskii, S. C. Cowley and J. C. McWilliams, Astron. Nachr. 329, 737 (2008). Johansen A. Johansen and Y. Levin, Astron. Astrophys. 490, 501 (2008). Riols A. Riols, F. Rincon, C. Cossu, G. Lesur, P.-Y. Longaretti, G. I. Ogilvie and J. Herault, J. Fluid Mech. 731, 1 (2013). Kunz M. W. Kunz, J. M. Stone and E. Quataert, Phys. Rev. Lett. 117, 235101 (2016). Nauman F. Nauman and M. E. Pessah, Astrophys. J. 833, 187 (2016). Willis A. P. Willis and C. F. Barenghi, Astron. Astrophys. 393, 339 (2002). Nore1 C. Nore, J.-L. Guermond, R. Laguerre and J. Léorat, Phys. Fluids 24, 094106 (2012). Gissinger C. Gissinger, Phys. Fluids 26, 044101 (2014). Nore2 C. Nore, D. Castanon Quiroz, L. Cappanera and J.-L. Guermond, EPL 114, 65002 (2016). Gailitis A. Gailitis, O. Lielausis, S. Dement'ev, E. Platacis, A. Cifersons, G. Gerbeth,T. Gundrum, F. Stefani, M. Christen, H. Hänel and G. Will, Phys. Rev. Lett.84, 4365 (2000). Monchaux R. Monchaux, M. Berhanu, M. Bourgoin, M. Moulin, P. Odier, J.-F. Pinton, R. Volk,S. Fauve, N. Mordant, F. Pétrélis, A. Chiffaudel, F. Daviaud, B. Dubrulle,C. Gasquet, L. Marié and F. Ravelet, Phys. Rev. Lett. 98, 044502 (2007). Guseva1 A. Guseva, A. P. Willis, R. Hollerbach and M. Avila, New J. Phys. 17, 093018 (2015). Guseva2 A. Guseva, R. Hollerbach, A. P. Willis and M. Avila, Magnetohydrodynamics, 53, 25 (2017). Guseva3 A. Guseva, A. P. Willis, R. Hollerbach and M. Avila, arXiv:1705.03785 Hollerbach2 R. Hollerbach, V. Teeluck and G. Rüdiger, Phys. Rev. Lett. 104, 044502 (2010). Seilmayer M. Seilmayer, V. Galindo, G. Gerbeth, T. Gundrum, F. Stefani, M. Gellert, G. Rüdiger, M. Schultz and R. Hollerbach, Phys. Rev. Lett. 113, 024505 (2014). Dormy Mathematical Aspects of Natural Dynamos, edited by E. Dormy and A. M. Soward (CRC Press, London, 2007). Brauckmann H. J. Brauckmann and B. Eckhardt, J. Fluid Mech. 815, 149 (2017). Pessah M. E. Pessah, C. Chan and D. Psaltis, Astrophys. J. 668, L51 (2007).
http://arxiv.org/abs/1708.07695v2
{ "authors": [ "Anna Guseva", "Rainer Hollerbach", "Ashley P. Willis", "Marc Avila" ], "categories": [ "physics.flu-dyn" ], "primary_category": "physics.flu-dyn", "published": "20170825113635", "title": "Dynamo Action in a Quasi-Keplerian Taylor-Couette Flow" }
Wireless Access for Ultra-Reliable Low-Latency Communication (URLLC): Principles and Building Blocks

Petar Popovski^1, Jimmy J. Nielsen^1, Čedomir Stefanović^1, Elisabeth de Carvalho^1, Erik Ström^2, Kasper F. Trillingsgaard^1, Alexandru-Sabin Bana^1, Dong Min Kim^1, Radoslaw Kotaba^1, Jihong Park^1, René B. Sørensen^1

^1 Dept. of Electronic Systems, Aalborg University, 9220 Aalborg, Denmark. Emails: {petarp,jjn,cs,edc,kft,asb,dmk,rak,jihong,rbs}@es.aau.dk
^2 Dept. of Electrical Engineering, Chalmers Univ. of Technology, 412 96 Gothenburg, Sweden. Email: [email protected]

December 30, 2023
=====================================================================================================================================

Ultra-reliable low latency communication (URLLC) is an important new feature brought by 5G, with a potential to support a vast set of applications that rely on mission-critical links. In this article, we first discuss the principles for supporting URLLC from the perspective of the traditional assumptions and models applied in communication/information theory. We then discuss how these principles are applied in various elements of the system design, such as the use of various diversity sources, the design of packets, and the access protocols. The important messages are that there is a need to optimize the transmission of signaling information, as well as a need for a lean use of various sources of diversity.

URLLC, 5G, diversity, access protocols

§ INTRODUCTION

The big difference between 5G and the previous generations of mobile wireless systems is that 5G is natively addressing two generic modes of Machine-Type Communications (MTC): Ultra-Reliable Low-Latency Communication (URLLC) and massive MTC (mMTC). URLLC is arguably the most innovative feature brought in 5G, as it will be used for mission-critical communications, like reliable remote action with robots or coordination among vehicles. Ultra-reliable communication <cit.> is potentially an enabler of a vast set of applications, some of which are yet unknown. To put this in perspective, wireless connectivity and embedded processing have significantly transformed many products by expanding functionality and transcending the traditional product boundaries <cit.>; e.g., a product stays connected to its manufacturer through its lifetime for maintenance and update. Ultra-reliable connectivity brings this transformation to the next level: once a system designer can safely assume that wireless connectivity is "truly anywhere and anytime", e.g. guaranteed > 99.999% of the time, the approach to system design and operation changes fundamentally. An example is Industrie 4.0, where different parts of an object or a machine need not be physically attached as long as they can use mission-critical ultra-reliable links to work in concert towards accomplishing a production task.

In this paper, we first describe the principles for achieving wireless URLLC, relating them to the traditional assumptions in information and communication theory and elaborating why a new view is required.
We then describe several important building blocks of a wireless communication system for supporting URLLC connections: framing/packetization, use of diversity, network topology and access protocols. The objective of this article is to describe their properties as essential ingredients in practically any URLLC solution, rather than combining them into a full proposal.

§ COMMUNICATION-THEORETIC PRINCIPLES FOR URLLC

The simple, but seminal communication model by Shannon <cit.> captures the essential stochastic nature of a communication system. The key information-theoretic result is that, given sufficiently long time and sufficiently many communication channel uses, one can obtain an almost deterministic, error-free data transmission whose rate is dictated by the channel capacity. Here "sufficiently many" means that the law of large numbers (LLN) averages out the stochastic variations. This is challenged in URLLC in at least three aspects:

* Due to the latency constraints, the number of available channel uses is limited, such that the LLN cannot be put to work and offer arbitrarily high reliability.
* Transmission of the actual data is only one ingredient of the whole communication protocol, which involves transmission/exchange of metadata as well as other auxiliary procedures, such as channel estimation, packet detection, additional protocol exchanges, etc.
* The performance that can be guaranteed depends on the model used during the design, and URLLC requires that the models are considered in regimes not treated previously (e.g. very rare events).

In the following we elaborate on the principles to address these aspects, which set the basis for the building blocks and the associated research challenges.

§.§ Latency Constraints

Latency is defined as the delay a packet (containing a certain number of data bits) experiences from the ingress of a protocol layer at the transmitter to the egress of the same layer at the receiver. Some packets will be dropped, i.e., never delivered, due to buffer overflows, synchronization failures, etc. Moreover, we assume that packets that are decoded in error are also dropped, either by the protocol itself or by higher layers.

Using the convention that dropped packets have infinite latency, we can define the reliability as the probability that the latency does not exceed a prescribed deadline. Fig. <ref> shows the generic requirement in terms of latency and reliability, applicable not only to a point-to-point link, but also to an arbitrary communication setup. The exact numbers on the deadline and the reliability are application dependent. We note that the latency cumulative distribution function (CDF) asymptote is equal to 1-P_e, where P_e is the probability of packet drop or packet error; the sketch below illustrates both definitions.
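Reliability at a given deadline is simply the empirical latency CDF evaluated at that deadline, with dropped packets entering as infinite latency. A minimal sketch in Python; the latency samples are purely illustrative:

    import numpy as np

    def reliability(latencies_ms, deadline_ms):
        """Fraction of packets delivered no later than the deadline.
        Dropped packets are encoded as np.inf (infinite latency)."""
        lat = np.asarray(latencies_ms, dtype=float)
        return float(np.mean(lat <= deadline_ms))

    # Toy sample: most packets take ~5 ms, one is slow, one is dropped.
    samples = [4.8, 5.1, 5.0, 23.0, 4.9, np.inf, 5.2, 5.0, 4.7, 5.3]

    print(reliability(samples, deadline_ms=10.0))  # 0.8 at a 10 ms deadline
    print(np.mean(np.isfinite(samples)))           # 0.9 = 1 - P_e, the CDF asymptote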
Clearly, high reliability implies low P_e, but the opposite is not necessarily true, as in URLLC we need to achieve low P_e in a time duration limited by the deadline. The number of available channel uses is (approximately) proportional to the product of the time duration and the bandwidth of the transmitted signal. Hence, by increasing the bandwidth, we obtain two advantages: more available channel uses and (typically) more frequency diversity. Increasing bandwidth enables us to decrease the channel use time duration, or to keep the duration fixed and to increase the number of channel uses in frequency. The trade-offs arising in relation to the definition of time-frequency resources are captured in the flexible numerology used to design the 5G frames <cit.>.

It is important to note a conceptual difference between increasing the channel uses in time vs. frequency. Assume that Alice sends a packet to Bob by using a common packet structure in which data is preceded by metadata. Let the packet transmission consume N = N_M + N_D channel uses, where N_M are intended for metadata and N_D for data. If the N_M metadata channel uses precede the data channel uses, then after decoding the metadata, Bob can decide whether to continue to decode the data from the remaining N_D channel uses (if he is the intended recipient of the data) or to shut down the receiver and save energy. Bob cannot save energy in the same way if these N channel uses occur in parallel in frequency, as he needs to receive all symbols before deciding if the packet is intended for him. This follows the intuition that higher reliability necessarily leads to higher energy expenditure.

Besides frequency, URLLC can rely on other types of diversity, such as access point diversity due to densification, spatial diversity due to a massive number of antennas, and interface diversity. Further elaboration on these is given in Section <ref>.

§.§ Metadata, Auxiliary Procedures and Protocol Exchanges

The capacity results of information theory implicitly assume that when Alice transmits data to Bob, both of them know that the transmission is taking place, as well as when it starts and ends. In practice, this information needs to be conveyed through transmission of metadata (control information). When the size of the data is much larger than the metadata, as in the classical information-theoretic setup, the amount of resources (channel uses) spent on sending metadata is negligible. Moreover, it is assumed that the number of channel uses for metadata N_M is sufficiently large to guarantee high reliability, while it still holds that N_M ≪ N_D. This does not hold in URLLC, since the data size is often small and comparable to the metadata size, and one explicitly needs to optimize the coding/transmission of metadata.

Further, considering the high reliability levels treated in URLLC, such as e.g. >99.999%, one can no longer assume that the metadata transmission, as well as all auxiliary procedures, are perfectly reliable. To illustrate this, consider that the probability of success for a given data packet p, denoted by P_S(p), is a product of the success probabilities for the data P_S(D), metadata P_S(M), and the auxiliary procedures P_S(A):

P_S(p) = P_S(A) P_S(M) P_S(D).

This calculation assumes that each procedure is executed independently of the others and, thus, each of them is designed separately to take place over dedicated communication resources. However, in principle, one can gather all communication resources and apply a joint design of the three elements: auxiliary (A), metadata (M) and data (D). Denote the highest probability of success that can be obtained in that case by Q_S(p,AMD). Clearly, Q_S(p,AMD) ≥ P_S(p), since P_S(p) is obtained by using a specific instance in which A, M, and D are separated. Why then do we not always use joint design of A, M, and D?
This is due to the layered approach to communication system design, but also due to energy consumption, as discussed in relation to frequency diversity. Namely, in the common system design, the decoding of data and metadata is causally dependent on the successful completion of the auxiliary procedures (e.g. detection that the packet is there), and, likewise, the decoding of data is causally dependent on the successful decoding of metadata. If the receiver Bob detects that there is a packet, it proceeds to decode the metadata and, if the packet is relevant, to decode the data. When A, M and D are not separated by design, then Bob needs to perform all the decoding steps and spend energy, although the received data may not be relevant for him.

The above discussion concerns a single packet transmission from Alice to Bob. However, communication protocols often use multiple exchanges between the communicating parties. One example is user authentication. As another example, if Bob is a base station (BS) to which Alice wants to transmit, then Bob should send a packet that grants access to Alice, such that Alice can send her data[This type of coordinated access is used in, for example, 3GPP cellular systems, but not in typical Wi-Fi deployments.]. In a simple example, assume that Bob needs to grant access to Alice via packet p_1, Alice sends her packet p_2 to Bob, and finally Bob sends an acknowledgement p_3. The probability of success for the packet p_i is denoted by P_S(p_i), incorporating in it the data and the auxiliary procedures. Here we cannot do the trick of jointly encoding p_1, p_2, and p_3, since each of them is sent by a different party! The overall probability of success is:

P_S = P_S(p_1) P_S(p_2) P_S(p_3),

such that every additional protocol step decreases the overall reliability. This has been noticed by researchers, giving rise to grant-free access protocols, see Section <ref>. This simple analysis also shows that a systematic redesign of the protocols is required when considering the ultra-reliability regime. The design of packets and access protocols in the URLLC regime is further discussed in Sections <ref> and <ref>.
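To make the erosion of reliability concrete: with independent steps the success probabilities simply multiply, as in the two expressions above. A minimal sketch; the per-step values are illustrative, not from any measurement:

    def chain_success(*step_probs):
        """Success probability of independent steps executed in sequence,
        e.g. P_S(A)*P_S(M)*P_S(D), or P_S(p_1)*P_S(p_2)*P_S(p_3)."""
        p = 1.0
        for q in step_probs:
            p *= q
        return p

    # Separately designed A, M, D, each 'five nines' reliable:
    print(chain_success(0.99999, 0.99999, 0.99999))  # ~0.99997, no longer five nines

    # Three-message grant/data/ACK exchange, per-packet reliability 0.9999:
    print(chain_success(0.9999, 0.9999, 0.9999))     # ~0.9997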
§.§ Use of Appropriate Stochastic Models

The stochastic nature of communication systems means that Shannon-like stochastic models can be used to provide reliability guarantees, provided that the model accurately captures the statistics of all relevant factors. A communication engineer usually models "known unknowns"[Borrowed from the famous quote by D. Rumsfeld: "There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns – the ones we don't know we don't know."], but the challenge of URLLC is that it requires modeling of factors occurring very rarely (e.g. with probability of 10^-6) within the packet duration, if the target reliability is higher (e.g. outage probability in the same period < 10^-7). Hence, there is a need to consider factors that so far have been treated as "unknown unknowns" in wireless design and performance evaluation.

Specifically, consider a simple model for the signal at a single-antenna receiver:

y = hx + z + w,

where x is the transmitted signal, h is the channel coefficient, z is the noise, and w is the interference. By selecting licensed spectrum, the designer makes w a known unknown. The noise z is there to represent the stochastic fluctuations, but it is still a known unknown, as its variance is upper-bounded. If h is known, w=0, and the noise is Gaussian, we get the classical Gaussian channel often used to benchmark coding and transmission techniques. However, accurate knowledge of h or its (tail) statistics is critical for URLLC. Using very conservative estimates for the random factors h, z, and w in order to guarantee high reliability may lead to very large margins in terms of transmission power or infrastructure. Therefore, proper stochastic models of the wireless environment are crucial for making URLLC affordable.

§ FRAMING AND PACKETIZATION

As discussed above, when the sizes of the preamble, the metadata, and the data are comparable, it is no longer obvious that the conventional frame or packet structure is close to optimal. In this regime, the channel capacity becomes an inaccurate metric for assessing the blocklength necessary to achieve a certain reliability. Instead, an essential quantity is the maximum coding rate, for which nonasymptotic bounds and approximations have been developed in <cit.> (and references therein). For AWGN channels, the key result from these works states that the maximum coding rate is subject to a back-off from the capacity that is approximately inversely proportional to the square root of the blocklength.

A recent study <cit.> has shown that for cases when CSI is unknown and the channel uses are limited, there is an optimal size of the preamble used for CSI acquisition, which depends on the reliability requirement, SNR, frame length and data rate. This suggests that there might also be an optimal trade-off between the amount of channel uses used for detection and decoding, which may also depend on the reliability requirement, SNR and the available channel uses. In case the optimal size for the preamble becomes considerably large, joint encoding of training symbols and data symbols could prove to be a more suitable alternative for achieving the latency-reliability requirements. This is in the spirit of the joint design of the auxiliary procedures and data/metadata, discussed before.

Furthermore, the insights gained from finite blocklength information theory also allow for rethinking the frame structure in multiuser systems. Here we show how this structure can be changed for downlink transmissions to URLLC devices. Specifically, consider a wireless system serving multiple URLLC devices with short packets using TDMA. The BS serves the devices in frames with the aim of delivering independent messages to each device with a certain reliability. In the conventional approach, depicted in Fig. <ref>(a), the BS encodes each message into a separate packet and organizes the packets in a frame with a header containing pointers to each packet. This approach is optimal from an information-theoretic perspective when the messages are large, because each message can be encoded with a rate close to the channel capacity. When the messages are small, however, the results from finite blocklength information theory imply that the rate is subject to a back-off from the channel capacity that is inversely proportional to the square root of the packet length, which is a significant penalty for short packet communications.

As an alternative to the conventional frame structure for downlink broadcast, the transmitter can jointly encode all messages into one packet, thereby leveraging the improved achievable rates when encoding larger messages. As a result, all messages can be delivered with the same reliability, but with a shorter frame. We depict this approach in Fig. <ref>(b).
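The magnitude of the gain can be gauged with the normal approximation of the maximum coding rate, k ≈ nC - √(nV) Q^{-1}(ε). The sketch below compares the total blocklength of per-device packets against one jointly encoded packet over a complex AWGN channel; the SNR, message size, and number of devices are illustrative assumptions, not values from the referenced works:

    import numpy as np
    from scipy.stats import norm

    def blocklength(bits, snr, eps):
        """Channel uses needed for `bits` at error probability `eps` over a
        complex AWGN channel, via k ~ n*C - sqrt(n*V)*Qinv(eps)."""
        c = np.log2(1.0 + snr)                                      # capacity
        v = snr * (snr + 2) / (snr + 1) ** 2 * np.log2(np.e) ** 2   # dispersion
        a = norm.isf(eps) * np.sqrt(v)
        x = (a + np.sqrt(a ** 2 + 4 * c * bits)) / (2 * c)          # x = sqrt(n)
        return x ** 2

    K, b, snr, eps = 10, 256, 1.0, 1e-5     # 10 devices, 32-byte messages

    print(K * blocklength(b, snr, eps))     # ~3570 channel uses, separate packets
    print(blocklength(K * b, snr, eps))     # ~2840 channel uses, joint encoding

Even in this small example the jointly encoded frame is roughly 20% shorter, because the square-root back-off is paid once on a large message instead of K times on small ones.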
The approach is not uniformly better than the conventional one, though, as it requires that each device receives and decodes the full packet containing all messages. From a device perspective, this implies increased power consumption, which is not desirable for power-constrained devices. Hence, the two approaches can be considered the extremes of a trade-off between frame duration and power consumption at the devices. In this context, finite blocklength information theory can help in finding the optimal operating point on this trade-off curve, a problem addressed in greater detail in <cit.>.

§ TYPES OF DIVERSITY

Diversity with respect to paths can be achieved both by using multiple antennas and by using multiple communication interfaces and technologies. These complementary techniques are presented in the following.

§.§ Multi-Antenna Diversity

It is well understood that multiple antennas at the BS or at the terminals are instrumental in guaranteeing reliable and low-latency communications. The extreme number of spatial degrees of freedom present in massive MIMO can potentially be a significant contributor to URLLC. Indeed, the remarkable properties of massive MIMO that are tailored for URLLC are:

* Very high SNR links.
* Quasi-deterministic links, quasi-immune to fading.
* Extreme spatial multiplexing capability.

The first property occurs due to the array gain. Along with the second property, it relaxes the need for strong coding schemes, hence maintaining high reliability for shorter packets, and it can dramatically reduce retransmission occurrences. The second and third properties are each grounded in the ability of multiple antennas to create spatial diversity paths. With hundreds of antennas at a massive BS, hundreds of spatial diversity paths can be created, if the propagation channel offers enough scattering. In practice, if the propagation channel provides on the order of tens of diversity paths, this is sufficient to offer statistically stable links.

Nevertheless, the benefits of massive MIMO are conditioned on the acquisition of the instantaneous channel state information (CSI), particularly at the massive BS. Using the terminology from Section <ref>, massive MIMO is critically dependent on the reliability and latency of the auxiliary procedures. In a mobile environment constrained by the channel coherence time as well as by extreme latency requirements, instantaneous CSI acquisition becomes the most severe limitation to achieving URLLC. In the general multi-device massive MIMO URLLC framework, reliability and latency are characterized by a trade-off between spatial diversity and multiplexing, as well as by the latency due to CSI acquisition and, possibly, to multiple-antenna processing.

§.§.§ Downlink: Beamforming Based on Channel Structure

Acquisition of the instantaneous CSI at the transmitter (CSIT) is a nontrivial task for multi-antenna systems. In FDD systems, it requires a feedback loop from the terminals, inducing a significant latency. In TDD, the latency can be reduced by exploiting channel reciprocity, but it remains critical. For URLLC it is preferable to depart from the conventional use of instantaneous CSIT, so the question is how to benefit from the large number of transmit antennas for downlink transmission. One solution consists of beamforming based on the multipath structure of the channel, which varies on a large time scale. This structure can be estimated via the covariance matrix of the vectorial received signal, from which directions of arrival or singular vectors are determined.
For example, a directional beam with an angular spread encompassing a subset of the directional propagation paths can be formed. This results in a less precise beam, sacrificing the SNR and thus the rate, but gaining in latency (short auxiliary procedure) and in robustness to serve multiple terminals. Furthermore, when multiple terminals are served in the downlink, designing a joint CSI acquisition procedure for all of them and adjusting the beams to the broadcast transmission parallels the ideas of joint data/metadata encoding discussed in the previous section.

§.§.§ Uplink: Coherent vs Non-Coherent Reception

Since the pilots are sent in the uplink along with the data, there is less delay involved in CSIR acquisition. Coherent multi-antenna processing can be employed at the receiver under low-mobility conditions <cit.>. Hence, the massive spatial multiplexing capabilities of massive MIMO can be exploited to accommodate massive connectivity in the uplink. In URLLC, the processing delay required to separate the signals from the different devices might become critical and has to be accounted for, especially when the number of multiplexed devices grows.

In high-mobility scenarios, non-coherent communications may offer better performance without requiring precise CSI, even in multi-device communications. In high-mobility or low-SNR scenarios, fulfilling the requirements of URLLC might require shifting to a basic TDMA system with non-coherent receivers based on energy detection (ED) <cit.>. In massive MIMO, ED has advantageous features, as the channel and noise energy become deterministic, offering stable performance.

Simulations of a single-input multiple-output (SIMO) system with 128 antennas at the receiver have been performed, where the mobility is modeled as an imperfection of the received channel estimate. Fig. <ref> shows that, under mobility, non-coherent ED improves the symbol error rate (SER) by a couple of orders of magnitude compared to coherent maximum-ratio combining (MRC). A mobility index σ=0 means no mobility, i.e. the channel coefficients do not change from the training symbol to the data symbol, whereas σ=1 means no correlation between the channel and the estimate.

§.§ Interface Diversity

Without intervening at the physical layer, diversity can be achieved through the use of multiple links and/or communication interfaces. Assuming that a URLLC application uses UDP, since the latency budget does not allow transport-layer retransmissions, multi-interface diversity is easily achieved by duplicating the application's data packets and transmitting them through sockets attached to different communication interfaces, e.g. LTE, HSPA, and Wi-Fi. Since the experienced latency is determined by the first arriving packet, interface diversity with packet duplication (PD) increases reliability and lowers latency.

The concept of multi-interface diversity relates to 3GPP's dual connectivity, introduced in LTE release 12. This technique allows for bearer- or packet-level splitting of traffic flows between a master eNB and a secondary eNB for enhanced throughput. Discussions are ongoing in 3GPP to enable data duplication for URLLC in multi-connectivity scenarios <cit.>.

An example of the achievable latency and reliability performance of multi-interface communication is shown in Fig. <ref>, depicting the performance of LTE, HSPA, and Wi-Fi in different single-link and PD configurations.
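Since the delivered latency under PD is the minimum over the per-interface latencies, the deadline-miss probability of the duplicated flow is the product of the per-interface miss probabilities (assuming independence). The following is a minimal sketch of this combination; the lognormal latency samples are hypothetical stand-ins for measured traces, not the measurements behind Fig. <ref>.

```python
# Minimal sketch of packet duplication (PD) over two interfaces: the first
# arriving copy determines the latency. Latency samples are hypothetical
# lognormal stand-ins for measurement traces; independence is assumed.
import numpy as np

rng = np.random.default_rng(0)
lte  = rng.lognormal(mean=np.log(20.0), sigma=0.6, size=100_000)  # ms
wifi = rng.lognormal(mean=np.log(8.0),  sigma=1.2, size=100_000)  # ms
pd   = np.minimum(lte, wifi)  # PD latency: first arriving duplicate wins

def reliability(latency_ms, deadline_ms):
    """Fraction of packets delivered within the deadline."""
    return np.mean(latency_ms <= deadline_ms)

for deadline in (10.0, 50.0, 100.0):
    print(f"deadline {deadline:5.1f} ms: "
          f"LTE {reliability(lte, deadline):.4f}, "
          f"Wi-Fi {reliability(wifi, deadline):.4f}, "
          f"PD {reliability(pd, deadline):.5f}")
```

The measurement-driven evaluation below applies the same principle by replaying recorded latency traces of the different technologies in parallel.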
The results are based on applying the different configurations in a simulation, where full-day measurement traces of the packet latency of the different technologies are played back simultaneously. The measurements were obtained on a typical weekday at the Aalborg University campus. While the LTE+Wi-Fi and HSPA+Wi-Fi PD configurations achieve very low latency (≤ 10 ms) at 0.9 reliability, both perform relatively poorly in the high-reliability domain (0.9999–0.99999), with latencies above 100 ms. In comparison, the LTE+HSPA and LTE+HSPA+Wi-Fi configurations achieve around 60 ms and 40 ms, respectively.

§ NETWORK TOPOLOGY

Another determining factor for URLLC is the way in which devices are connected, i.e. the network topology.

§.§ Base Station Densification

BS densification is important for achieving ubiquitous reliable connectivity, allowing users to obtain the best associations among their many neighboring BSs. This contributes to URLLC in three ways: (1) short association distances, (2) increased per-user resource allocation, and (3) multiple associations.

The decrease in the BS-user association distance mitigates the propagation loss, which is important for the most severely affected users. In the noise-limited regime, where the aggregate interference is negligible compared to the noise, network densification increases the desired signal power and improves the reliability. In the interference-limited regime, the short propagation distances increase not only the desired signal power but also the interference that may be generated by numerous neighboring BSs. Nevertheless, the increase in desired signal power dominates the increase in interference, owing to the path loss that follows a power law. Overall, network densification thereby increases the signal-to-interference-plus-noise ratio (SINR) for all users <cit.>.

Network densification also leads to resource reuse and increases the per-user resource allocation. This resource increment can be directly utilized for latency reduction. Alternatively, it can be dedicated to diversity for reliability enhancement.

Finally, network densification makes BSs more likely to have few or even no associated users within their coverage, especially in ultra-dense network setups where the BS density exceeds the user density. Such user-void BSs are expected to be in an idle state, not sending data signals for the sake of energy efficiency, but they may provide extra associations for the URLLC users. This, however, increases the downlink interference from the awakened BSs, which can be mitigated by cooperation between neighboring BSs. Consider two neighboring BSs that are interconnected through a high-speed backhaul, thanks to their short inter-BS distance after densification. In order to illustrate the concept, assume that the network features two types of users: low-latency users and latency-tolerant users. By exchanging data signals and association information, these two BSs can serve their users concurrently without incurring interference. This can be achieved by utilizing interference cancellation or by prioritizing the transmissions of the low-latency users <cit.>. Fig. <ref> shows its effectiveness in reducing the average latency.

§.§ Device-to-Device Communication

Traditional cellular communication follows an uplink-downlink topology, regardless of the end-devices' locations. However, LTE release 12 and 5G also support device-to-device (D2D) communications, where physically close devices, e.g. two vehicles, can communicate directly over a so-called sidelink.
Compared to regular uplink-downlink communication, D2D communication benefits from a shorter link distance and fewer hops, which is beneficial from a reliability perspective. Moreover, since the communication is direct, i.e. without intermediate nodes, D2D has the potential to provide very low latency.

§ ACCESS PROTOCOLS

Access networking represents a critical segment for the development of URLLC services in cellular networks. In this context, 3GPP follows the standard design approach described in Section <ref>, separately addressing the control-plane (i.e., metadata and auxiliary) procedures and the user-plane (data) procedures, foreseeing that 5G radio access should be able to provide URLLC services with an average control-plane latency of 10 ms, an average user-plane latency of 0.5 ms, and a reliability of 99.999% for 32-byte packets with a latency of up to 1 ms <cit.>. The performance of current cellular access networks is far from these goals <cit.>. Also, some of the verticals impose reliability and latency requirements that may challenge even the target 5G URLLC performance. For instance, factory automation, an important use case in Industrie 4.0, may, according to some sources, require a reliability of 1 - 10^-9 (!) with 0.5 ms of user-plane end-to-end latency <cit.>. Note that the user-plane end-to-end latency relates only to the (one-way) communication delay between the source-destination pair and, as such, is just a part of the cycle time in industrial applications; the cycle time is the delay from the issuing of a command by the controller until the feedback from the actuator is received, involving all processing, actuating, and sensing times <cit.>.

As noted in Section <ref>, the primary method to reduce latency in 5G radio access will be the use of novel numerology and the shortening of transmission times. In the downlink, latency could be further reduced by providing instant access to URLLC traffic at the expense of the service performance of the other traffic types. Indeed, 3GPP proposes a new unit of scheduling called the mini-slot (see Fig. <ref>), which can be flexibly configured to last between 1 and 6 OFDM symbols (while standard slots are 7 symbols long) <cit.>. Using mini-slots, arriving URLLC data can be immediately scheduled by the BS by preempting a portion of the eMBB data operating with traditional slot-level granularity.

On the other hand, supporting URLLC requirements in the uplink is rather challenging. Currently, uplink transmissions are subject to resource-reservation procedures with numerous stages and heavy signaling, which has a tremendous impact on latency and reliability, see Section <ref>. In this respect, ongoing work in 3GPP considers the design of control-plane procedures that exploit pre-established contexts between the BS and devices requiring URLLC service, by means of pre-configured/semi-persistent scheduling. However, such a solution is suitable for devices with predictable traffic patterns and otherwise exhibits low efficiency. Also, it applies only to the resources for the initial transmission, while any possible redundancy follows the standard, lengthy HARQ procedure. In the following, we outline two potential approaches to access protocol redesign for traffic patterns with less determinism.

§.§ Grant-Free Access

The main idea of grant-free access is to skip the reservation phase. This is a disruptive solution, which is by default random and non-orthogonal, involving collisions among users' transmissions.
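To see why skipping the reservation phase matters, recall the multiplicative reliability law from Section <ref>: every signaling stage of a grant-based handshake contributes its own success probability to the product, and a failed handshake must be retried, inflating both loss and latency. The following is a minimal sketch with hypothetical per-stage success probabilities and attempt durations; the numbers are assumptions for illustration only.

```python
# Minimal sketch (hypothetical numbers): a grant-based handshake with several
# signaling stages succeeds only if every stage succeeds, and a failed
# handshake is retried, inflating the expected completion time.
import numpy as np

stage_success = [0.999, 0.995, 0.999]   # grant, data, ACK (assumed values)
p_handshake = float(np.prod(stage_success))
print(f"single-attempt success probability: {p_handshake:.6f}")

# With independent retries, the number of attempts is geometrically distributed.
t_attempt_ms = 1.0                      # assumed duration of one attempt
expected_attempts = 1.0 / p_handshake
print(f"expected attempts: {expected_attempts:.4f}")
print(f"expected completion time: {expected_attempts * t_attempt_ms:.4f} ms")
```

Removing reservation stages shortens this product, but, as discussed next, it introduces collisions that must be handled by other means.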
Slotted ALOHA, the standard paradigm used for collision resolution, is not suitable for URLLC services due to its unfavorable latency/reliability trade-off. The principal approach to achieving reliability with low latency in the presence of collisions is again to rely on redundancy/diversity. For instance, a user could devote more bandwidth and/or power to a transmission than would be necessary if there were no collisions, thus making it more robust to interference. In such a way, multi-packet reception (MPR) can be achieved. Such an access scheme should also deal with collisions involving more transmissions than the MPR capability at the receiver. In this respect, a promising approach is to proactively transmit multiple packet replicas. Performance in this case can be further boosted using combining techniques and/or using successive interference cancellation (SIC) on the replicas. This approach is also useful to combat reception errors due to noise. Finally, a full-blown grant-free cellular uplink URLLC solution for the cases without any prior context between the BS and the devices should also deal with user activity detection/identification and with the lack of CSI. Novel approaches advocate the use of compressed sensing in this regard, where users prepend sequences to the transmitted data, which can be used both for activity detection and for channel estimation.

§.§ Coordinated Grant-Free Access

A compromise, tailored for the cases where the devices have a relatively high probability of activation, would be a protocol in which users undergo a scheduling procedure only once, followed by infrequent updates from the BS. The scheduling information could consist of a specific access pattern, or a seed to generate it, telling the user in which slots to transmit the packet and its replicas, without the need for prior scheduling. The advantage of such a solution is the simplified detection procedure at the BS compared to the fully random grant-free technique. Due to the limited amount of resources and to transmissions consisting of potentially several replicas per user, the access patterns cannot in general be orthogonal. In such a case, the coordinated grant-free technique could benefit from the MPR, combining, and SIC mechanisms described earlier. The BS should then assign the access patterns in such a way that these mechanisms are best exploited to satisfy the URLLC requirements. We conclude by noting that this approach is reminiscent of a CDMA system, where the users are assigned codes, i.e. access patterns, such that the mutual interference of the active users' transmissions is controlled.

§ CONCLUSIONS

We have formulated the communication-theoretic principles of URLLC, putting them into the perspective of the standard models for communication system design that are based on classical information/communication theory. Besides the transmission techniques applied to send data, ultra-high reliability brings into focus the methods to send metadata and to carry out the other auxiliary procedures, such as packet detection. For URLLC it is essential to invest in diversity, and the article has reviewed promising approaches for doing so in the domains of device and network architecture, as well as communication protocols. One of the key conclusions is that efficient support of URLLC requires accurate modelling and a rethinking of the classical assumptions applied in communication system engineering.
Future research directions should build on detailing the design of the building blocks and combining them towards a complete URLLC solution that corresponds to a use case, such as industrial automation. For example, combining packetization/framing with the beamforming mechanisms of massive MIMO leads to the study of the tradeoff between the reliability gains of the two blocks. The integration of various diversity sources with a latency-constrained access protocol is another example of a relevant research direction implied by this paper.

§ ACKNOWLEDGMENT

The work has partly been supported by the European Research Council (ERC Consolidator Grant nr. 648382 WILLOW), and partly by the Horizon 2020 project ONE5G (ICT-760809).

References

[1] P. Popovski, “Ultra-reliable communication in 5G wireless systems,” in Proc. 1st International Conference on 5G for Ubiquitous Connectivity (5GU). IEEE, 2014, pp. 146–151.
[2] M. E. Porter and J. E. Heppelmann, “How smart, connected products are transforming competition,” Harvard Business Review, vol. 92, no. 11, pp. 64–88, 2014.
[3] C. E. Shannon, “A mathematical theory of communication, part I, part II,” Bell Syst. Tech. J., vol. 27, pp. 623–656, 1948.
[4] K. Pedersen, F. Frederiksen, G. Berardinelli, and P. Mogensen, “A flexible frame structure for 5G wide area,” in Proc. IEEE 82nd Vehicular Technology Conference (VTC Fall). IEEE, 2015, pp. 1–5.
[5] Y. Polyanskiy, H. V. Poor, and S. Verdú, “Channel coding rate in the finite blocklength regime,” IEEE Trans. Inf. Theory, vol. 56, no. 5, pp. 2307–2359, May 2010.
[6] G. Liva, G. Durisi, M. Chiani, S. S. Ullah, and S. C. Liew, “Short codes with mismatched channel state information: A case study,” arXiv:1705.05528 [cs.IT], May 2017.
[7] K. F. Trillingsgaard and P. Popovski, “Downlink transmission of short packets: Framing and control information revisited,” IEEE Trans. Commun., vol. 65, no. 5, pp. 2048–2061, Feb. 2017.
[8] S. R. Panigrahi, N. Björsell, and M. Bengtsson, “Feasibility of large antenna arrays towards low latency ultra reliable communication,” in Proc. IEEE Int. Conf. on Industrial Technology (ICIT), Mar. 2017.
[9] L. Jing, E. D. Carvalho, P. Popovski, and Á. O. Martínez, “Design and performance analysis of noncoherent detection systems with massive receiver arrays,” IEEE Trans. Signal Process., vol. 64, no. 19, pp. 5000–5010, Oct. 2016.
[10] 3GPP, “Service requirements for the 5G system,” 3rd Generation Partnership Project (3GPP), TS 22.261 v16.1.0, Sep. 2017.
[11] J. Park, D. Kim, P. Popovski, and S.-L. Kim, “Revisiting frequency reuse towards supporting ultra-reliable ubiquitous-rate communication,” in Proc. IEEE WiOpt Wksp. SpaSWiN, Paris, France, May 2017.
[12] D. M. Kim, H. Thomsen, and P. Popovski, “On a user-centric base station cooperation scheme for reliable communications,” in Proc. IEEE VTC 2017 Spring, Jun. 2017.
[13] 3GPP, “TR 38.913 V14.3.0,” Tech. Rep., Jun. 2017.
[14] A. Osseiran et al., “Manufacturing reengineered: robots, 5G and the Industrial IoT,” Ericsson Business Review, issue 4, 2015, accessed on 25.08.2017. [Online]. Available: <https://www.ericsson.com/assets/local/publications/ericsson-business-review/issue-4–2015/ebr-issue4-2015-industrial-iot.pdf>
[15] 3GPP, “TR 38.802 V14.1.0,” Tech. Rep., Jun. 2017.
Wireless networked control systems (WNCS) are composed of spatially distributed sensors, actuators, and controllers communicating through wireless networks instead of conventional point-to-point wired connections. Due to their main benefits in the reduction of deployment and maintenance costs, large flexibility, and possible enhancement of safety, WNCS are becoming a fundamental infrastructure technology for critical control systems in automotive electrical systems, avionics control systems, building management systems, and industrial automation systems. The main challenge in WNCS is to jointly design the communication and control systems, considering their tight interaction, to improve the control performance and the network lifetime. In this survey, we make an exhaustive review of the literature on wireless network design and optimization for WNCS. First, we discuss what we call the critical interactive variables, including sampling period, message delay, message dropout, and network energy consumption. The mutual effects of these communication and control variables motivate their joint tuning. We discuss the effect of controllable wireless network parameters at all layers of the communication protocols on the probability distribution of these interactive variables. We also review the current wireless network standardization for WNCS and the corresponding methodologies for adapting the network parameters. Moreover, we discuss the analysis and design of control systems that take into account the effect of the interactive variables on the control system performance. Finally, we present the state of the art in wireless network design and optimization for WNCS, while highlighting the tradeoff between the achievable performance and the complexity of various approaches. We conclude the survey by highlighting major research issues and identifying future research directions.

Index Terms—wireless networked control systems, wireless sensor and actuator networks, joint design, delay, reliability, sampling rate, network lifetime, optimization.

Wireless Network Design for Control Systems: A Survey

Pangun Park, Sinem Coleri Ergen, Carlo Fischione, Chenyang Lu, and Karl Henrik Johansson

P. Park is with the Department of Radio and Information Communications Engineering, Chungnam National University, Korea (e-mail: ). S. Coleri Ergen is with the Department of Electrical and Electronics Engineering, Koc University, Istanbul, Turkey (e-mail: ). C. Lu is with the Department of Computer Science and Engineering, Washington University in St. Louis, St. Louis, USA (e-mail: ). C. Fischione and K. H. Johansson are with the ACCESS Linnaeus Center, Electrical Engineering, Royal Institute of Technology, Stockholm, Sweden (e-mail: ). P. Park and S. Coleri Ergen contributed equally to this work.
§ INTRODUCTION

Recent advances in wireless networking, sensing, computing, and control are revolutionizing how control systems interact with information and physical processes, as in Cyber-Physical Systems (CPS), the Internet of Things (IoT), and the Tactile Internet <cit.>. In Wireless Networked Control Systems (WNCS), sensor nodes attached to the physical plant sample and transmit their measurements to the controller over a wireless channel; controllers compute control commands based on these sensor data, which are then forwarded to the actuators in order to influence the dynamics of the physical plant <cit.>. In particular, WNCS are strongly related to CPS and the Tactile Internet, since these emerging techniques deal with the real-time control of physical systems over networks. There is a strong technology push behind WNCS through the rise of embedded computing, wireless networks, advanced control, and cloud computing, as well as a pull from emerging applications in automotive <cit.>, avionics <cit.>, building management <cit.>, and industrial automation <cit.>. For example, WNCS play a key role in Industry 4.0 <cit.>. The ease of installation and maintenance, large flexibility, and increased safety make WNCS a fundamental infrastructure technology for safety-critical control systems. WNCS applications have been backed by several international organizations such as the Wireless Avionics Intra-Communications Alliance <cit.>, the Zigbee Alliance <cit.>, the Z-Wave Alliance <cit.>, the International Society of Automation <cit.>, the Highway Addressable Remote Transducer (HART) Communication Foundation <cit.>, and the Wireless Industrial Networking Alliance <cit.>.

WNCS require novel design mechanisms to address the interaction between control and wireless systems for maximum overall system performance and efficiency. Conventional control system design is based on the assumption of instantaneous delivery of sensor data and control commands with extremely high reliability. The use of wireless networks for data transmission introduces non-zero delays and message error probabilities at all times. Transmission failures or deadline misses may result in the degradation of the control system performance, and even in serious economic losses or reduced human safety. Hence, control system design needs to include mechanisms to tolerate message loss and delay. On the other hand, wireless network design needs to consider the strict delay and reliability constraints of control systems. The data transmissions should be sufficiently reliable and deterministic, with latencies on the order of seconds, or even milliseconds, depending on the time constraints of the closed-loop system <cit.>.
Furthermore, removing the cables for the data communication of sensors and actuators motivates the removal of the power supply to these nodes to achieve full flexibility. The limited stored battery or harvested energy of these components imposes an additional limitation on the energy consumption of the wireless network <cit.>.

The interaction between wireless networks and control systems can be illustrated by an example. A WNCS connects sensors attached to a plant to a controller via the single-hop wireless networking protocol IEEE 802.15.4. Fig. <ref> shows the control cost of the WNCS using the IEEE 802.15.4 protocol for different sampling periods, message delays, and message loss probabilities <cit.>. The quadratic control cost is defined as a sum of the deviations of the plant state from its desired setpoint and the magnitude of the control input. The maximum allowable control cost is set to 6. The transparent region indicates that the maximum allowable control cost or the network requirements are not feasible. For instance, the control cost would be minimized when there is no message loss and no delay, but this point is infeasible since these requirements cannot be met by the IEEE 802.15.4 protocol. The control cost generally increases as the message loss probability, the message delay, and the sampling period increase. Since short sampling periods increase the traffic load, the message loss probability and the message delay are then closer to their critical values, above which the system is unstable <cit.>. Hence, the area and shape of the feasible region significantly depend on the network performance. Determining the optimal parameters for minimum network cost while achieving feasibility is not trivial because of the complex interdependence of the control and communication systems.

Recently, Low-Power Wide-Area Networks (LPWAN) such as Long-Range WAN (LoRa) <cit.> and NarrowBand IoT (NB-IoT) <cit.> have been developed to enable IoT connections over long ranges (10–15 km). Even though some related works on WNCS are applicable to LPWAN-based control applications such as the Smart Grid <cit.>, Smart Transportation <cit.>, and Remote Healthcare <cit.>, this survey focuses on wireless control systems based on Low-Power Wireless Personal Area Networks (LoWPAN) with short-range radios and on their applications.

Some recent excellent surveys exist on wireless networks, particularly for industrial automation <cit.>. Specifically, <cit.> discusses the general requirements and representative protocols of Wireless Sensor Networks (WSNs) for industrial applications. <cit.> compares popular industrial WSN standards in terms of architecture and design. <cit.> mainly elaborates on real-time scheduling algorithms and protocols for WirelessHART networks, experimentation, and joint wireless-control design approaches for industrial automation. While <cit.> focused on WirelessHART networks and their control applications, this article provides a comprehensive survey of the design space of wireless networks for control systems and of the potential synergy and interaction between control and communication designs. Specifically, our survey touches on the importance of the interactions between recent advances in NCS and WSN research, as well as on different approaches to wireless network design and optimization for various WNCS applications.
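The qualitative trend discussed above for Fig. <ref> — control cost growing with the message loss probability — can be reproduced with a toy model. The following is a minimal sketch, not the system behind the figure: a scalar plant x_{k+1} = a x_k + u_k + w_k with state feedback on an estimate that is propagated open-loop whenever the sensor message is lost (i.i.d. losses with probability p); all parameter values are illustrative.

```python
# Toy sketch (illustrative, not the model behind the cited figure): quadratic
# control cost of a scalar plant x_{k+1} = a x_k + u_k + w_k under i.i.d.
# sensor message losses with probability p.
import numpy as np

def avg_cost(a=1.2, p=0.1, r=0.1, steps=200_000, seed=1):
    rng = np.random.default_rng(seed)
    x, x_hat, cost = 0.0, 0.0, 0.0
    for _ in range(steps):
        if rng.random() > p:      # sensor message received
            x_hat = x
        u = -a * x_hat            # deadbeat feedback on the estimate
        cost += x ** 2 + r * u ** 2
        x = a * x + u + rng.normal(0.0, 0.1)   # plant update with noise
        x_hat = a * x_hat + u     # open-loop prediction until the next message
    return cost / steps

for p in (0.0, 0.1, 0.3, 0.5):
    print(f"loss probability {p:.1f}: average cost {avg_cost(p=p):.4f}")
```

For this unstable toy plant (a = 1.2), the simulated cost grows with p and diverges as p approaches 1/a² ≈ 0.69, mirroring the critical loss probability above which stability is lost.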
The goal of this survey is to unveil and address the requirements and challenges associated with wireless network design for WNCS, and to present a review of recent advances in novel design approaches, optimizations, algorithms, and protocols for effectively developing WNCS. The section structure and relations are illustrated in Fig. <ref>. Section <ref> introduces some inspiring applications of WNCS in automotive electronics, avionics, building automation, and industrial automation. Section <ref> describes WNCS where multiple plants are remotely controlled over a wireless network. Section <ref> presents the critical interactive variables of communication and control systems, including sampling period, message delay, message dropout, and energy consumption. Section <ref> introduces basic wireless network standardization and the key network parameters at various protocol layers useful to tune the distribution of the critical interactive variables. Section <ref> then provides an overview of recent control design methods incorporating the interactive variables. Section <ref> presents various optimization techniques for wireless networks integrating the control systems. We classify the design approaches into two categories based on the degree of integration: interactive designs and joint designs. In the interactive design, the wireless network parameters are tuned to satisfy given requirements of the control system. In the joint design, the wireless network and control system parameters are jointly optimized considering the tradeoff between their performances. Section <ref> describes three experimental testbeds of WNCS. We conclude this article by highlighting promising research directions in Section <ref>.

§ MOTIVATING APPLICATIONS

This section explores some inspiring applications of WNCS.

§.§ Intra-Vehicle Wireless Network

In-vehicle wireless networks have recently been proposed with the goal of reducing the manufacturing and maintenance costs of the large amount of wiring harnesses within vehicles <cit.>. The wiring harnesses used for data transmission and power delivery within the current vehicle architecture may have up to 4 000 parts, weigh as much as 40 kg, and contain up to 4 km of wiring. Eliminating these wires would additionally have the potential to improve fuel efficiency, reduce greenhouse gas emissions, and spur innovation by providing an open architecture to accommodate new systems and applications. An intra-vehicular wireless network consists of a central control unit, a battery, electronic control units, wireless sensors, and wireless actuators. Wireless sensor nodes send their data to the corresponding electronic control unit while scavenging energy either from one of the electronic control units or from energy scavenging devices attached directly to them. Actuators receive their commands from the corresponding electronic control unit, and power from the electronic control units or from an energy scavenging device. The reason for incorporating energy scavenging into the envisioned architecture is to eliminate the lifetime limitation of fixed storage batteries.

The applications that can exploit a wireless architecture fall into one of three categories: powertrain, chassis, and body. Powertrain applications use automotive sensors in the engine, transmission, and onboard diagnostics for the control of vehicle energy use, driveability, and performance. Chassis applications control vehicle handling and safety in the steering, suspension, braking, and stability elements of the vehicle.
Body applications include sensors mainly used for vehicle occupant needs such as occupant safety, security, comfort, convenience, and information. The first intra-vehicle wireless network applications are the Tire Pressure Monitoring System (TPMS) <cit.> and the Intelligent Tire <cit.>. TPMS is based on the wireless transmission of tire pressure data from the in-tire sensors to the vehicle body. It is currently being integrated into all new cars in both the U.S.A. and Europe. The Intelligent Tire is based on the placement of wireless sensors inside the tire to transfer accelerometer data to coordination nodes in the body of the car, with the goal of improving the performance of active safety systems. Since accelerometer data are generated at a much higher rate than pressure data, and since batteries cannot be placed within the tire, the Intelligent Tire contains an ultra-low-power wireless communication system powered by energy scavenging technology, which is now being commercialized by Pirelli <cit.>.

§.§ Wireless Avionics Intra-Communication

Wireless Avionics Intra-Communications (WAIC) have a tremendous potential to improve an aircraft's performance through more cost-effective flight operations, reductions in overall weight and maintenance costs, and enhancement of safety <cit.>. Currently, the cable harness provides the connection between sensors and their corresponding control units to sample and process sensor information, and then among multiple control units over a backbone network for safety-critical flight control <cit.>. Due to the high demands on safety and efficiency, a modern aircraft relies on large wired sensor and actuator networks that consist of more than 5 000 devices. The wiring harness usually represents 2–5% of an aircraft's weight. For instance, the wiring harness of the Airbus A350-900 weighs 23 000 kg <cit.>.

The WAIC alliance considers wireless avionics sensors located at various positions both within and outside the aircraft. The sensors are used to monitor the health of the aircraft structure, e.g., smoke sensors and ice detectors, and of its critical systems, e.g., engine sensors and landing gear sensors. The sensor information is communicated to a central onboard entity. Potential WAIC applications are categorized into two broad classes according to their data rate requirements <cit.>. Low and high data rate applications have data rates below and above 10 kbit/s, respectively.

At the World Radio Conference 2015, the International Telecommunication Union voted to grant the frequency band 4.2–4.4 GHz to WAIC systems to allow the replacement of the heavy wiring used in aircraft <cit.>. The WAIC alliance is dedicating efforts to the performance analysis of the assigned frequency band and to the design of wireless networks for avionics control systems <cit.>. Space shuttles and the International Space Station have already been using commercially available wireless solutions such as the EWB MicroTAU and UltraWIS of Invocon <cit.>.

§.§ Building Automation

Wireless network based building automation provides significant savings in installation cost, allowing a large retrofit market as well as new constructions to be addressed. Building automation aims to achieve an optimal level of occupant comfort while minimizing energy usage <cit.>. These control systems are the integrative component connecting fans, pumps, heating/cooling equipment, dampers, and thermostats.
Modern building control systems require a wide variety of sensing capabilities in order to control temperature, pressure, humidity, and flow rates. The European Environment Agency <cit.> shows that the electricity and water consumption of buildings amount to about 30% and 43% of the total resource consumption, respectively. An On World survey <cit.> reports that 59% of 600 early adopters across five continents are interested in new technologies that will help them better manage their energy consumption, and that 81% are willing to pay for energy management equipment if they could save up to 30% on their energy bill for smart energy home applications.

An example of energy management systems using WSNs is the intelligent building ventilation control described in <cit.>. An underfloor air distribution indoor climate regulation process is set up with the injection of a fresh airflow from the floor and an exhaust located at the ceiling level. The considered system is composed of ventilated rooms, fans, plenums, and wireless sensors. A well-designed underfloor air distribution system can reduce the energy consumption of a building while improving thermal comfort, ventilation efficiency, and indoor air quality by using low-cost WSNs.

§.§ Industrial Automation

Wireless sensor and actuator networks (WSANs) are an effective smart infrastructure for process control and factory automation <cit.>. Emerson Process Management <cit.> estimates that WSNs enable cost savings of up to 90% compared to the deployment cost of wired field devices in the industrial automation domain. In industrial process control, the product is processed in a continuous manner (e.g., oil, gas, chemicals). In factory automation or discrete manufacturing, instead, the products are processed in discrete steps on individual elements (e.g., cars, drugs, food). Industrial wireless sensors typically report the state of a fuse, heating, ventilation, or vibration levels on pumps. Since the discrete products of factory automation require sophisticated operations of robots and belt conveyors at high speed, the sampling rates and real-time requirements are often stricter than those of process automation. Furthermore, many industrial automation applications might in the future require battery-operated networks of hundreds of sensors and actuators communicating with access points.

According to TechNavio <cit.>, WSN solutions for industrial control applications are one of the major emerging industrial trends. Many wireless networking standards have been proposed for industrial processes, e.g., WirelessHART by ABB, Emerson, and Siemens, and ISA 100.11a by Honeywell <cit.>. Some industrial wireless solutions are also commercially available and deployed, such as Tropos of ABB and Smart Wireless of Emerson.

§ WIRELESS NETWORKED CONTROL SYSTEMS

Fig. <ref> depicts the generalized closed-loop diagram of WNCS, where multiple plants are remotely controlled over a wireless network <cit.>. The wireless network includes sensors and actuators attached to the plants, controllers, and relay nodes. A plant is a continuous-time physical system to be controlled. The inputs and outputs of the plant are continuous-time signals. The outputs of plant i are sampled at periodic or aperiodic intervals by the wireless sensors. Each packet associated with the state of the plant is transmitted to the controller over the wireless network. When the controller receives the measurements, it computes the control command. The control commands are then sent to the actuator attached to the plant.
Hence, the closed-loop system contains both a continuous-time and a sampled-data component. Since both the sensor–controller and the controller–actuator channels use a wireless network, the general WNCS of Fig. <ref> are also called two-channel feedback NCS <cit.>. The system scenario is quite general, as it applies to any interconnection between a plant and a controller.

§.§ Control Systems

The objective of the feedback control system is to ensure that the closed-loop system has desirable dynamic and steady-state response characteristics, and that it is able to efficiently attenuate disturbances and handle network delays and losses. Generally, the closed-loop system should satisfy various design objectives: stability, fast and smooth responses to setpoint changes, elimination of steady-state errors, avoidance of excessive control actions, and a satisfactory degree of robustness to process variations and model uncertainty <cit.>. In particular, the stability of a control system is an extremely important requirement. Most NCS design methods consider subsets of these requirements to synthesize the estimator and the controller. In this subsection, we briefly introduce some fundamental aspects of modeling, stability, control cost, and controller and estimator design for NCSs.

§.§.§ NCS Modeling

NCSs can be modeled using three main approaches, namely the discrete-time approach, the sampled-data approach, and the continuous-time approach, depending on the models of the controller and the plant <cit.>. The discrete-time approach considers discrete-time controllers and a discrete-time plant model. The discrete-time representation often leads to an uncertain discrete-time system in which the uncertainties appear in matrix exponential form due to the discretization. Typically, this approach is applied to NCSs with linear plants and controllers, since in that case exact discrete-time models can be derived.

Secondly, the sampled-data approach considers discrete-time controllers but with a continuous-time model that describes the sampled-data NCS dynamics without exploiting any form of discretization <cit.>. Delay-differential equations can be used to model the sampled-data dynamics. This approach is able to deal simultaneously with time-varying delays and time-varying sampling intervals.

Finally, the continuous-time approach designs a continuous-time controller to stabilize a continuous-time plant model. The continuous-time controller then needs to be approximated by a representation suitable for computer implementation <cit.>, whereas typical WNCS consider discrete-time controllers. We will discuss the analysis and design of WNCS dealing with the network effects in more detail in Section <ref>.

§.§.§ Stability

Stability is a basic requirement for controller design. We briefly describe two fundamental notions of stability, namely input-output stability and internal stability <cit.>. While input-output stability is the ability of the system to produce a bounded output for any bounded input, internal stability is the system's ability to return to equilibrium after a perturbation. For linear systems, these two notions are closely related, but for nonlinear systems they are not the same. Input-output stability concerns the forced response of the system to a bounded input. A system is defined to be Bounded-Input-Bounded-Output (BIBO) stable if every bounded input to the system results in a bounded output. If the output is unbounded for some bounded input, the system is said to be unstable.
Internal stability is based on the magnitude of the system response in steady state. If the steady-state response is unbounded, the system is said to be unstable. A system is said to be asymptotically stable if its response to any initial conditions decays to zero asymptotically in the steady state. A system is defined to be exponentially stable if the system response in addition decays exponentially towards zero. Faster convergence often means better performance. In fact, many NCS studies analyze exponential stability conditions <cit.>. Furthermore, if the response due to the initial conditions remains bounded but does not decay to zero, the system is said to be marginally stable. Hence, a system cannot be both asymptotically stable and marginally stable. If a linear system is asymptotically stable, then it is BIBO stable. However, BIBO stability does not generally imply internal stability. Internal stability is stronger in some sense, because BIBO stability can hide unstable internal behaviors, which do not appear in the output.

§.§.§ Control Cost

Besides stability guarantees, typically a certain closed-loop control performance is desired. The closed-loop performance of a control system can be quantified by a control cost expressed as a function of the plant state and the control inputs <cit.>. A general regulation goal is to keep the state error from the setpoint close to zero while minimizing the control actions. Hence, the control cost often consists of two terms, namely the deviations of the plant state from its desired setpoint and the magnitude of the control input. A common controller design approach is via a Linear Quadratic control formulation for linear systems and a quadratic cost function <cit.>. The quadratic control cost is defined as a sum of quadratic functions of the state deviation and the control effort. In such a formulation, the optimal control policy that minimizes the cost function can be explicitly computed from a Riccati equation.

§.§.§ Controller Design

The controller should ensure that the closed-loop system has desirable dynamic and steady-state response characteristics. For NCSs, the network delay and loss may degrade the control performance and even destabilize the system. Several surveys present controller design for NCSs <cit.>. For a historical review, see the survey <cit.>. We briefly describe three representative controllers, namely the Proportional-Integral-Derivative (PID) controller <cit.>, Linear Quadratic Regulator (LQR) control <cit.>, and Model Predictive Control (MPC) <cit.>.

PID control is almost a century old and has remained the most widely used controller in process control until today <cit.>. One of the main reasons for this controller being so widely used is that it can be designed without precise knowledge of the plant model. A PID controller calculates an error value as the difference between a desired setpoint and a measured plant state. The control signal is a sum of three terms: the P-term (which is proportional to the error), the I-term (which is proportional to the integral of the error), and the D-term (which is proportional to the derivative of the error). The controller parameters are the proportional gain, the integral time, and the derivative time. The integral, proportional, and derivative parts can be interpreted as control actions based on the past, the present, and the future of the plant state. Several parameter tuning methods for PID controllers exist <cit.>.
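As a concrete illustration of the three terms, the following is a minimal sketch of a discrete-time PID controller in closed loop with a toy first-order plant; the gains and the plant are illustrative choices, not examples taken from the cited works.

```python
# Minimal sketch of a discrete-time PID controller; the gains and the toy
# first-order plant x' = -x + u (Euler-discretized) are illustrative only.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0       # accumulated past error (I-term state)
        self.prev_error = 0.0

    def control(self, setpoint, measurement):
        error = setpoint - measurement                    # present (P-term)
        self.integral += error * self.dt                  # past (I-term)
        derivative = (error - self.prev_error) / self.dt  # trend (D-term)
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

dt = 0.01
pid, x = PID(kp=2.0, ki=1.0, kd=0.1, dt=dt), 0.0
for _ in range(500):                    # simulate 5 seconds of closed loop
    u = pid.control(setpoint=1.0, measurement=x)
    x += dt * (-x + u)                  # plant update
print(f"plant state after 5 s: {x:.3f} (setpoint 1.0)")
```

Tuning kp, ki, and kd trades off response speed, overshoot, and steady-state error, which is exactly what the tuning methods referenced above aim to systematize.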
Historically, PID tuning methods have required a trial-and-error process in order to achieve the desired stability and control performance.

The linear quadratic problem is one of the most fundamental optimal control problems, where the objective is to minimize a quadratic cost function subject to plant dynamics described by a set of linear differential equations <cit.>. The quadratic cost is a sum of the plant state cost, the final state cost, and the control input cost. The optimal controller is a linear feedback controller. The LQR algorithm is basically an automated way to find this state-feedback controller. Furthermore, LQR is an important subproblem of the general Linear Quadratic Gaussian (LQG) problem. The LQG problem deals with uncertain linear systems disturbed by additive Gaussian noise. While the LQR problem assumes no noise and full state observation, the LQG problem considers input and measurement noise and partial state observation.

Finally, MPC solves an optimal linear quadratic control problem over a receding horizon <cit.>. Hence, the optimization problem is similar to the controller design problem of LQR, but it is solved over a moving horizon in order to handle model uncertainties. In contrast to non-predictive controllers, such as a PID or an LQR controller, which compute the current control action as a function of the current plant state using information about the plant from the past, predictive controllers compute the control based on the system's predicted future behavior <cit.>. MPC tries to optimize the system behavior in a receding-horizon fashion. It takes control commands and sensing measurements to estimate the current and future state of the plant based on the control system model. The control command is optimized to achieve the desired plant state based on a quadratic cost. In practice, there are often hard constraints imposed on the state and the control input. Compared to PID and LQR control, the MPC framework efficiently handles such constraints. Moreover, MPC can handle missing measurements or control commands <cit.>, which can appear in an NCS setting.

§.§.§ Estimator Design

Due to network uncertainties, plant state estimation is a crucial and significant research field of NCSs <cit.>. An estimator is used to predict the plant state using the partially received plant measurements. Moreover, the estimator typically compensates for measurement noise, network delays, and packet losses. This predicted state is sometimes used in the calculation of the control command. The Kalman filter is one of the most popular approaches to obtain the estimated plant states in NCSs <cit.>. Modified Kalman filters have been proposed to deal with different models of the network delay and loss <cit.>. The state estimation problem is often formulated by probabilistically modeling the uncertainties occurring between the sensor and the controller <cit.>. However, a non-probabilistic approach based on time-stamping the measurement packets is proposed in <cit.>.

In LQG control, a Kalman filter is used to estimate the state from the plant output. The optimal state estimator and the optimal state feedback controller are combined in the LQG problem. The controller is the linear feedback controller of LQR. The optimal LQG estimator and controller can be designed separately if the communication protocol supports the acknowledgement of the packet transmissions on both the sensor–controller and the controller–actuator channels <cit.>.
In sharp contrast, the separation principle between the estimator and the controller does not hold if the acknowledgement is not supported <cit.>. Hence, the underlying network operation is critical in the design of the overall estimator and controller.

§.§ Wireless Networks

For the vast majority of control applications, most of the traffic over the wireless network consists of real-time sensor data flowing from the sensor nodes towards one or more controllers. The controller either sits on the backbone or is reachable via one or more backbone access points. Therefore, data flows between sensor nodes and controllers are not necessarily symmetric in WNCS. In particular, asymmetrical link costs and unidirectional routes are common for most of the sensor traffic. Furthermore, multiple sensors attached to a single plant may independently transmit their measurements to the controller <cit.>. In some other process automation environments, multicast may be used to deliver data to multiple nodes that may be functionally similar, such as the delivery of alerts to multiple nodes in an automation control room.

Wireless sensors and actuators in control environments can be powered by batteries, energy scavenging, or power cables. Battery storage provides a fixed amount of energy and requires replacement once the energy is consumed. Therefore, efficient usage of energy is vital for achieving a high network lifetime. Energy harvesting techniques, on the other hand, may rely on natural sources, such as solar, indoor lighting, vibrational, and thermal energy <cit.>, inductive and magnetic resonant coupling <cit.>, and radio frequency <cit.>. Efficient usage of energy harvesting may attain an infinite lifetime for the sensor and actuator nodes. In most situations, the actuators need to be powered separately because a significant amount of energy is required for the actuation commands (e.g., opening a valve).

§ CRITICAL INTERACTIVE SYSTEM VARIABLES

The critical system variables creating interactions between the control and communication systems of WNCS are the sampling period, the message delay, and the message dropout. Fig. <ref> illustrates the timing diagram of the closed-loop control over a wireless network with sampling period, message delay, and message dropouts. We distinguish messages of the control application layer from packets of the communication layer. The control system generates messages such as the sensor samples of the sensor–controller channel or the control commands of the controller–actuator channel. The control system generally determines the sampling period. The communication protocols then convert the message to the packet format and transmit the packet to the destination. Since the wireless channel is lossy, the transmitter may perform multiple packet retransmissions associated with one message, depending on the communication protocol. If all the packet transmissions of the message fail due to a bursty channel, then the message is considered to be lost.

In Fig. <ref>, the message delay is the time delay between when the message is generated by the control system at a sensor or a controller and when it is received at the destination. Hence, the message delay of a successfully received message depends on the number of packet retransmissions. Furthermore, since the routing path and network congestion affect the message delay, messages may arrive out of order, as shown in Fig. <ref>. The design of the wireless network at multiple protocol layers determines the probability distribution of the message delay and the message dropout.
These variables, together with the sampling period, influence the stability of the closed-loop NCS and the energy consumption of the network. Fig. <ref> presents the dependencies between the critical system variables. Since WNCS design requires an understanding of the interplay between communication and control, we discuss the effect of these system variables on both control and communication system performance.

§.§ Sampling Period

§.§.§ Control System Aspect

Continuous-time signals of the plant need to be sampled before they are transmitted through a wireless network. It is important to note that the choice of the sampling period should be related to the desired properties of the closed-loop system, such as the response to reference signals, the influence of disturbances, the network traffic, and the computational load <cit.>. There are two methods to sample continuous-time signals in WNCS: time-triggered and event-triggered sampling <cit.>.

In time-triggered sampling, the next sampling instant occurs after the elapse of a fixed time interval, regardless of the plant state. Periodic sampling is widely used in digital control systems due to the simple analysis and design of such systems. Based on experience and simulations, a common rule for the selection of the sampling period is to ensure that ωh is in the range [0.1, 0.6], where ω is the desired natural frequency of the closed-loop system and h is the sampling period <cit.>. For example, a desired natural frequency of ω = 1 rad/s gives a sampling period between 0.1 s and 0.6 s. This typically implies taking up to 20 samples per period of the dominating mode of the closed-loop system.

In a traditional digital control system based on point-to-point wired connections, the smaller the sampling period, the better the performance achieved by the control system <cit.>. However, in wireless networks, a decrease in the sampling period increases the network traffic, which in turn increases the message loss probability and the message delay. Therefore, a decrease in the sampling period eventually degrades the control performance, as illustrated in Fig. <ref>.

Recently, event-based control schemes such as event- and self-triggered control have been proposed, where sensing and actuating are performed when the system needs attention <cit.>. Hence, the traffic pattern of event- and self-triggered control systems is asynchronous rather than periodic. In event-triggered control, the execution of control tasks is determined by the occurrence of an event rather than by the elapse of a fixed time period as in time-triggered control. Events are triggered only when stability or a pre-specified control performance is about to be lost <cit.>. Event-triggered control can significantly reduce the traffic load of the network with no or minor control performance degradation, since traffic is generated only if the signal changes by a specified amount <cit.>. However, since most trigger conditions depend on the instantaneous state, the plant state needs to be continuously monitored <cit.>. Self-triggered control has been proposed to avoid such continuous monitoring <cit.>. In self-triggered control, an estimate of the next event time instant is computed. However, the online detection of plant disturbances and the corresponding control actions cannot be generated with self-triggered control. A combination of event- and self-triggered control is therefore often desirable <cit.>.

§.§.§ Communication System Aspect

The choice between time-triggered and event-triggered sampling in the control system determines the pattern of message generation in the wireless network.
Time-triggered sampling results in regular periodic message generation at a predetermined rate. If a random medium access mechanism is used, the increase in network load results in worse performance in the other critical interactive system variables, i.e., message delay, message dropout, and energy consumption <cit.>. The gain in control system performance from higher sampling rates, therefore, does not materialize, due to these network effects. On the other hand, the predetermined nature of packet transmissions in time-triggered sampling allows explicit scheduling of sensor node transmissions beforehand, reducing the message loss and delay caused by random medium access <cit.>. A scheduled access mechanism can predetermine the transmission times of all the components such that additional nodes have minimal effect on the transmissions of existing nodes <cit.>. When the transmissions of periodically transmitting nodes are distributed uniformly over time, rather than being allocated immediately as they arrive, additional nodes may be allocated without causing any jitter in the existing periodic allocation.

The optimal choice of medium access control mechanism is not trivial for event-triggered control <cit.>. The overall performance of event-triggered control systems depends significantly on the plant dynamics and the number of control loops. The random access mechanism is a good alternative if a large number of slow dynamical plants share the wireless network. In this case, the scheduled access mechanism may result in significant delay between the triggering of an event and a transmission in its assigned slot, due to the large number of control loops; yet most time slots would go unutilized, since the traffic load is low for slow plants. On the other hand, the scheduled access mechanism performs well when a small number of fast plants is controlled by the event-triggered control algorithm. Contention-based random access generally degrades the reliability and delay performance under the high traffic load of fast plants. When there are packet losses in the random access scheme, event-triggered control further increases the traffic load, which may eventually incur stability problems <cit.>.

The possible event-time prediction of self-triggered control alleviates both the high network load of time-triggered sampling and the random message generation of event-triggered sampling by predicting when the plant state will cross the triggering threshold <cit.>. The prediction allows the explicit scheduling of sensor node transmissions, eliminating the high message delays and losses of random medium access. Most existing works on event-triggered and self-triggered control assume that message dropouts and message disorders do not occur. This assumption is not practical when the packets of messages are transmitted through a wireless network. Dealing with message dropouts and message disorders in these control schemes is challenging for both the wireless network and the control system.

§.§ Message Delay
§.§.§ Control System Aspect
There are mainly two kinds of message delays in NCSs: the sensor–controller delay and the controller–actuator delay, as illustrated in Fig. <ref>. The sensor–controller delay is the time interval from the instant when the physical plant is sampled to the instant when the controller receives the sampled message; the controller–actuator delay is the time from the generation of the control message at the controller until its reception at the actuator.
The increase in both delays prevents the timely delivery of the control feedback, which degrades system performance, as exemplified in Fig. <ref>. In control-theoretic terms, these delays cause phase shifts that limit the control bandwidth and affect closed-loop stability <cit.>. Since delays are especially pernicious for closed-loop systems, some form of modeling and prediction is essential to overcome their effects. Techniques proposed to overcome sensor–controller delays use predictive filters, including the Kalman filter <cit.>. In practice, the message delay can be estimated from time-stamped data if the receiving node is synchronized through the wireless network <cit.>. The control algorithm compensates for the measured or predicted delay unless it is too large <cit.>. Such compensation is generally impossible for controller–actuator delays. Hence, controller–actuator delays are more critical than sensor–controller delays <cit.>. The packet delay variation is another important metric, since it significantly affects the control performance and can cause instability even when the mean delay is small. In particular, a heavy tail of the delay distribution significantly degrades the stability of the closed-loop system <cit.>. The amount of degradation depends on the dynamics of the process and the distribution of the delay variations. One way to eliminate delay variations is to use a buffer, trading delay for its variation.

§.§.§ Communication System Aspect
Message delay in a multihop wireless network consists of transmission delay, access delay, and queueing delay at each hop in the path from the source to the destination.

Transmission delay is the time required for the transmission of the packet. It depends on the amount of data to be transmitted to the destination and on the transmission rate, which in turn depends on the transmit power of the node itself and of its simultaneously active neighboring nodes. As the transmit power of a node increases, its own transmission rate increases, decreasing its own transmission delay, while causing more interference to simultaneously transmitting nodes and thus increasing their delay. The optimization of transmission power and rate should take this tradeoff into account <cit.>.

Medium access delay is the time required to start the actual transmission of the packet. Access delay depends on the choice of medium access control (MAC) protocol. If a contention-based random access mechanism is used, this delay depends on the network load, the encoding/decoding mechanism used in the transmitter and receiver, and the random access control protocol. As the network load increases, the access delay increases due to more frequent busy channel assessments and failed transmissions. The receiver decoding capability determines the number of simultaneously active neighboring transmitters. The decoding technique may be based on interference avoidance, in which only one packet can be received at a time <cit.>; self-interference cancellation, where the node can transmit another packet while receiving <cit.>; or interference cancellation, where the node may receive multiple packets simultaneously and eliminate interference <cit.>. Similarly, a transmitter may have the capability to transmit multiple packets simultaneously <cit.>. The execution of the random access algorithm together with its parameters also affects the message delay. On the other hand, if schedule-based access is used, the access delay in general increases as the network load increases.
However, this effect may be minimized by designing efficient scheduling algorithms that distribute transmissions uniformly over time, exploiting the periodic transmissions of time-triggered control <cit.>. As with random access, more advanced encoding/decoding capabilities at the nodes may further decrease this access delay. Moreover, packet losses over the channel may require retransmissions, repeating the medium access and transmission delays over time. This further increases the message delay, as illustrated in Fig. <ref>.

Queueing delay depends on the message generation rate at the nodes and on the amount of data they relay along the multihop routing path. The message generation and forwarding rates at the nodes should be kept at a level that does not allow packets to build up in the queue. Moreover, scheduling algorithms should account for multihop forwarding in order to minimize the end-to-end delay from the source to the destination <cit.>. The destination may observe disordered messages, since the packets associated with different messages may traverse several hops over multiple routing paths or experience network congestion <cit.>.

§.§ Message Dropout
§.§.§ Control System Aspect
Generally, there are two main reasons for message dropouts: message discard due to the control algorithm, and message loss due to the wireless network itself. The logical Zero-Order Hold (ZOH) mechanism is one of the most popular and simplest approaches to discard disordered messages <cit.>. In this mechanism, the latest message is kept and older messages are discarded based on the time stamps of the messages. However, alternatives have also been proposed that utilize the disordered messages in a filter bank <cit.>. A message is considered lost if all packet transmissions associated with the message have eventually failed.

The effect of message dropouts is more critical than that of message delay, since a dropout increases the updating interval by a multiple of the sampling period. There are mainly two types of dropouts: sensor–controller message dropouts and controller–actuator message dropouts. The controller estimates the plant state to compensate for possible message dropouts of the sensor–controller channel. Recall that Kalman filtering is one of the most popular approaches to estimate the plant state, and it works well if there is no significant message loss <cit.>. Since the control command directly affects the plant, controller–actuator dropouts are more critical than sensor–controller dropouts <cit.>. Many practical NCSs have several sensor–controller channels whereas the controllers are collocated with the actuators, e.g., heating, ventilation and air-conditioning control systems <cit.>. The NCS literature often models the message dropout as a stochastic variable under different assumptions on the maximum number of consecutive message dropouts. In particular, significant work has been devoted to deriving upper bounds on the updating interval for which stability can be guaranteed <cit.>. These upper bounds can be used as the update deadline over the network, as we discuss in more detail in Section <ref>. Bursty message dropouts are particularly critical for control systems, since they directly affect the upper bounds on the updating interval.

§.§.§ Communication System Aspect
Data packets may be lost during their transmission due to the susceptibility of the wireless channel to blockage, multipath, Doppler shift, and interference <cit.>.
Obstructions between transmitter and receiver, and their variation over time, cause random variations in the received signal, called shadow fading. The probability distribution of the shadow fading depends on the number, size, and material of the obstructions in the environment. Multipath fading, mainly caused by the multipath components of the transmitted signal being reflected, diffracted, or scattered by surrounding objects, occurs over shorter time periods or distances than shadow fading. The multipath components arriving at the receiver cause constructive and destructive interference, changing rapidly over distance. Doppler shift due to relative motion between the transmitter and the receiver may cause the signal to decorrelate over time or may impose a lower bound on the channel error rate. Furthermore, unintentional interference from the simultaneous transmissions of neighboring nodes and intentional interference in the form of cyber-attacks can also disturb the successful reception of packets.

§.§ Network Energy Consumption
A truly wireless solution for WNCS requires removing power cables in addition to data cables to provide full flexibility of installation and maintenance. Therefore, the nodes need to rely on either battery storage or energy harvesting techniques. Limiting the energy consumption in the wireless network prolongs the lifetime of the nodes. If enough energy can be scavenged from natural sources, inductive or magnetic resonant coupling, or radio frequency, then infinite lifetime may be achieved <cit.>.

Decreasing the sampling period, message delay, and message dropout improves the performance of the control system, but at the cost of higher energy consumption in the communication system <cit.>. The higher the sampling rate, the greater the number of packets to be transmitted over the channel, which increases the energy consumption of the nodes. Moreover, decreasing the message delay requires increasing the transmission rate or the data encoding/decoding capability at the transceivers. This again comes at the cost of increased energy consumption <cit.>. Finally, decreasing the message dropout rate requires either increasing the transmit power to combat fading and interference, or increasing the data encoding/decoding capabilities. This again translates into higher energy consumption.

§ WIRELESS NETWORK
§.§ Standardization
The most frequently adopted communication standards for WNCS are IEEE 802.15.4 and IEEE 802.11, with some enhancements. In particular, WirelessHART, ISA-100.11a, and IEEE 802.15.4e are all based on IEEE 802.15.4. Furthermore, recent IETF efforts consider Internet Protocol version 6 (IPv6) over low-power and lossy networks, such as 6LoWPAN, the Routing Protocol for Low-Power and Lossy Networks (RPL), and 6TiSCH, which are all compatible with IEEE 802.15.4 <cit.>. IEEE 802.15.4 was originally developed for low-rate, low-power, and low-cost Personal Area Networks (PANs) without any delay or reliability guarantees. Standards such as WirelessHART, ISA-100.11a, and IEEE 802.15.4e are built on top of the physical layer of IEEE 802.15.4, with additional Time Division Multiple Access (TDMA), frequency hopping, and multiple routing path features to provide delay guarantees and reliable packet transmission while further lowering energy consumption. In this subsection, we first introduce IEEE 802.15.4 and then discuss WirelessHART, ISA-100.11a, IEEE 802.15.4e, and the higher-layer IETF activities such as 6LoWPAN, RPL, and 6TiSCH.
On the other hand, although the key intentions of the IEEE 802.11 family of Wireless Local Area Network (WLAN) standards are to provide high throughput and a continuous network connection, several extensions have been proposed to support QoS for wireless industrial communications <cit.>. In particular, the IEEE 802.11e amendment introduces significant enhancements to support soft real-time applications. In this subsection, we describe the fundamental operations of basic IEEE 802.11 and IEEE 802.11e. The standards are summarized in Table <ref>.

§.§.§ IEEE 802.15.4
The IEEE 802.15.4 standard defines the physical and MAC layers of the protocol stack <cit.>. A PAN consists of a PAN coordinator, which is responsible for managing the network, and many associated nodes. The standard supports both the star topology, in which all the associated nodes communicate directly with the PAN coordinator, and the peer-to-peer topology, in which the nodes can communicate with any neighbouring node while still being managed by the PAN coordinator. The physical layer adopts direct sequence spread spectrum, which spreads the transmitted signal over a large bandwidth to enable greater resistance to interference. A single channel between 868 and 868.6 MHz, 10 channels between 902.0 and 928.0 MHz, and 16 channels between 2.4 and 2.4835 GHz are used. The transmission data rate is 250 kbps in the 2.4 GHz band, 40 kbps in the 915 MHz band, and 20 kbps in the 868 MHz band.

The standard defines two channel access modalities: the beacon enabled modality, which uses slotted CSMA/CA and the optional Guaranteed Time Slot (GTS) allocation mechanism, and a simpler unslotted CSMA/CA without beacons. The communication is organized in temporal windows denoted superframes. Fig. <ref> shows the superframe structure of the beacon enabled mode. In the following, we focus on the beacon enabled modality. The network coordinator periodically sends beacon frames every beacon interval T_BI to identify its PAN and to synchronize the nodes that communicate with it. The coordinator and nodes can communicate during the active period, called the superframe duration T_SD, and enter a low-power mode during the inactive period. The structure of the superframe is defined by two parameters, the beacon order (BO) and the superframe order (SO), which determine the length of the superframe and of its active period as

T_BI = aBaseSuperframeDuration × 2^BO ,
T_SD = aBaseSuperframeDuration × 2^SO ,

respectively, where 0 ≤ SO ≤ BO ≤ 14 and aBaseSuperframeDuration is the number of symbols forming a superframe when SO is equal to 0. For example, with the default aBaseSuperframeDuration of 960 symbols and the 16 µs symbol time of the 2.4 GHz band, BO = 6 yields a beacon interval of about 0.98 s. In addition, the superframe is divided into 16 equally sized superframe slots of length aBaseSlotDuration. Each active period can be further divided into a Contention Access Period (CAP) and an optional Contention Free Period (CFP) composed of GTSs. A slotted CSMA/CA mechanism is used to access the channel for non-time-critical data frames and GTS requests during the CAP. In the CFP, dedicated bandwidth is used for time-critical data frames. Fig. <ref> illustrates the data transfer mechanism of the beacon enabled mode for the CAP and CFP. In the following, we describe the data transmission mechanism for both the CAP and the CFP.

CSMA/CA mechanism of the CAP: CSMA/CA is used during the CAP in beacon enabled mode and at all times in non-beacon enabled mode. In the CAP, the nodes access the network by using slotted CSMA/CA as described in Fig. <ref>.
The major difference between the CSMA/CA variants of the two channel access modes is that the backoff timer starts at the beginning of the next backoff slot in beacon enabled mode, and immediately in non-beacon enabled mode. Upon the request to transmit a packet, the following steps of the CSMA/CA algorithm are performed:

1) The channel access variables are initialized. The contention window size, denoted by CW, is initialized to 2 for slotted CSMA/CA. The number of backoff stages, denoted by NB, and the backoff exponent, denoted by BE, are set to 0 and macMinBE, respectively.

2) A backoff time is chosen randomly from the interval [0, 2^BE − 1]. The node waits for the backoff time in units of backoff period slots.

3) When the backoff timer expires, the clear channel assessment is performed. a) If the channel is free in non-beacon enabled mode, the packet is transmitted. b) If the channel is free in beacon enabled mode, CW is decremented by one. If CW = 0, the packet is transmitted; otherwise, a second channel assessment is performed. c) If the channel is busy, the variables are updated as NB = NB + 1, BE = min(BE + 1, macMaxBE), and CW = 2. The algorithm continues with step 2 if NB ≤ macMaxCSMABackoffs; otherwise, the packet is discarded.

GTS allocation of the CFP: The coordinator is responsible for the GTS allocation and determines the length of the CFP in a superframe. To request the allocation of a new GTS, the node sends a GTS request command to the coordinator. The coordinator confirms its receipt by sending an ACK frame within the CAP. Upon receiving a GTS allocation request, the coordinator checks whether there are sufficient resources and, if possible, allocates the requested GTS. We recall that Fig. <ref> illustrates the GTS allocation mechanism. The CFP length depends on the GTS requests and the currently available capacity in the superframe. If there is sufficient bandwidth in the next superframe, the coordinator determines a node list for GTS allocation based on a first-come-first-served policy. Then, the coordinator transmits the beacon including the GTS descriptor to announce the node list of the GTS allocation. Note that on receipt of the ACK to the GTS request command, the node continues to track beacons and waits for at most aGTSDescPersistenceTime superframes. A node uses the dedicated bandwidth to transmit packets within the CFP.

§.§.§ WirelessHART
WirelessHART was released in September 2007 as the first wireless communication standard for process control applications <cit.>. The standard adopts the IEEE 802.15.4 physical layer on channels 11–25 at 2.4 GHz. TDMA is used to allow the nodes to put their radios to sleep when they are not scheduled to transmit or receive a packet, for better energy efficiency, and to eliminate collisions, for better reliability. The TDMA slot size is fixed at 10 ms. To increase the robustness to interference in harsh industrial environments, channel hopping and channel blacklisting mechanisms are incorporated on top of the direct sequence spread spectrum technique adopted from the IEEE 802.15.4 standard. Frequency hopping is used to alternate the transmission channel on a packet level, i.e., the channel does not change during a packet transmission. The frequency hopping pattern is not explicitly defined in the standard but needs to be determined by the network manager and distributed to the nodes. Channel blacklisting may also be used to eliminate channels exhibiting high interference levels.
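To make the backoff logic of steps 1)–3) concrete, the following is a minimal Python sketch of slotted CSMA/CA. The attribute names and default values mirror the standard, but the channel model (a simple idle probability) and the omission of slot-boundary timing are simplifying assumptions.

```python
import random

# Default attribute values as in IEEE 802.15.4; the channel model
# below is an invented stand-in for clear channel assessment (CCA).
MAC_MIN_BE = 3
MAC_MAX_BE = 5
MAC_MAX_CSMA_BACKOFFS = 4

def slotted_csma_ca(channel_is_idle) -> bool:
    """Sketch of slotted CSMA/CA (beacon enabled mode). Returns True
    if the packet may be transmitted, False on channel access failure."""
    nb, be = 0, MAC_MIN_BE                  # step 1: init NB and BE
    while True:
        cw = 2                              # step 1: contention window
        _wait = random.randint(0, 2 ** be - 1)  # step 2: random backoff
        while cw > 0 and channel_is_idle():
            cw -= 1                         # step 3b: idle CCA, count down CW
        if cw == 0:
            return True                     # two idle CCAs: transmit
        nb, be = nb + 1, min(be + 1, MAC_MAX_BE)  # step 3c: busy channel
        if nb > MAC_MAX_CSMA_BACKOFFS:
            return False                    # too many backoffs: discard

# Example with a channel that is idle 70% of the time.
print(slotted_csma_ca(lambda: random.random() < 0.7))
```

Note that every busy assessment resets the contention window to 2, so a transmission requires two consecutive clear assessments, as in step 3b.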
In WirelessHART, the network manager performs this blacklisting based on the quality of reception observed on the different channels in the network. WirelessHART defines two primary routing approaches for multihop networks: source routing and graph routing. Source routing provides a single route for each flow, while graph routing provides multiple redundant routes <cit.>. Since source routing establishes only a fixed single path between source and destination, any link or node failure disrupts the end-to-end communication. For this reason, source routing is mostly used for network diagnostics, to test the end-to-end connection. The multiple redundant routes of graph routing provide a significant improvement over source routing in terms of routing reliability. The routing paths are determined by the network manager based on periodic reports received from the nodes, which include the historical and instantaneous quality of the wireless links.

§.§.§ ISA-100.11a
The ISA-100.11a standard was released in September 2009 with many features similar to WirelessHART, but providing more flexibility and adaptivity <cit.>. Like WirelessHART, the standard adopts the IEEE 802.15.4 physical layer on channels 11–25 at 2.4 GHz, with the optional additional usage of channel 26. TDMA is again used for better energy efficiency and reliability, but with a slot size configurable on a per-superframe basis. ISA-100.11a adopts channel hopping and blacklisting mechanisms to improve communication robustness, similar to WirelessHART but with more flexibility. The standard adopts three channel hopping mechanisms: slotted hopping, slow hopping, and hybrid hopping. In slotted hopping, the channel is varied in each slot, as in WirelessHART. In slow hopping, the node stays on the same channel for a configurable number of consecutive time slots. Slow hopping facilitates the communication of nodes with imprecise synchronization, the join process of new nodes, and the transmission of event-driven packets. Transmissions in a slow hopping period are performed using CSMA/CA. This mechanism decreases the delay of event-based packets while increasing energy consumption due to unscheduled transmission and reception times. In hybrid hopping, slotted hopping is combined with slow hopping by accommodating slotted hopping for periodic messages and slow hopping for less predictable new or event-driven messages. There are five predetermined channel hopping patterns in this standard, in contrast to WirelessHART, which does not explicitly define hopping patterns.

§.§.§ IEEE 802.15.4e
This standard was released in 2012 with the goal of introducing new access modes to address the delay and reliability constraints of industrial applications <cit.>. IEEE 802.15.4e defines three major MAC modes, namely, Time Slotted Channel Hopping (TSCH), Deterministic and Synchronous Multichannel Extension (DSME), and Low Latency Deterministic Network (LLDN).

Time Slotted Channel Hopping: TSCH is a medium access protocol based on the IEEE 802.15.4 standard for industrial automation and process control <cit.>. The main idea of TSCH is to combine the benefits of time slotted access with multichannel and channel hopping capabilities. Time slotted access increases the network throughput by scheduling collision-free links to meet the traffic demands of all nodes. Multichannel operation allows more nodes to exchange their packets at the same time by using different channel offsets.
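The role of channel offsets can be made concrete with the slot-to-channel mapping commonly described for TSCH-style hopping: the physical frequency is derived from the absolute slot number (ASN) and the link's channel offset. The hopping sequence below, over channels 11–25, is illustrative rather than normative.

```python
# Illustrative slot-to-channel mapping for slotted channel hopping:
# the channel changes every slot, but a (slot, channel offset) pair
# always maps to exactly one frequency.
HOP_SEQUENCE = list(range(11, 26))   # IEEE 802.15.4 channels 11-25 at 2.4 GHz

def hop_channel(asn: int, channel_offset: int) -> int:
    """Physical channel for a given absolute slot number (ASN) and
    channel offset, per the usual (ASN + offset) mod |sequence| rule."""
    return HOP_SEQUENCE[(asn + channel_offset) % len(HOP_SEQUENCE)]

# Two links with different offsets never collide in the same slot.
for asn in range(4):
    print(asn, hop_channel(asn, 0), hop_channel(asn, 1))
```

Two links scheduled in the same slot with different channel offsets thus land on different frequencies, and a link that fails on one frequency retries on another frequency in a later slot, which is the source of the robustness discussed next.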
Since TSCH is based on TDMA slot scheduling combined with FDMA, the delay is deterministically bounded by the time–frequency pattern. Furthermore, packet-based frequency hopping is supported to achieve high robustness against interference and other channel impairments. TSCH also supports various network topologies, including star, tree, and mesh. The TSCH mode exhibits many similarities to WirelessHART and ISA-100.11a, including slotted access, multichannel communication, and frequency hopping for mesh networks. In fact, it defines the MAC operation in more detail than WirelessHART and ISA-100.11a.

In the TSCH mode, nodes synchronize on a periodic slotframe consisting of a number of time slots. Each node obtains synchronization, channel hopping, time slot, and slotframe information from Enhanced Beacons (EBs) that are periodically sent by other nodes to advertise the network. The slots may be dedicated to one link or shared among links. A dedicated link is defined as the pairwise assignment of a directed communication between nodes in a given time slot on a given channel offset. Hence, a link between communicating nodes can be represented by a pair specifying the time slot in the slotframe and the channel offset used by the nodes in that time slot. However, the TSCH standard does not specify how to derive an appropriate link schedule. Since collisions may occur in shared slots, an exponential backoff algorithm is used to retransmit a packet in the case of a transmission failure, to avoid repeated collisions. Differently from the original IEEE 802.15.4 CSMA/CA algorithm, the backoff mechanism is activated only after a collision is experienced, rather than waiting for a random backoff time before the transmission.

Deterministic and Synchronous Multichannel Extension: DSME is designed to support the stringent timeliness and reliability requirements of factory automation, home automation, smart metering, smart buildings, and patient monitoring <cit.>. DSME extends the beacon enabled mode of the IEEE 802.15.4 standard, relying on the superframe structure consisting of CAPs and CFPs, by increasing the number of GTS time slots and frequency channels used <cit.>. The channel access of DSME relies on a specific structure called the multi-superframe. Each multi-superframe consists of a collection of superframes as defined in IEEE 802.15.4. The beacon transmission interval is an integer number of multi-superframes with no inactive period. By adopting a multi-superframe structure, DSME aims to support both periodic and aperiodic (or event-driven) traffic, even in large multihop networks. In a DSME network, some coordinators periodically transmit an EB, used to keep all the nodes synchronized and to allow new nodes to join the network. The distributed beacon and GTS scheduling algorithms of DSME allow quick reaction to time-varying traffic and changes in the network topology. Specifically, DSME allows the establishment of dedicated links between any two nodes of a multihop mesh network with deterministic delay. DSME is scalable and does not suffer from a single point of failure, because beacon scheduling and slot allocation are performed in a distributed manner. This is the major difference from TSCH, which relies on a central entity. Given the large variety of options and features, DSME turns out to be one of the most complex modes of the IEEE 802.15.4e standard. Due to this complexity, DSME still lacks a complete implementation.
Moreover, all current studies on DSME are limited to single-hop or cluster-tree networks, and do not investigate the potential of mesh topologies.

Low Latency Deterministic Network: LLDN is designed for very low latency applications of industrial automation, where a large number of devices sense and actuate the factory production in a specific location <cit.>. Differently from TSCH and DSME, LLDN is designed only for star topologies, where a number of nodes periodically send data to a central sink using just one frequency channel. Specifically, the design target of LLDN is to support data transmissions from 20 sensor nodes every 10 ms. Since the former IEEE 802.15.4 standard does not meet this constraint, the LLDN mode defines a fine-granular deterministic TDMA access. Similarly to IEEE 802.15.4, each LLDN device can obtain exclusive access to a time slot in the superframe to send data to the PAN coordinator. The number of time slots in a superframe determines how many nodes can access the channel. If many nodes need to send their packets, the PAN coordinator needs to be equipped with multiple transceivers, so as to allow simultaneous communications on different channels. In LLDN, short MAC frames with just a 1-octet MAC header are used to accelerate frame processing and reduce transmission time. Moreover, a node can omit the address fields in the header, since all packets are destined to the PAN coordinator. Compared with TSCH, LLDN nodes do not need to wait after the beginning of the time slot in order to start transmitting. Moreover, LLDN provides a group ACK feature. Hence, time slots can be much shorter than those of TSCH, since it is not necessary to accommodate waiting times and ACK frames.

§.§.§ 6LoWPAN
6LoWPAN provides a compaction and fragmentation mechanism to efficiently transport IPv6 packets in IEEE 802.15.4 frames <cit.>. The IPv6 header is compressed by removing fields that are not needed or always have the same contents, and by inferring IPv6 addresses from link layer addresses. Moreover, fragmentation rules are defined so that multiple IEEE 802.15.4 frames can carry one IPv6 packet. 6LoWPAN thus allows low-power devices to communicate using IP.

§.§.§ RPL
RPL is an IPv6 routing protocol for Low-Power and Lossy Networks (LLNs), proposed to meet the delay, reliability, and high availability requirements of critical applications in industrial and environmental monitoring <cit.>. RPL is a distance vector and source routing protocol. It can operate on top of any link layer mechanism, including the IEEE 802.15.4 PHY and MAC. RPL adopts Destination Oriented Directed Acyclic Graphs (DODAGs), where the most popular destination nodes act as the roots of the directed acyclic graphs. Directed acyclic graphs are tree-like structures that allow the nodes to associate with multiple parent nodes. The selection of a stable set of parents for each node is based on an objective function. The objective function determines the translation of routing metrics, such as delay, link quality, and connectivity, into ranks, where the rank is an integer that strictly increases moving away from the root. RPL leaves the routing metric open to the implementation <cit.>.

§.§.§ 6TiSCH
6TiSCH integrates an Internet-enabled IPv6-based upper stack, including 6LoWPAN and RPL, with the IEEE 802.15.4 TSCH link layer <cit.>. This integration achieves industrial performance in terms of reliability and power consumption while providing an IP-enabled upper stack.
The 6TiSCH Operation Sublayer (6top) is used to manage the TSCH schedule by allocating and deallocating resources within the schedule, monitoring performance, and collecting statistics. 6top uses either centralized or distributed scheduling. In centralized scheduling, an entity in the network collects the topology and traffic requirements of the nodes, computes the schedule, and then sends the schedule to the nodes. In distributed scheduling, nodes communicate with each other to compute their own schedules based on local topology information. 6top labels the scheduled cells as either hard or soft, depending on their dynamic reallocation capability. A hard cell is scheduled by the centralized entity and can be moved or deleted inside the TSCH schedule only by that entity. 6top maintains statistics about the network performance in the scheduled cells. This information is then used by the centralized scheduling entity to update the schedule as needed; it can also be used in the objective function of RPL. A soft cell, on the other hand, is typically scheduled by a distributed scheduling entity. If a cell performs significantly worse than other cells scheduled to the same neighbor, it is reallocated, providing an interference avoidance mechanism in the network. The distributed scheduling policy, called on-the-fly scheduling, specifies the structure and interfaces of the scheduling <cit.>. If the outgoing packet queue of a node fills up, on-the-fly scheduling negotiates additional time slots with the corresponding neighbors. If the queue is empty, it negotiates the removal of time slots.

§.§.§ IEEE 802.11
The basic 802.11 MAC layer uses the Distributed Coordination Function (DCF), with a simple and flexible exponential-backoff-based CSMA/CA and optional RTS/CTS for medium sharing <cit.>. If the medium is sensed idle, the transmitting node transmits its frame. Otherwise, it postpones its transmission until the medium is sensed free for a time interval equal to the sum of an Arbitration Inter-Frame Spacing (AIFS) and a random backoff interval. DCF experiences a random and unpredictable backoff delay. As a result, periodic real-time NCS packets may miss their deadlines due to long backoff delays, particularly under congested network conditions.

To enforce timeliness in WLANs, the original 802.11 MAC defines another coordination function called the Point Coordination Function (PCF). This is available only in infrastructure mode, where nodes are connected to the network through an Access Point (AP). APs send beacon frames at regular intervals. Between these beacon frames, PCF defines two periods: the Contention Free Period (CFP) and the Contention Period (CP). While DCF is used in the CP, in the CFP the AP sends contention-free-poll packets to nodes, granting them the right to send a packet. Hence, each node has an opportunity to transmit frames during the CFP. In PCF, data exchange is based on a periodically repeated cycle (e.g., a superframe) within which time slots are defined and exclusively assigned to nodes for transmission. PCF does not differentiate between traffic types, and thus does not fulfill the deadline requirements of real-time control systems. Furthermore, this mode is optional and is not widely implemented in WLAN devices.

§.§.§ IEEE 802.11e
As an extension of the basic DCF mechanism of 802.11, 802.11e enhances the DCF and the PCF with a new coordination function called the Hybrid Coordination Function (HCF) <cit.>.
Similar to the legacy 802.11 MAC, there are two channel access methods within the HCF, namely, Enhanced Distributed Channel Access (EDCA) and HCF Controlled Channel Access (HCCA). Both EDCA and HCCA define traffic categories to support various QoS requirements.

IEEE 802.11e EDCA provides differentiated access to individual traffic classes, known as Access Categories (ACs), at the MAC layer. A node with high-priority traffic essentially waits slightly less before sending its packet than a node with low-priority traffic. This is accomplished through a variation of CSMA/CA using a shorter AIFS and contention window range for higher-priority packets. Considering the real-time requirements of NCSs, the periodic NCS traffic should be assigned to an AC with high priority <cit.>, and saturation must be avoided for high-priority ACs <cit.>. HCCA extends PCF by supporting parametric traffic and comes close to actual transmission scheduling. Both PCF and HCCA enable contention-free access to support collision-free and time-bounded transmissions. In contrast to PCF, HCCA allows CFPs to be initiated at almost any time, to support QoS differentiation. The coordinator drives the data exchanges at runtime according to specific rules, depending on the QoS of the traffic demands. Although HCCA is quite appealing, like PCF it is not widely implemented in network equipment. Hence, some studies adapt the DCF and EDCA mechanisms for practical real-time control applications <cit.>.

§.§ Wireless Network Parameters
To fulfill the control system requirements, the bandwidth of the wireless network needs to be allocated to high-priority sensing and actuation data with specific deadline requirements. However, existing QoS-enabled wireless standards do not explicitly consider these deadline requirements and thus lead to unpredictable WNCS performance <cit.>. The wireless network parameters determine the probability distribution of the critical interactive system variables. Design parameters at different layers include the transmission power and rate of the nodes and the decoding capability of the receiver at the physical layer, the protocol for channel access and the energy saving mechanism at the MAC layer, and the protocol for packet forwarding at the routing layer.

§.§.§ Physical Layer
The physical layer parameters that determine the values of the critical interactive system variables are the transmit power and rate of the network nodes. The decoding capability of the receiver depends on the signal-to-interference-plus-noise ratio (SINR) at the receiver and on the SINR criterion. The SINR is the ratio of the signal power to the total power of noise and interference, while the SINR criterion is determined by the transmission rate and the decoding capability of the receiver. An increase in the transmit power of the transmitter increases the SINR at the receiver. However, an increase in the transmit power of neighboring nodes decreases the SINR, due to the increase in interference. Optimizing the transmit power of neighboring nodes is therefore critical in achieving the SINR requirements at the receivers. The transmit rate determines the SINR threshold at the receivers: as the transmit rate increases, the required SINR threshold increases. Moreover, depending on the decoding capability of the receiver, there may be multiple SINR criteria.
For instance, in successive interference cancellation, multiple packets can be received simultaneously by extracting multiple signals from the received composite signal through successive decoding <cit.>. IEEE 802.15.4 allows the adjustment of both transmit power and rate. However, WirelessHART and ISA-100.11a use fixed power and rate, operating in a suboptimal region.

§.§.§ Medium Access Control
MAC protocols fall into one of three categories: contention-based access, schedule-based access, and hybrid access protocols.

Contention-based Access Protocol: Contention-based random access protocols used in WNCS mostly adopt the CSMA/CA mechanism of IEEE 802.15.4. The parameters that determine the probability distribution of delay, the message loss probability, and the energy consumption include the minimum and maximum values of the backoff exponent, denoted by macMinBE and macMaxBE, respectively, and the maximum number of backoff stages, denoted macMaxCSMABackoffs. Similarly, the corresponding parameters for the IEEE 802.11 MAC include the IFS time, the contention window size, the number of attempts to sense a clear channel, and the retransmission limits due to missing ACKs.

The energy consumption of CSMA/CA has been shown to be dominated mostly by constant listening to the channel <cit.>. Therefore, various energy conservation mechanisms adopting low duty-cycle operation, such as T-MAC and B-MAC, have later been proposed <cit.>. In low duty-cycle operation, the nodes periodically cycle between a sleep state and a listening state, with corresponding sleep and listen durations. Low duty-cycle protocols may be synchronous or asynchronous. In synchronous duty-cycle protocols, the listen and sleep times of neighboring nodes are aligned in time <cit.>. However, this requires extra overhead for synchronization and the exchange of schedules. In asynchronous duty-cycle protocols, on the other hand, the transmitting node sends a long preamble <cit.> or multiple short preambles <cit.> to guarantee the wakeup of the receiver node. The duty-cycle parameters, i.e., the sleep time and listen time, significantly affect the delay, message loss probability, and energy consumption of the network. A larger sleep time reduces the energy consumed in idle listening at the receiver, while increasing the energy consumption at the transmitter due to the transmission of longer preambles. Moreover, an increase in sleep time significantly degrades message delay and reliability due to the higher contention in the medium as traffic increases.

Schedule-based Access Protocol: Schedule-based protocols assign time slots, possibly of variable length, and frequency bands to a subset of nodes for concurrent transmission. Since the nodes know when to transmit or receive a packet, they can put their radios in sleep mode when they are not scheduled for any activity. Scheduling algorithms can be classified into two categories: fixed priority scheduling and dynamic priority scheduling <cit.>. In fixed priority scheduling, each flow is assigned a fixed priority off-line as a function of its periodicity parameters, including the sampling period and the delay constraint. For instance, in rate monotonic and deadline monotonic scheduling, the flows are assigned priorities as functions of their sampling periods and deadlines, respectively: the shorter the sampling period or deadline, the higher the priority.
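As a small illustration of these fixed-priority rules, the sketch below orders three hypothetical flows under the rate monotonic and deadline monotonic policies; the flow names and parameters are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Flow:
    name: str
    period: float    # sampling period, seconds
    deadline: float  # relative deadline, seconds

# Hypothetical sensor flows sharing one wireless network.
flows = [Flow("pressure", 0.10, 0.08),
         Flow("temperature", 0.50, 0.40),
         Flow("vibration", 0.02, 0.02)]

# Rate monotonic: shorter sampling period -> higher priority.
rm = sorted(flows, key=lambda f: f.period)
# Deadline monotonic: shorter relative deadline -> higher priority.
dm = sorted(flows, key=lambda f: f.deadline)

print("RM priority order:", [f.name for f in rm])
print("DM priority order:", [f.name for f in dm])
```

Because the ordering is computed once, off-line, from static flow parameters, it never reflects how close an individual transmission currently is to its deadline; this is the limitation discussed next.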
Fixed priority scheduling algorithms are preferred for their simplicity and low scheduling overhead, but they are typically non-optimal, since they do not take the urgency of transmissions into account. In dynamic priority scheduling algorithms, on the other hand, the priority of a flow changes over time depending on the execution of the schedule. For instance, in Earliest Deadline First (EDF) scheduling, the transmission closest to its deadline is given the highest priority and is therefore scheduled next, whereas in the least laxity first algorithm, the priority is assigned based on the slack time, defined as the amount of time that would remain after the transmission if it started now. Although dynamic priority scheduling algorithms have higher scheduling overhead, they perform much better due to the dynamic adjustment of priorities over time.

Hybrid Access Protocol: Hybrid protocols aim to combine the advantages of contention-based random access and schedule-based protocols: random access eliminates the overhead of scheduling and synchronization, whereas scheduled access provides message delay and reliability guarantees by eliminating collisions. IEEE 802.15.4 already provides such a hybrid architecture for flexible usage depending on the application requirements <cit.>.

§.§.§ Network Routing
At the network layer, the routing protocol plays an extremely important role in achieving high reliability and real-time forwarding together with energy efficiency for large-scale WNCS, such as large-scale aircraft avionics and industrial automation. Various routing protocols have been proposed to achieve energy efficiency for traditional WSN applications <cit.>. However, to deal with much harsher and noisier environments, the routing protocol must additionally provide reliable real-time transmission <cit.>. Multipath routing has been extensively studied in wireless networks for overcoming wireless errors and improving routing reliability <cit.>. Most previous works focus on identifying multiple link- or node-disjoint paths to guarantee energy efficiency and robustness against node failures <cit.>.

ISA-100.11a and WirelessHART employ a simple and reliable routing mechanism called graph routing to enhance network reliability through multiple routing paths. In graph routing, the network manager builds multiple graphs for each flow. Each graph includes device numbers and a forwarding list with a unique graph identification. Based on these graphs, the manager generates the corresponding sub-routes for each node and transmits them to every node. Hence, all nodes on the path to the destination are pre-configured with graph information that specifies the neighbors to which packets may be forwarded. For example, if a link of one sub-route is broken, the node forwards the packet to a neighbor on another sub-route corresponding to the same flow. There has been increasing interest in developing new approaches to graph routing with routing costs that depend on reliability, delay, and energy consumption <cit.>.

RPL employs the objective function to specify the selection of routes that meet the QoS requirements of the applications. Various routing metrics have been proposed for the objective function to compute the rank value of the nodes in the network. The rank represents the virtual coordinate of the node, i.e., its distance to the DODAG root with respect to a given metric.
Some approaches propose the use of a single metric, such as the link expected transmission count <cit.>, node remaining energy, link delay <cit.>, MAC-based metrics considering packet losses due to contention <cit.>, and queue utilization <cit.>. <cit.> proposes two methods, namely simple combination and lexical combination, for combining two routing metrics among hop count, expected transmission count, remaining energy, and received signal strength indicator. In simple combination, the rank of the node is determined by a composition function given by the weighted sum of the ranks of the two selected metrics. In lexical combination, the node selects the neighbor with the lower value of the first selected metric and, if two neighbors are equal in the first metric, the one with the lower value of the second metric. Finally, <cit.> combines a set of these metrics based on fuzzy parameters in order to provide a routing decision configurable to the application requirements.

§ CONTROL SYSTEM ANALYSIS AND DESIGN
This section provides a brief overview of the analysis and design of control systems that deal with the non-ideal critical interactive system variables resulting from the wireless network. The presence of an imperfect wireless network degrades the performance of the control loop and can even lead to instability. Therefore, it is important to understand quantitatively how these interactive system variables influence the closed-loop performance. Fig. <ref> illustrates the structure of this section and the relations among its parts.

Control system analysis has two main uses here: requirement definition for the network design, and the actual control algorithm design. First, since the control cost depends on network performance measures such as message loss and delay, an explicit set of requirements for the wireless network design is determined to meet a certain control performance. This allows the network design to be optimized to meet the constraints imposed by the control system, instead of merely improving reliability, delay, or energy efficiency. Second, based on the control system analysis, the controller is designed to guarantee the control performance under imperfect network operation.

Despite the interdependence of the three critical interactive variables of sampling period, message delay, and message dropout discussed in Section <ref>, much of the available literature on NCS considers only a subset of these variables, due to the high complexity of the problem. Since any practical wireless network exhibits imperfect performance, WNCS designers must carefully consider the performance feasibility and tradeoffs. Previous studies analyze the stability of control systems by considering either only the wireless sensor–controller channel, e.g., <cit.>, or both the sensor–controller and controller–actuator channels, e.g., <cit.>.

Hybrid systems and Markov jump linear systems have been applied to the modeling and control of NCS under message dropout and message delay. The hybrid or switched system approach refers to continuous-time dynamics with (isolated) discrete switching events <cit.>. Mathematically, these systems are usually described by a collection of indexed differential or difference equations. For NCS, the continuous-time control system can be modelled as the continuous dynamics, while network effects such as message dropouts and message delays are modelled as the discrete dynamics <cit.>.
Compared to switched systems, in Markov jump linear systems the mode switches are governed by a stochastic process that is statistically independent of the state values <cit.>. Markov jump systems may provide less conservative requirements than switched systems. However, the network performance must support the independent transitions between states. In other words, this technique is effective if the network performance is statistically independent over time or can be captured by a simple Markov model.

The above theoretical approaches can be used to derive network requirements as functions of the sampling period, message dropout, and message delay. Some network requirements are explicitly related to the message dropout and message delay, such as the maximum allowable message dropout probability, the number of consecutive message dropouts, and the message delay. Furthermore, since the various analytical tools only provide sufficient conditions for closed-loop stability, the resulting requirements might be too conservative. In fact, many existing results have been shown to be conservative in simulation studies, and finding tighter bounds on the network is an area of great interest <cit.>. To highlight the importance of the sampling mechanism, we classify NCS analysis and design methods into time-triggered sampling and event-triggered sampling.

§.§ Time-Triggered Sampling
Time-triggered NCSs can be classified into two categories based on the relationship between the sampling period and the message delay: hard sampling period and soft sampling period. Under a hard sampling period, the message delay is smaller than the sampling period: the network discards a message if it is not successfully transmitted within its sampling period and then tries to transmit the latest sampled message. Under a soft sampling period, by contrast, the node continues to transmit outdated messages even after their sampling period has elapsed. The wireless network design must take into account which time-triggered sampling method is implemented.

§.§.§ Hard Sampling Period
The message dropouts of NCSs are generally modelled as stochastic variables, with or without a bound on the number of consecutive message dropouts. Hence, we classify the hard sampling period case into unbounded consecutive message dropout and bounded consecutive message dropout.

Unbounded Consecutive Message Dropout: When the controller is collocated with the actuators, a Markov jump linear system can be used to analyze the effect of the message dropout <cit.>. In <cit.>, the message dropout is modelled as a Bernoulli random process with dropout probability p ∈ [0, 1). Under the Bernoulli dropout model, the system model of the augmented state is a special case of a discrete-time Markov jump linear system. Matrix theory is used to show exponential stability of the NCS with dropout probability p. The stability condition, expressed as a linear matrix inequality, is a useful tool both for designing the output feedback controller and for deriving the maximum allowable probability of message dropouts for the network design. However, the main results of <cit.> are hard to apply to wireless network design, since they ignore the message delay for a fixed sampling period. Furthermore, the link reliability of wireless networks does not follow a Bernoulli random process, since wireless links are highly correlated over time and space in practice <cit.>.
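Despite this caveat, the Bernoulli model is useful for building intuition about a maximum allowable dropout probability. The following toy simulation closes a scalar unstable plant over a lossy sensor–controller link; the plant, gain, and probabilities are invented for illustration, and no dropout compensation is applied.

```python
import random

def simulate(p_drop: float, a: float = 1.6, k: float = 0.8,
             steps: int = 50, x0: float = 1.0, seed: int = 0) -> float:
    """Scalar plant x[t+1] = a*x[t] - u[t]. The control u[t] = k*a*x[t]
    is applied only when the sensor-controller message gets through
    (Bernoulli, success probability 1 - p_drop); on a dropout the
    actuator simply applies u = 0. Returns |x| after `steps` samples."""
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        u = k * a * x if rng.random() >= p_drop else 0.0
        x = a * x - u
    return abs(x)

for p in (0.0, 0.2, 0.5, 0.8):
    print(f"p_drop={p:.1f}  |x_50|={simulate(p):.3g}")
```

With these numbers, the loop contracts by a factor a(1 − k) = 0.32 on a successful sample and expands by a = 1.6 on a dropout, so the state converges for small p and diverges as p approaches one. The cited analyses formalize exactly this kind of threshold as the maximum allowable dropout probability.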
While the sensor–controller communication is considered without any delays in <cit.>, the sensor–controller and controller–actuator channels are modelled in <cit.> as two switches indicating whether the corresponding message is dropped or not. A discrete-time switched system is used to model the closed-loop NCS with message dropouts when the message delay and sampling period are fixed. Using switched system theory, sufficient conditions for exponential stability are presented in terms of nonlinear matrix inequalities. The proposed methods provide an explicit relation between the message dropout rate and the stability of the NCS. Such a quantitative relation enables the design of a state feedback controller guaranteeing the stability of the closed-loop NCS under a given message dropout rate. The network may assign a fixed time slot to the single packet associated with each message in order to guarantee a constant message delay. However, since this does not allow any retransmissions, it significantly degrades the message dropout rate. Another way to achieve constant message delay is to buffer the received packets at the sink; however, this again degrades the control performance through a higher average delay. To apply the results of <cit.>, the wireless network needs to monitor the message dropout probability and adapt its operation to meet the maximum allowable probability of message dropouts. These results can further be used to save network resources while preserving the stability of the NCS, by deliberately dropping messages at a certain rate.

In fact, most NCS research focuses on stability analysis and the design of the control algorithm, rather than on the explicit derivation of network requirements useful for wireless network design. Since the joint design of controllers and wireless networks requires deriving the message dropout probability and message delay needed to achieve a desired control cost, <cit.> formulates the control cost function as a function of the sampling period, message dropout probability, and message delay. Most NCS research uses the linear quadratic cost function as the control objective. The model combines the stochastic models of the message dropout <cit.> and the message delay <cit.>. Furthermore, the estimator and controller are obtained by extending the optimal stochastic estimator and controller of <cit.>. Given a control cost, numerical methods are used to derive the set of network requirements imposed on the sampling period, message dropout, and message delay. One major drawback is the high computational complexity of quantifying the control cost in order to find the feasible region of the network requirements.

Bounded Consecutive Message Dropout: Some NCS works <cit.> assume a limited number of consecutive message dropouts; such hard requirements are unreasonable for wireless networks, where the packet loss probability is greater than zero at any point in time. Hence, other approaches <cit.> set stochastic constraints on the maximum allowable number of consecutive message dropouts. Control theory provides deterministic bounds on the maximum allowable number of consecutive message dropouts <cit.>. In <cit.>, a switched linear system is used to model NCSs with constant message delay and arbitrary but finite message dropouts over the sensor–controller channel.
The message dropout is said to be arbitrary if the sampling sequence of the successfully applied actuation is an arbitrary variable within the maximum number of consecutive message dropouts. Based on the stability criterion of the switched system, a linear matrix inequality is used to derive sufficient conditions for stability. The maximum allowable bound on consecutive message dropouts and the feedback controllers are then obtained from the feasible solution of a linear matrix inequality. In <cit.>, a Lyapunov-based characterization of stability is provided, and explicit bounds on the Maximum Allowable Transfer Interval (MATI) and the Maximally Allowable Delay (MAD) are derived to guarantee the stability of NCSs under time-varying sampling periods and time-varying message delays. If there are message dropouts under time-triggered sampling, their effect is modelled as a time-varying sampling period from the receiver's point of view. The MATI is the upper bound on the transmission interval for which stability can be guaranteed. If the network performance exceeds the given MATI or MAD, the stability of the overall system cannot be guaranteed. The developed results lead to tradeoff curves between MATI and MAD. These tradeoff curves provide effective quantitative information to the network designer when selecting the requirements to guarantee stability and a desirable level of control performance.

Many control applications, such as wireless industrial automation <cit.>, air transportation systems <cit.>, and autonomous vehicular systems <cit.>, set a stochastic MATI constraint in the form of keeping the time interval between subsequent state vector reports below the MATI value with a predefined probability, to guarantee the stability of the control systems. The stochastic MATI constraint is an efficient abstraction of the performance of the control system, since it is directly related to the deadline used in the real-time scheduling of the network design <cit.>.

§.§.§ Soft Sampling Period
Sometimes it is reasonable to relax the strict assumption that the message delay is smaller than the sampling period. Some works assume the eventual successful transmission of all messages, under various types of deterministic or stochastic message delays <cit.>. Since packet retransmissions associated with a message are allowed beyond its sampling period, a packet loss can be treated as a message delay. While the actuating signal is updated after the message delay in each sampling period if the delay is smaller than the sampling period <cit.>, delays longer than one sampling period may result in more than one message (or none) arriving during a single sampling period. This makes the derivation of recursive formulas for the augmented matrix of the closed-loop system harder than in the hard sampling period case. To avoid high computational complexity, an alternative approach in <cit.> defines a slightly different augmented state in order to use the stability results of switched systems. Even though the resulting stability criterion defines MATI and MAD requirements, there are fundamental limits to applying this approach to wireless networks. The stability results hold only if there is no message dropout for the fixed sampling period and constant message delay, since the augmented matrix considered is a function of the fixed sampling period and the constant message delay. Hence, the MATI and MAD requirements are only used to set the fixed sampling period and the message delay deadline.
In contrast to these fixed-parameter designs, the NCS of <cit.> uses time-varying sampling and time-varying message delays to take the message dropout and the stochastic message delay into account. Hence, the MATI and MAD requirements of <cit.> are more practical control constraints for wireless network design than those of <cit.>. In <cit.>, a stochastic optimal controller is proposed to compensate for long message delays over the sensor–controller channel with a fixed sampling period. The stochastic delay is assumed to be bounded with a known probability density function. Hence, the network manager needs to provide the stochastic delay model by analyzing delay measurements. In both <cit.> and <cit.>, the NCSs assume the eventual successful transmission of all messages. This approach is only reasonable if the MATI is large enough compared to the sampling period to guarantee the eventual successful transmission of messages with high probability. However, it is not applicable to fast dynamical systems (i.e., systems with a small MATI requirement).

While <cit.> do not explicitly consider message dropouts, <cit.> jointly considers message dropouts and message delays longer than the fixed sampling period over the sensor–controller channel. From the derived stability criteria, the controller is designed and the MAD requirement is determined under a fixed message dropout rate by solving a set of matrix inequalities. Even though the message dropout and message delay are considered, the tradeoff between performance measures is not explicitly derived. However, it is still possible to obtain tradeoff curves by using numerical methods. The network is allowed to transmit the packet associated with the message within the MAD. The network also monitors the message dropout rate. Stability is guaranteed if the message dropout rate is lower than its maximum allowable rate. Furthermore, the network may discard outdated messages to utilize the network resources efficiently as long as the message dropout rate requirement is satisfied.

§.§ Event-Triggered Sampling

Event-triggered control is reactive, since it generates sensor measurements and control commands when the plant state deviates by more than a certain threshold from a desired value. On the other hand, self-triggered control is proactive, since it computes the next sampling or actuation instance ahead of the current time. Event- and self-triggered control have been demonstrated to significantly reduce the network traffic load <cit.>. Motivated by these advantages, a systematic design of event-based implementations of stabilizing feedback control laws was performed in <cit.>. Event-triggered and self-triggered control systems consist of two elements, namely, a feedback controller that computes the control command, and a triggering mechanism that determines when the control input has to be updated again. The triggering mechanism directly affects the traffic load <cit.>. There are many proposals for the triggering rule in the event-triggered literature. Suppose that the state x(t) of the physical plant is available. One of the traditional objectives of event-triggered control is to maintain the condition

∥ x(t) - x(t_k) ∥ ≤ δ,

where t_k denotes the time instant when the last control task is executed (the last event time) and δ > 0 is a threshold <cit.>. The next event time instant is defined as

t_{k+1} = inf{ t > t_k : ∥ x(t) - x(t_k) ∥ > δ }.

The sensor of the event-triggered control loop continuously monitors the current plant state and evaluates the triggering condition.
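As a concrete illustration of this rule, the following sketch simulates the triggering condition on a discretized linear plant; the plant matrix, the threshold, and the horizon are illustrative assumptions, not taken from the cited works.

import numpy as np

def event_triggered_run(A, x0, delta, steps):
    """Simulate the rule ||x(t) - x(t_k)|| > delta on a discretized
    linear plant; returns the event (sampling) instants."""
    x = x0.copy()
    x_last = x0.copy()          # x(t_k): state at the last event
    events = [0]
    for t in range(1, steps):
        x = A @ x               # state evolution between events
        if np.linalg.norm(x - x_last) > delta:
            events.append(t)    # threshold crossed: sample and transmit
            x_last = x.copy()
    return events

A = np.array([[1.0, 0.01], [0.0, 0.99]])   # discretized plant (assumed)
events = event_triggered_run(A, np.array([1.0, 0.5]), delta=0.05, steps=500)
print(len(events), events[:10])

The irregular spacing of the returned event instants is exactly what makes the resulting network traffic hard to schedule, as discussed next.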
Network traffic is generated if the plant state deviates by more than the threshold. The network design problem is particularly challenging because the wireless network must support this randomly generated traffic. Furthermore, event-triggered control does not provide high energy efficiency, since the node must continuously activate the sensing part of the hardware platform. Self-triggered control determines its next execution time based on the previously received data and the triggering rule <cit.>. Self-triggered control is basically an emulation of an event-triggered rule, where one considers the model of the plant and controller to compute the next triggering time. Hence, it is predictive sampling based on the plant models and controller rules. This approach is generally more conservative than the event-triggered approach since it is based on approximate models and predicted events. The explicit allocation of network resources based on these predictions improves the real-time performance and energy efficiency of the wireless network. However, since event- and self-triggered control generate fewer messages, message loss and message delay might be more critical than for time-triggered control <cit.>.

§.§ Comparison Between Time- and Event-Triggered Sampling

One of the fundamental issues is to compare the performance of time-triggered sampling and event-triggered sampling approaches under various channel access mechanisms <cit.>. In fact, much research on event-based control shows performance improvements, since event triggering often reduces network utilization <cit.>. However, recent works on event-based control using random access reveal control performance limitations when there is a large number of control loops <cit.>. <cit.> considers a control system where a number of time-triggered or event-triggered control loops are closed over a shared communication network. This research is one of the inspiring works on the WNCS co-design problem, where both the control policy and the network scheduling policy are taken into account. The overall target of the framework is to minimize the sum of the stationary state variances of the control loops. As the control law, a Dirac pulse is applied to achieve the minimum plant-state variance. The sampling can be either time-triggered or event-triggered, depending on the MAC scheme, such as the traditional TDMA, FDMA, and CSMA schemes. Intuitively, TDMA is used for time-triggered sampling, while event-triggered sampling is applied for CSMA. Based on the previous work <cit.>, the event-triggered approach is also used for FDMA, since event-triggered sampling with a minimum event interval T performs better than time-triggered sampling with the same time interval T. The authors of <cit.> assume that once the MAC protocol gains the network resource, the network is busy for a specific delay from sensor to actuator, after which the control command is applied to the plant. The simulation results show that event-triggered control using CSMA gives the best performance. Even though the main tradeoffs and conclusions of the paper are interesting, some assumptions are not realistic. In practice, Dirac pulse controls are unrealistic due to the capability limits of actuators. For simplicity, the authors assume that the contention resolution time of CSMA is negligible compared to the transmission time. This assumption is not realistic for general wireless channel access schemes such as IEEE 802.15.4 and IEEE 802.11.
Furthermore, the total bandwidth resource of FDMA is assumed to scale in proportion to the number of plants, such that the transmission delay from sensor to actuator is inversely proportional to the number of plants. These assumptions are not practical, since the frequency spectrum is a limited resource for general wireless networks; thus, further studies are needed.

While most previous works on event-based control consider a single control loop or a small number of control loops, <cit.> compares time-triggered control and event-based control for a NCS consisting of a large number of plants. The pure ALOHA protocol is used for the event-based control of NCSs. The authors show that packet losses due to collisions drastically reduce the performance of event-based control if packets are transmitted whenever the event-based control generates an event. Note that the instability of the ALOHA network itself is a well-known problem in communications <cit.>. It turns out that in this setup time-triggered control is superior to event-based control. The same authors also analyze the tradeoff between delay and loss for event-based control with slotted ALOHA <cit.>. They show that slotted ALOHA significantly improves the control cost of the state variance with respect to pure ALOHA. However, time-triggered control still performs better. Therefore, it is hard to generalize the performance comparison between time-triggered sampling and event-triggered sampling approaches, since it strongly depends on the network protocol and topology.

§ WIRELESS NETWORK DESIGN TECHNIQUES FOR CONTROL SYSTEMS

This section presents various design and optimization techniques of wireless networks for WNCS. We distinguish between the interactive design approach and the joint design approach. In the interactive design approach, the wireless network parameters are tuned to satisfy given constraints on the critical interactive system variables, possibly enforced by the required control system performance. In the joint design approach, the wireless network and control system parameters are jointly optimized considering their interaction through the critical system variables. Fig. <ref> illustrates the section structure related to the previous Sections <ref> and <ref>. In Table <ref>, we summarize the characteristics of the related works and indicate whether the requirements and the communication and control parameters have been included in the network design or optimization for WNCS. Table <ref> classifies previous design approaches of WNCS based on control and communication aspects. Furthermore, Table <ref> categorizes previous works based on the wireless standards described in Section <ref>.

§.§ Interactive Design Approach

In the interactive design approach, wireless network parameters are tuned to satisfy the given requirements of the control system. Most of the interactive design approaches assume time-triggered control systems, in which sensor samples are generated periodically at predetermined rates. They generally assume that the requirements of the control systems are given in the form of upper bounds on the message delay or message dropout with a fixed sampling period. The adoption of wireless communication technologies for supporting control applications heavily depends on the ability to guarantee bounded service times for messages, at least from a probabilistic point of view.
This aspect is particularly important in control systems, where the real-time requirement is considered much more significant than other performance metrics, such as throughput, that are usually important in other application areas. Note that the real-time performance of wireless networks heavily depends on the message delay and message dropout. Hence, we mainly discuss the deadline-constrained MAC protocols of IEEE 802.15.4 and IEEE 802.11. Different analytical techniques can provide the explicit requirements of control systems for wireless networks, as we discussed in Section <ref>. The focus of previous research is mainly on the design and optimization of the MAC, network resource scheduling, and routing layers, with limited efforts additionally considering physical layer parameters.

§.§.§ Medium Access Control

Research on real-time 802.15.4 and 802.11 networks can be classified into two groups. The first group of solutions, called contention-based access, includes adaptive MAC protocols for QoS differentiation. They adapt the parameters of the backoff mechanism, retransmissions, and duty-cycling depending on the constraints. The second group, called schedule-based access, relies on the contention-free scheduling of a single-hop network.

Contention-based Access: Contention-based random access protocols for WNCS aim to tune the parameters of the CSMA/CA mechanism of IEEE 802.15.4 and IEEE 802.11, and the duty-cycling, to improve delay, packet loss probability, and energy consumption performance. The adaptive tuning algorithms rely on either measurement-based or model-based adaptation. The measurement-based adaptation techniques do not require any network model but rather depend on local measurements of packet delivery characteristics. Early works on IEEE 802.15.4 propose adaptive algorithms that dynamically change the value of only a single parameter. <cit.> adaptively determine the minimum contention window size, denoted by macMinBE, to decrease the delay and packet loss probability of the nodes and increase the overall throughput. The references <cit.> extend these studies to autonomously adjust all the CSMA/CA parameters. The ADAPT protocol <cit.> adapts the parameter values with the goal of minimizing energy consumption while meeting a packet delivery probability based on local estimates. However, ADAPT tends to oscillate between two or more parameter sets, which results in high energy consumption. <cit.> solves this oscillation problem by triggering the adaptation mechanism only upon the detection of a change in operating conditions. Furthermore, <cit.> optimizes the duty-cycle parameters based on a linear increase/linear decrease of the duty-cycle, depending on the comparison of the successfully received packet rate with its target value, while minimizing the energy consumption (see the sketch below). Model-based parameter optimization mainly uses theoretical or experimental derivations of the probability distribution of delay, packet error probability, and energy consumption. A Markov model per node of IEEE 802.15.4 is used in <cit.> to capture the state of each node at each moment in time. These individual Markov chains are then coupled by the memory introduced by the fixed-duration two-slot clear channel assessment. The proposed Markov model is used to derive an analytical formulation of both throughput and energy consumption in such networks. The extension of this work in <cit.> leads to the derivation of the reliability, delay, and energy consumption as functions of all the CSMA/CA protocol parameters for IEEE 802.15.4.
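A minimal sketch of the linear increase/linear decrease duty-cycle adaptation mentioned above is given below; the function name, step sizes, and bounds are illustrative assumptions and not the parameters of the cited algorithm.

def adapt_duty_cycle(duty, received_rate, target_rate,
                     step_up=0.01, step_down=0.005,
                     d_min=0.01, d_max=1.0):
    """Linear increase / linear decrease adaptation: raise the duty cycle
    when the received packet rate falls short of its target (to improve
    reliability), lower it otherwise (to save energy)."""
    if received_rate < target_rate:
        duty = min(d_max, duty + step_up)
    else:
        duty = max(d_min, duty - step_down)
    return duty

duty = 0.10
for rate in [0.80, 0.85, 0.97, 0.99, 0.92]:   # measured success rates
    duty = adapt_duty_cycle(duty, rate, target_rate=0.95)
    print(round(duty, 3))

The asymmetric step sizes bias the adaptation toward reliability, which matches the typical priority of control traffic over energy savings.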
The paper <cit.> provides analytical models of delay, reliability, and energy consumption as functions of the duty-cycle parameters by considering their effects on the random backoff of IEEE 802.15.4 before successful transmissions. These models are then used to minimize energy consumption given constraints on delay and reliability. On the other hand, <cit.> derives experiment-based models by using curve fitting techniques and validates them through extensive experiments. An adaptive algorithm is also proposed to adjust the coefficients of these models by introducing a learning phase, without any explicit information about data traffic, network topology, or MAC parameters.

Considering IEEE 802.11, a deadline-constrained MAC protocol with QoS differentiation is presented for soft real-time NCSs in <cit.>. It handles periodic traffic by using two specific mechanisms, namely, a contention-sensitive backoff mechanism and a deadline-sensitive retry limit assignment mechanism. The backoff algorithm offers bounded backoff delays, whereas the deadline-sensitive retry limit assignment mechanism differentiates the retry limits for periodic traffic in terms of the respective deadline requirements. A Markov chain model is established to describe the proposed MAC protocol and evaluate its performance in terms of throughput, delay, and reliability under a critical real-time traffic condition.

<cit.> provides experimental measurements and an analysis of 802.11g/e networks to better understand the statistical distribution of delay for real-time industrial applications. The statistical distribution of the network delay is first evaluated experimentally under traffic patterns that resemble realistic industrial scenarios with varying background traffic. Then, the experimental results are validated by means of a theoretical analysis for the unsaturated traffic condition, which is quite common in well-designed industrial communication systems. The performance evaluation shows that delays are generally bounded if the traffic on the industrial WLAN is light (below 20%). If the traffic grows higher (up to 40%), the QoS mechanism provided by EDCA can be used to achieve quasi-predictable behavior and bounded delays for selected high-priority messages.

Schedule-based Access: The explicit scheduling of transmissions allows the strict delay and reliability constraints of the nodes to be met by giving priority to the nodes with tighter constraints. To support soft real-time industrial applications, <cit.> combines various IEEE 802.11 mechanisms, such as transmission and retransmission scheduling, seamless channel redundancy, and basic bandwidth management, to improve the deterministic network performance. The proposed protocol relies on the centralized transmission scheduling of a coordinator according to the EDF strategy. Furthermore, the coordinator manages the number of retransmissions to achieve both delay and reliability targets over lossy links. In addition to scheduling, the seamless channel redundancy concurrently transmits copies of each frame on multiple distinct radio channels. This mechanism is appealing for real-time systems since it improves reliability without affecting timeliness.
Moreover, the bandwidth manager reallocates the unused bandwidth of failed data transmissions to additional attempts of other data transmissions within their deadlines.

<cit.> presents the design and implementation of a real-time wireless communication protocol called RT-WiFi to support high-speed control systems, which typically require sampling rates of 1 kHz or higher. RT-WiFi is a TDMA data link layer protocol based on the IEEE 802.11 physical layer. It provides deterministic timing performance on packet delivery. Since different control applications have different communication requirements on data delivery, RT-WiFi provides a configurable platform to adjust the design tradeoffs, including sampling rate, delay variance, and reliability.

The middleware proposed in <cit.> uses a TDMA-based method on top of 802.11 CSMA to assign specific time slots to each real-time node to send its traffic. In <cit.>, a polling-based scheduler using the EDF policy on top of the 802.11 MAC is combined with a feedback mechanism that adjusts the maximum number of transmission attempts. Moreover, <cit.> implements a real-time communication architecture based on the 802.11 standard and on the real-time networking framework RTnet <cit.>. The wireless Ralink RT2500 chipset of RTnet is used to support the strict network scheduling requirements of real-time systems. Performance indicators such as packet loss ratio and delay are experimentally evaluated by varying protocol parameters for a star topology. Experimental results show that a proper tuning of the system parameters can support robust real-time network performance.

Physical Layer Extension: <cit.> propose a priority assignment and scheduling algorithm as a function of sampling periods and transmission deadlines to provide a maximum level of adaptivity, accommodating the packet losses of time-triggered nodes and the transmissions of event-triggered nodes. The adaptivity metric is illustrated using the following example. Assume that the network consists of 4 sensor nodes, denoted by sensor node i for i ∈ [1, 4]. The packet generation period and transmission time of sensor 1 are 1 ms and t_1 = 0.15 ms, respectively. The packet generation period of sensor nodes 2, 3, and 4 is 2 ms, whereas their packet transmission times are t_2 = 0.20 ms, t_3 = 0.25 ms, and t_4 = 0.30 ms, respectively. Figs. <ref>(a) and <ref>(b) show a robust schedule where the time slots are uniformly distributed over time and the EDF schedule, respectively. The schedule given in Fig. <ref>(a) is more robust to packet losses than the EDF schedule given in Fig. <ref>(b). Indeed, suppose that the data packet of sensor 1 in the first 1 ms is not successfully transmitted. In Fig. <ref>(a), the robust schedule includes enough unallocated intervals for the retransmission of sensor 1, whereas the EDF schedule does not. Furthermore, the robust scheduler can accommodate event-triggered traffic with a smaller delay than the EDF schedule, as shown in Fig. <ref>. To see this, suppose that an additional packet of 0.2 ms transmission time is generated by an event-triggered sensor node at the beginning of the scheduling frame. Then the event-triggered packet transmission can be allocated with a delay of 0.60 ms in the robust schedule and 1.15 ms in the EDF schedule.
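The EDF baseline of this example can be reproduced with a few lines of code. The following sketch computes a non-preemptive EDF schedule for the four sensors over one 2 ms hyperperiod; it is an illustrative reconstruction, not the authors' implementation.

# (period_ms, tx_time_ms) for sensor nodes 1..4, as in the example above.
tasks = {1: (1.0, 0.15), 2: (2.0, 0.20), 3: (2.0, 0.25), 4: (2.0, 0.30)}
horizon = 2.0                                  # one hyperperiod

# Release all jobs in the hyperperiod as (deadline, release, sensor, tx_time).
jobs = []
for s, (T, c) in tasks.items():
    r = 0.0
    while r < horizon:
        jobs.append((r + T, r, s, c))
        r += T

t, schedule = 0.0, []
while jobs:
    ready = [j for j in jobs if j[1] <= t]     # released jobs
    if not ready:
        t = min(j[1] for j in jobs)            # idle until the next release
        continue
    job = min(ready)                           # EDF: earliest deadline first
    jobs.remove(job)
    deadline, release, s, c = job
    schedule.append((t, t + c, s))
    t += c                                     # non-preemptive transmission

for start, end, s in schedule:
    print(f"sensor {s}: [{start:.2f}, {end:.2f}] ms")

Running it yields the slots [0, 0.15], [0.15, 0.35], [0.35, 0.60], [0.60, 0.90], and [1.00, 1.15] ms, so the frame leaves no room for a retransmission of sensor 1 before its 1 ms deadline and delays an event-triggered packet until 1.15 ms, consistent with the comparison above.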
This uniform distribution paradigm is quantified as minimizing the maximum total active length over all subframes, where the subframe length is the minimum packet generation period among the components and the total active length of a subframe is the sum of the transmission times of the components allocated to that subframe. The proposed Smallest Period into the Shortest Subframe First (SSF) algorithm has been demonstrated to significantly decrease the maximum delay experienced by the packet of an event-triggered component compared to the EDF schedule, as shown in Fig. <ref>. Moreover, when time diversity, in the form of the retransmission of lost packets, is included in this framework, the proposed adaptive framework significantly decreases the average number of missed deadlines per unit time, defined as the average number of packets that cannot be successfully transmitted within their delay constraint, compared to the EDF schedule.

Since IEEE 802.11n encompasses several enhancements at both the PHY and MAC layers of WLAN, <cit.> analyzes performance indicators such as service time and reliability of IEEE 802.11n for industrial communication systems. The authors present both a theoretical analysis and its validation through a set of experiments. The experimental analysis shows that the IEEE 802.11n parameters can be selected to ensure deterministic behavior for real-time applications. In particular, it is shown that a good MIMO configuration of the standard enhances the communication reliability while sacrificing network throughput.

§.§.§ Network Resource Schedule

Several scheduling algorithms have been proposed to efficiently assign the time slots and channels of multihop networks in order to meet strict delay and reliability requirements.

Scheduling Algorithm: Some scheduling algorithms focus on meeting a common deadline for all the packets generated within a sampling period <cit.>. <cit.> formulates the delay minimization of the packet transmissions from the sensor nodes to the common access point. The optimization problem has been shown to be NP-hard. The proposed scheduling algorithms provide upper bounds on the packet delivery time by considering many-to-one transmission characteristics. The formulation and scheduling algorithms, however, do not take packet losses into account. <cit.> introduce novel procedures to provide reliability in the case of packet failures. <cit.> proposes an optimal schedule increment strategy based on the repetition of the most suitable slot until the common deadline. The objective of the optimization problem is to maximize end-to-end reliability while providing end-to-end transmission delay guarantees. The physical network nodes are reorganized into logical nodes for improved scheduling flexibility. Two scheduling algorithms are evaluated: dedicated scheduling and shared scheduling. In dedicated scheduling, the packets are only transmitted in their scheduled time slots, whereas in shared scheduling, the packets share scheduled time slots for better reliability. <cit.> proposes a faster scheduling algorithm for the same problem introduced in <cit.>. The algorithm is based on gradually growing a network model from one to multiple transmitted packets as a function of given link qualities to guarantee end-to-end reliability. These scheduling algorithms can be combined with multipath routing algorithms. The authors assume a Bernoulli distribution for the arrival success of the packets over each link.
Moreover, they do not consider the transmission power, rate, and packet length as variables, assigning exactly one time slot to each transmission.

The scheduling algorithms that consider the variation of the sampling periods and deadlines of the nodes over the network fall into one of two categories: fixed priority and dynamic priority. The end-to-end delay analysis of periodic real-time flows from sensors to actuators in a WirelessHART network under a fixed priority scheduling policy has been performed in <cit.>. The upper bound on the end-to-end delay of the periodic flows is obtained by mapping their scheduling to real-time multiprocessor scheduling and then exploiting the response time analysis of that scheduling. Both the channel contention and the transmission conflict delay due to higher priority flows are considered. Channel contention happens when all channels are assigned to higher priority flows in a transmission slot, whereas a transmission conflict occurs when there exists a common node with a transmission of a higher priority flow. This study has later been extended to reliable graph routing, handling transmission failures through retransmissions and route diversity, in <cit.>. Similarly, both worst-case and probabilistic delay bounds are derived by considering channel contention and transmission conflicts. These analyses consider multihop multichannel networks with fixed time slots, without incorporating any transmit power or rate adjustment mechanism.

The real-time dynamic priority scheduling of periodic deadline-constrained flows in a WirelessHART network has been shown to be NP-hard in <cit.>. Upon determining a necessary condition for schedulability, an optimal branch-and-bound scheduling algorithm is proposed, effectively discarding infeasible branches in the search space. Moreover, a faster heuristic conflict-aware least laxity first algorithm is developed by assigning priorities to the nodes based on the criticality of their transmissions. The conflict-aware laxity is defined as the laxity after discarding time slots that can be wasted while waiting to avoid transmission conflicts. The lower the conflict-aware laxity, the higher the transmission criticality. The algorithm does not provide any guarantee on timely packet delivery. <cit.> provides the end-to-end delay analysis of periodic real-time flows from sensors to actuators under the EDF policy. The delay is bounded by considering the channel contention and transmission conflict delays. EDF has been shown to outperform fixed priority scheduling in terms of real-time performance.

Robustness Enhancement: The predetermined nature of schedule-based transmissions allows the incorporation of various retransmission mechanisms in case of packet losses at random time instants. Although explicit scheduling is used to prevent various types of conflict and contention, transmission failures may still occur due to multipath fading and external interference in harsh and unstable environments. Some of the retransmission mechanisms have been introduced at the link layer <cit.>. Since the schedule is known a priori by the nodes in the network, the retransmissions can be minimized by exploiting the determinism in the packet headers to recover the unknown bytes of the header <cit.>. Moreover, various efficient retransmission procedures can be used to minimize the number of bits in the retransmissions <cit.>.
<cit.> uses symbol decoding confidence, whereas <cit.> uses received signal strength variations, to determine the parts of the packet received in error and, therefore, to be retransmitted. The retransmission mechanisms at the network layer aim to determine the best timing and quantity of shared and/or separate time slots given the link quality statistics <cit.>. <cit.> combines retransmissions with a real-time worst-case scheduling analysis. The number of possible retransmissions of a packet is limited considering the corresponding deadline and the already guaranteed delay bounds of other packets. <cit.> proposes a scheduling algorithm that provides delay guarantees for periodic real-time flows considering both link bursts and interference. A new metric called the maximum burst length is defined as the maximum length of an error burst, estimated by using empirical data. The algorithm then provides a reliability guarantee by allocating to each link one time slot plus its corresponding maximum burst length. A novel least-burst-route algorithm is used in conjunction with this scheduling algorithm to minimize the sum of the worst-case burst lengths over all links in the route. Similarly, <cit.> increases the spacing between the actual transmission and the first retransmission for maximum reliability, instead of allocating all the time slots in between. <cit.> improves the retransmission efficiency by using a limited number of shared slots through fast slot competition and segmented slot assignment. Shared resources are allocated for retransmissions owing to their unpredictability. Fast slot competition is introduced by embedding more than one clear channel assessment at the beginning of the shared slots to reduce the collision rate. On the other hand, segmented slot assignment provides retransmission chances for a routing hop before its following hop arrives.

§.§.§ Network Routing

There has been increasing interest in developing efficient multipath routing to improve the reliability and energy efficiency of wireless networks. Previous works on multipath routing can be classified into four categories based on the underlying key ideas of the routing metric and operation: disjoint path routing, graph routing, controlled flooding, and energy/QoS-aware routing.

Disjoint Path Routing: Most previous works focus on identifying multiple disjoint paths from source to destination to guarantee routing reliability against node or link failures, since disjoint paths may fail independently <cit.>. The disjoint paths are of two types: node-disjoint and link-disjoint. While node-disjoint paths do not have any relay node in common, link-disjoint paths do not have any common link but may have common nodes. <cit.> provides node-disjoint and braided multipath schemes for resilience against node failures. Ad-hoc On-demand Multipath Distance Vector (AOMDV) is a multipath extension of the well-studied single-path routing protocol Ad-hoc On-demand Distance Vector (AODV) <cit.>.

Graph Routing: The graph routing of ISA 100.11a and WirelessHART leads to significant improvements over a single path in terms of worst-case reliability due to the usage of multiple paths. Since the standards do not explicitly define the mechanism to build these multiple paths, it is possible to use existing disjoint-path algorithms. Multiple routing paths from each node to the destination are formed by generating subgraphs containing all the shortest paths for each source and destination pair <cit.>.
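A minimal sketch of how such a shortest-path routing subgraph can be generated from a connectivity graph is given below; the toy topology and the function name are illustrative assumptions, not part of the cited algorithms.

from collections import deque

def shortest_path_subgraph(adj, gateway):
    """Build a routing subgraph containing, for every node, all neighbors
    that lie on some shortest path toward the gateway (BFS distances)."""
    dist = {gateway: 0}
    q = deque([gateway])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    # Next-hop sets: neighbors exactly one hop closer to the gateway.
    return {u: [v for v in adj[u] if dist.get(v, float("inf")) == dist[u] - 1]
            for u in adj if u != gateway}

# Toy mesh: node -> neighbors (links are bidirectional).
adj = {"G": ["a", "b"], "a": ["G", "b", "c"],
       "b": ["G", "a", "c"], "c": ["a", "b"]}
print(shortest_path_subgraph(adj, "G"))
# {'a': ['G'], 'b': ['G'], 'c': ['a', 'b']}  -> node c has two parallel next hops

Nodes with more than one next hop, such as c in the toy topology, are exactly where graph routing gains its worst-case reliability advantage over a single path.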
Real-time link quality estimation is integrated into the generation of the subgraphs for better reliability in <cit.>. <cit.> proposes an algorithm to construct three types of reliable routing graphs, namely, an uplink graph, downlink graphs, and a broadcast graph, for different communication purposes. While the uplink graph connects all nodes upward to the gateway, a downlink graph of the gateway is used to send unicast messages to each node of the network. The broadcast graph connects the gateway to all nodes of the network for the transmission of operational control commands. Three algorithms are proposed to build these graphs based on the concept of (k, m)-reliability, where k and m are the minimum required numbers of incoming and outgoing edges of all nodes excluding the gateway, respectively. The communication schedule is constructed based on the traffic load requirements and the hop sequence of the routing paths. Recently, the graph routing problem has been formulated as an optimization problem where the objective is to maximize the network lifetime, namely, the time interval before the first node exhausts its battery, for a given connectivity graph and battery capacity of the nodes <cit.>. This optimization problem has been shown to be NP-hard. A suboptimal algorithm based on integer programming and a greedy heuristic algorithm have been proposed for the optimization problem. The proposed algorithm shows a significant improvement in the network lifetime while guaranteeing the high reliability of graph routing.

Controlled Flooding: The previous approaches of disjoint routing and graph routing focus on how to build the routing paths and distribute the traffic load over the network. Some control applications may impose more stringent requirements on the routing reliability in harsher and noisier environments. To address this reliability issue, a reliable Real-time Flooding-based Routing protocol (REALFLOW) has been proposed for industrial applications <cit.>. REALFLOW controls the flooding mechanism to further improve the multipath diversity while reducing the overhead. Each node transmits the received packet to the corresponding multiple routing paths instead of all feasible outgoing links. Furthermore, it discards duplicated and outdated packets to reduce the overhead. For both uplink and downlink transmissions, the same packets are forwarded according to the related node lists in all relay nodes. Due to the redundant paths and the flooding mechanism, REALFLOW is tolerant to network topology changes. Furthermore, since the related node lists are generated in a distributed manner, the workload of the gateway is greatly reduced. The flooding schedule is also extended by using the received signal strength in <cit.>.

Energy/QoS-aware Routing: Even though multipath routing schemes such as disjoint path routing, graph routing, and controlled flooding lead to significant reliability improvements, they also increase the energy consumption. Energy/QoS-aware routing jointly considers the application requirements and the energy consumption of the network <cit.>. Several energy-balanced routing strategies have been proposed to maximize the network lifetime while meeting the strict requirements of industrial applications. Breath is proposed to ensure desired packet delivery and delay probabilities while minimizing the energy consumption of the network <cit.>. The protocol is based on randomized routing, MAC, and duty-cycling, jointly optimized for energy efficiency.
The design approach relies on a constrained optimization problem, whereby the objective function is the energy consumption and the constraints are the packet reliability and delay. The optimal working point of the protocol is achieved by a simple algorithm, which adapts to traffic variations and channel conditions with negligible overhead. EARQ is another energy-aware routing protocol for reliable and real-time communication in industrial applications <cit.>. EARQ is a proactive routing protocol, which maintains a routing table updated through the exchange of beacon messages among neighboring nodes. A beacon message contains expected values of metrics such as the energy cost, the residual energy of a node, the reliability, and the end-to-end message delay. Once a node obtains a new path to the destination, it broadcasts a beacon message to its neighbors. When a node wants to send a packet to the destination, the next-hop selection is based on estimates of the energy consumption, reliability, and deadlines. If the packet chooses a path with low reliability, the source forwards a redundant packet via other paths.

<cit.> proposes a minimum transmission power cooperative routing algorithm, reducing the energy consumption of a single route while guaranteeing a certain throughput. However, the algorithm ignores the residual energy and communication load of neighboring nodes, which results in unbalanced energy consumption among the nodes. In addition, in <cit.>, a load-balanced routing algorithm is proposed where each node always chooses the next hop based on the communication load of neighboring nodes. However, the algorithm has a heavy computational complexity and the communication load is high. <cit.> propose two-hop information-based routing protocols, aiming at enhancing real-time performance with energy efficiency. The routing decision in <cit.> is based on the integration of the velocity information of two-hop neighbors with an energy-balancing mechanism, whereas the routing decision in <cit.> is based on the number of hops from source to destination and two-hop velocity information.

§.§ Joint Design Approach

In the joint design approach, the wireless network and control system parameters are jointly optimized considering the tradeoff between their performances. These parameters include the sampling period for time-triggered control and the level crossings for event-triggered control in the control system, and the transmission power and rate at the physical layer, the access parameters and algorithm of the MAC protocol, the duty-cycle parameters, and the routing paths in the communication system. The high complexity of the problem has led to different abstractions of the control and communication systems, many of which consider only a subset of these parameters.

§.§.§ Time-Triggered Sampling

The joint design approaches for time-triggered control are classified into three categories based on the communication layers: contention-based access, schedule-based access, and routing and traffic generation control.

Contention-based Access: The usage of contention-based protocols in the joint optimization of control and communication systems requires modeling the probabilistic distribution of delay and the packet loss probability in the wireless network and their effect on the control system <cit.>. A general framework for the optimization of the sampling period together with link layer parameters was first proposed in <cit.>.
The objective of the optimization problem is to maximize the control system performance given the delay distribution and the packet error probability constraints. The linear quadratic cost function is used as the control performance measure. Simplified models of packet loss and delay are assumed for the contention-based random access mechanism, without considering spatial reuse. The solution strategy is based on an iterative numerical method due to the complexity of the control cost used as the objective function of the optimization problem. <cit.> aims to minimize the mean-square error of the state estimation subject to the delay and packet loss probability induced by the contention-based random access. The mean-square error of the estimator is derived as a function of the sampling period and the delay distribution under a Bernoulli random process of packet losses. <cit.> discusses several fundamental tradeoffs of WNCS over IEEE 802.15.4 networks. Fig. <ref> shows the quadratic control cost and the communication throughput over different sampling periods. In the figure, J^i_∞ and J^r_∞ refer to the control cost bound using an ideal network (no packet loss and no delay) and a realistic lossy network of IEEE 802.15.4, respectively. Due to the absence of packet delays and losses, the control cost using an ideal network increases monotonically as the sampling period increases. However, when using a realistic network, a shorter sampling period does not minimize the control cost, because of the higher packet loss probability and delay when the traffic load is high. In addition, the two curves of the control cost J^i_∞ and J^r_∞ coincide for longer sampling periods, meaning that when the sampling period is larger, the sampling period is the dominant factor in the control cost compared to the packet loss probability and delay.

In Fig. <ref>, if we consider a desired maximum control cost J_req greater than the minimum value of the control cost, then we have a feasible range of sampling periods between 𝒮 and ℒ. However, the performance of the wireless network is still heavily affected by the operating point of the sampling period. Let us consider the two feasible sampling periods 𝒮 and ℒ. By choosing ℒ, the throughput of the network is stabilized (cf. <cit.>), and the control cost is also stable with respect to small perturbations of the network operation. Furthermore, the longer sampling period ℒ leads to a lower network energy consumption than the shorter sampling period 𝒮. Based on these observations, an adaptation of the WNCS is proposed by considering a constrained optimization problem. The objective is to minimize the total energy consumption of the network subject to a desired control cost. The variables of the problem include both the sampling period and the MAC parameters of IEEE 802.15.4. The network manager predicts the energy consumption corresponding to each feasible network requirement. The optimal network requirements are then obtained by minimizing the energy consumption of the network over the feasible set of network requirements.

<cit.> proposes an interesting approach to the design of WNCS by decomposing the overall concerns into two design spaces. In the control layer, the passive control structure of <cit.> is used to guarantee the stability of NCSs. The overall NCS performance is then optimized by adjusting the retransmission limits of the IEEE 802.11 standard. At the control layer, the authors leverage their passivity-based architecture to handle the message delay and message loss.
The authors consider a passive controller which produces a trajectory for the plant to track and define the control performance as the absolute tracking error. Through extensive simulation results, a convex relationship between the retransmission limit of IEEE 802.11 and the control performance is shown. Based on this observation, a MAC parameter controller is introduced to dynamically adjust the retransmission limit so as to track the optimal tradeoff between packet losses and delays and thus optimize the overall control system performance. Simulation results show that the MAC adaptation can converge to a proper retransmission limit which optimizes the performance of the control system. Even though the proposed approach is interesting, the fundamental tradeoff relationships between communication parameters and control performance are not trivial to derive in practice.

<cit.> presents an MPC-based NCS and its implementation over wireless relay networks of IEEE 802.11 and a cooperative MAC protocol <cit.>. The proposed approach deals with the problem from the control perspective. It employs an MPC, an actuator state, and an adaptive IEEE 802.11 MAC to reduce unbounded packet delays and improve the tolerance against packet loss. Furthermore, the cooperative MAC protocol <cit.> is used to improve the control performance by enabling reliable and timely data transmission under harsh wireless channel conditions.

Schedule-based Access: A novel framework for the communication–control joint optimization is proposed, encompassing an efficient abstraction of the control system in the form of stochastic MATI and MAD constraints <cit.>. Recall that MATI and MAD are defined as the maximum allowed time interval between subsequent state vector reports and the maximum allowed packet delay for the transmission, respectively, as discussed in Section <ref>. Since such hard real-time guarantees cannot be satisfied by a wireless network with a non-zero packet loss probability, a stochastic MATI is introduced with the goal of keeping the time interval between subsequent state vector reports below the MATI value with a predefined probability to guarantee the stability of the control systems. Further, a novel schedulability constraint, in the form of an adaptive upper bound on the sum of the utilizations of the nodes, defined as the ratio of their delay to their sampling periods, is included to guarantee the schedulability of the transmissions under variable transmission rate and sampling period values. The objective of the optimization is to minimize the total energy consumption of the network while guaranteeing the MATI and MAD requirements of the control system and the maximum transmit power and schedulability constraints of the wireless communication system. The solution for the specific case of M-ary quadrature amplitude modulation and EDF scheduling is based on the reduction of the resulting mixed-integer programming problem to an integer programming problem based on the analysis of the optimality conditions, and on the relaxation of this reduced problem <cit.>. The formulation is also extended to any non-decreasing function of the power consumption of the nodes as the objective, any modulation scheme, and any scheduling algorithm in <cit.>. First, an exact solution method based on the analysis of the optimality conditions and smart enumeration techniques is introduced. Then, two polynomial-time heuristic algorithms adopting intelligent search space reduction and smart searching techniques are proposed.
The energy savings have been demonstrated to reach up to 70% for a network containing up to 40 nodes. <cit.> studies a utility maximization problem subject to the wireless network capacity and the delay requirement of the control system. The utility function is defined as the ratio of the root-mean-square of the discrete-time system to that of its continuous-time counterpart. This utility function has been demonstrated to be a strictly concave function of the sampling period and inversely proportional to the tracking error induced by discretization, based on the assumption that the plants follow the reference trajectories provided by the controllers. The wireless network capacity is derived by adopting slotted-time transmission over a conflict graph, where each vertex represents a wireless link and there is an edge between two vertices if their corresponding links interfere with each other. The sampling period is used as the multihop end-to-end delay bound. The solution methodology is based on an embedded-loop approach. In the inner loop, a relaxed problem with a fixed delay bound, independent of the sampling period, is solved via dual decomposition. The outer loop then determines the optimal delay bounds based on the sampling period given as an output of the inner loop.

<cit.> proposes a mathematical framework for modeling and analyzing multihop NCSs. The authors present the formal syntax and semantics for the dynamics of the composed system, providing an explicit translation of multihop control networks to switched systems. The proposed method jointly considers the control system, network topology, routing, resource scheduling, and communication errors. The formal models are applied to analyze the robustness of NCSs, where data packets are exchanged through a multihop communication network subject to disruptions. The authors consider two communication models, namely, a permanent error model and a transient error model, depending on the length of the communication disruptions. The authors address the robustness of the multihop NCS in the non-deterministic case by a worst-case analysis of scheduling, routing, and packet losses, and in the stochastic case by a stability analysis of the node fault probability and packet loss probability.

The joint optimization of the sampling period of the sensors, the packet forwarding policy, and the control law for computing the actuator command is addressed in <cit.> for a multihop WirelessHART network. The objective of the optimization problem is to minimize the closed-loop control cost subject to the energy and delay constraints of the nodes. The linear quadratic cost function is used as the control cost, similar to <cit.>. The solution methodology is based on the separation of the joint design problem for a fixed sampling rate: transmission scheduling for maximizing the deadline-constrained reliability subject to a total energy budget, and optimal control under packet loss. The optimal solution for transmission scheduling is based on dynamic programming, which allows nodes to find their optimal forwarding policy based on the statistics of their outgoing links in a distributed fashion. Bounds on the continuous-time control loss function are derived for an optimal time-varying Kalman filter estimator and a static linear feedback control law. The joint optimal solution is then found by a one-dimensional search over the sampling period.

Some recent WNCS research investigates fault detection and fault tolerance issues <cit.>.
<cit.> develops a design framework for fault-tolerant NCSs for industrial automation applications. The framework relies on an integrated design and parametrization of the TDMA MAC protocols, the controller, and the fault diagnosis algorithms in a multilayer system. The main objective is to determine the data transmissions of the wireless network and reduce the traffic load while meeting the requirements of the control and the fault detection and identification performance. For distributed control groups, a hierarchical WNCS configuration is considered. While the lower layer tightly integrates the sensors, actuators, and microprocessors of the (local) feedback control loops and their TDMA resources, the higher layer implements fault-tolerant control in the context of resource management. The TDMA MAC protocol is modeled as a scheduler, whose design and parameterization are achieved together with the development of the control and the fault detection and identification algorithms at the different functional layers.

In a similar way, <cit.> investigates the fault estimation problem based on a deterministic model of the TDMA mechanism. The discrete periodic model of the control systems is integrated with a periodic information scheduling model without packet collisions. By exploiting the linearity of the state equations, a fault estimator is proposed for the periodic system model with arbitrary sensor inputs. The fault estimate is obtained by solving a deterministic quadratic minimization problem of the control systems by means of a recursive calculation. However, the scheduler of the wireless network does not consider any realistic message delays and losses.

Routing and Traffic Generation Control: In <cit.>, the cross-layer optimized control (CLOC) protocol is proposed for minimizing the worst-case performance loss of multiple control systems. CLOC is designed for a general wireless sensor and actuator network where both the sensor–controller and controller–actuator connections are over a multihop mesh network. The design approach relies on a constrained max-min optimization problem, where the objective is to maximize the minimum resource redundancy of the network and the constraints are the stability of the closed-loop control systems and the schedulability of the communication resources. The stability condition of the control system is formulated in the form of a stochastic MATI constraint <cit.>. The optimal operation point of the protocol is automatically set in terms of the sampling period, slot scheduling, and routing, and is achieved by solving a linear programming problem, which adapts to the system requirements and link conditions. The performance analysis shows that CLOC ensures control stability and fulfills the communication constraints while maximizing the worst-case system performance.

<cit.> presents a case study on a wireless process control system that integrates the control design and the wireless routing of the WirelessHART standard. The network supports two routing strategies, namely, single-path source routing and multipath graph routing. Recall that the graph routing of the WirelessHART standard reduces packet loss through path diversity at the cost of additional overhead and energy consumption. To mitigate the effect of packet loss in the WNCS, the control design integrates an observer based on an extended Kalman filter with an MPC and an actuator buffer of recent control inputs.
The experimental results show that sensing and actuation can have different levels of robustness to packet loss under this design approach. Specifically, while the plant state observer is highly effective in mitigating the effects of packet loss from the sensors to the controller, the control performance is more sensitive to packet loss from the controller to the actuators despite the buffered control inputs. Based on this observation, the paper proposes an asymmetric routing configuration for sensing and actuation (source routing for sensing and graph routing for actuation) to improve the control performance. <cit.> addresses the sampling period optimization with the goal of minimizing the overall control cost while ensuring end-to-end delay constraints for a multihop WirelessHART network. The linear quadratic cost function is used as the control performance measure, which is a function of the sampling period. The optimization problem relies on the multihop formulation of the end-to-end delay bound in <cit.>. Due to the difficulty of the resulting optimization problem, solution methodologies based on a subgradient method, a simulated annealing-based penalty method, a greedy heuristic method, and an approximate convex optimization method are proposed. The tradeoff between the execution time and the achieved control cost is analyzed for these methods.

§.§.§ Event-Triggered Sampling

The communication system design for event-triggered sampling has mostly focused on the MAC layer. In particular, most research focuses on contention-based random access, since it is suitable for these control systems due to the unpredictability of the message generation times.

Contention-based Access: The tradeoff between the level threshold crossings in the control system and the packet losses in the communication system has been analyzed in <cit.>. <cit.> studies event-triggered control under lossy communication. The information is generated and sent at the level crossings of the plant output. The packet losses are assumed to follow a Bernoulli distribution, independent over each link. The dependence of the stochastic control criterion on the level crossings and the message loss probability is derived for a class of integrator plants (see the sketch below). This allows the generation of a design guideline on the assignment of the levels for the optimal usage of communication resources.

<cit.> provides an extension of <cit.> by considering a multi-dimensional Markov chain model of the attempted and successful transmissions over a lossy channel. In particular, a threshold-based event-triggering algorithm is used to transmit the control command from the controller to the actuator. By combining the communication model of the retransmissions with an analytical model of the closed-loop performance, a theoretical framework is proposed to analyze the tradeoff between the communication cost and the control performance, and it is used to adapt the event threshold. However, the proposed Markov chain only considers the packet loss as a Bernoulli process and does not capture the contention between multiple nodes. On the other hand, schedule-based access, in which the nodes are assigned fixed time slots independent of their message generation times, has been considered as an alternative to random access for event-triggered control <cit.>. However, this introduces an extra delay between the triggering of an event and the transmission in its assigned slot.
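Returning to the level-crossing/loss tradeoff above, the following Monte Carlo sketch estimates, for a scalar integrator plant, how the event threshold and the Bernoulli loss probability jointly shape the transmission rate and the state error; the noise model and all numerical values are illustrative assumptions, not taken from the cited analysis.

import random

def integrator_event_tradeoff(delta, loss_p, steps=100_000, sigma=1.0, seed=0):
    """Monte Carlo sketch of the threshold/loss tradeoff for a scalar
    integrator: the sensor transmits the state when |error| exceeds delta,
    and each transmission is lost independently with probability loss_p."""
    rng = random.Random(seed)
    err, tx, var_acc = 0.0, 0, 0.0
    for _ in range(steps):
        err += rng.gauss(0.0, sigma)       # integrator driven by noise
        if abs(err) > delta:               # level crossing -> transmit
            tx += 1
            if rng.random() > loss_p:      # delivered: error is reset
                err = 0.0
        var_acc += err * err
    return tx / steps, var_acc / steps     # (transmission rate, mean sq. error)

for delta in [1.0, 2.0, 4.0]:
    rate, mse = integrator_event_tradeoff(delta, loss_p=0.2)
    print(f"delta={delta}: rate={rate:.3f}, mse={mse:.2f}")

Sweeping the threshold makes the tradeoff explicit: larger thresholds save transmissions at the cost of a larger state error, and higher loss probabilities shift the whole curve.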
<cit.> analyzes an event-based NCS consisting of multiple linear time-invariant control systems over a multichannel slotted ALOHA protocol. The multichannel slotted ALOHA system is considered as the random access model of Long Term Evolution <cit.>. The authors separate the resource allocation problem of the multichannel slotted ALOHA system into two problems, namely, the transmission attempt problem and the channel selection problem. Given a time slot, each control loop decides locally whether to attempt a transmission based on some error thresholds. A local threshold-based algorithm is used to adapt the error thresholds based on the knowledge of the network resources. When a control loop decides to transmit, it selects one of the available channels uniformly at random.

Given the plant and controller dynamics, <cit.> proposes control-aware random access policies to address the coupling between control loops over the shared wireless channel. In particular, the authors derive a sufficient mathematical condition on the random access policy of each sensor so that it does not violate the stability criteria of the other control loops. The authors assume packet losses only due to the interference between simultaneous transmissions in the network. They propose a mathematical condition decoupling the control loops. Based on this condition, a control-aware random access policy is proposed that adapts to the physical plant states measured online by the sensors. However, it is still computationally challenging to verify the condition.

Some event-triggered sampling approaches <cit.> use the CSMA protocol to share the network resources. <cit.> analyzes the performance of event-based NCSs with the CSMA protocol for accessing the shared network. The authors present a Markov model that captures the joint interactions of the event-triggering policy and the contention resolution mechanism of CSMA. The proposed Markov model basically extends Bianchi's analysis of IEEE 802.11 <cit.> by decoupling the interactions between the multiple event-based systems of the network.

<cit.> investigates the event-triggered data scheduling of multiple loop control systems communicating over a shared lossy network. The proposed error-dependent scheduling scheme combines deterministic and probabilistic approaches. The scheduling policy deterministically blocks transmission requests with lower errors, i.e., those not exceeding predefined thresholds. Subsequently, medium access is granted to the remaining transmission requests in a probabilistic manner. The message error is modeled as a homogeneous Markov chain. Analytical uniform performance bounds for the error variance are derived under the proposed scheduling policy. Numerical results show a performance improvement in terms of the error level with respect to periodic and random scheduling policies.

<cit.> proposes a distributed adaptation algorithm for an event-triggered control system, where each system adjusts its communication parameters and control gain to meet a global control cost. The discrete-time stochastic linear systems are coupled by the CSMA model, which allows only a limited number of feedback loops to be closed at every time instant. The backoff intervals of CSMA are assumed to be exponentially distributed with homogeneous backoff exponents. Furthermore, the data packets are discarded after a limited number of retransmission trials. The individual cost function is defined as the linear quadratic cost function.
The design objective is to find the optimal control laws and the optimal event-triggering thresholds that minimize the control cost. The design problem is formulated as an average cost Markov Decision Process (MDP) problem with unknown global system parameters that are to be estimated during execution. Techniques from distributed optimization and adaptive MDPs are used to develop distributed self-regulating event-triggers that adapt their request rate to accommodate a global resource constraint. In particular, the dual price mechanism forces each system to adjust its event-triggering threshold according to the total transmission rate.
Self-triggered Control and Mixed Approach: Self-triggered sampling saves energy and reduces contention delay by predicting future level crossings and thus explicitly scheduling the corresponding transmissions <cit.>. The sensor nodes are set to sleep mode until the predicted level crossing. <cit.> proposes a new approach to ensure the stability of the controlled processes over a shared IEEE 802.15.4 network by self-triggered control. The self-triggered sampler selects the next sampling time as a function of current and previous measurements, the measurement time delay, and the estimated disturbance. The superframe duration and transmission scheduling in the contention-free period of IEEE 802.15.4 are adapted to minimize the energy consumption while meeting the deadlines. The joint selection of the sampling times of the processes, the protocol parameters, and the scheduling makes it possible to address the tradeoff between closed-loop system performance and network energy consumption. However, the drawback of this sampling methodology is its lack of robustness to uncertainties and disturbances due to the predetermined control and communication models. The explicit scheduling for self-triggered sampling has therefore recently been extended to include additional time slots in the communication schedule that are not assigned a priori to any nodes <cit.>. In the presence of disturbances, these extra slots are used in an event-triggered fashion. Contention-based random access is used in these slots due to the unpredictability of the transmissions. In <cit.>, a joint optimization framework is presented, where the objective is a function of the process state, the cost of the actuations, and the energy consumption to transmit control commands, subject to communication constraints, limited capabilities of the actuators, and control requirements. While self-triggered control is adopted, with the controller dynamically determining the next task execution time of the actuator, including command broadcasting and changing of action, the sensors are assumed to perform sampling periodically. A simulated annealing based algorithm is used for online optimization, which optimizes the sampling intervals. In addition, the authors propose a mechanism for estimating and predicting the system states, which may not be known exactly due to packet losses and measurement noise. <cit.> proposes a joint design approach of control and adaptive sampling for multiple control loops. The proposed method computes the optimal control signal to be applied as well as the optimal time to wait before taking the next sample. The basic idea is to combine the concept of self-triggered sampling with MPC, where the cost function penalizes the plant state and control effort as well as the time interval until the next sample is taken.
The latter is considered to generate an adaptive sampling scheme for the overall system such that the sampling time increases as the system state error goes to zero. In the multiple loop case, the authors also present a transmission scheduling algorithm to avoid conflicts. <cit.> proposes a mixed self-triggered and event-triggered sampling scheme to ensure the control stability of NCSs, while improving the energy efficiency of IEEE 802.15.4 wireless networks. The basic idea of the mixed approach is to combine the self-triggered and the event-triggered sampling schemes. The self-triggered sampling scheme first predicts the next activation time of the event-triggered sampler when the controller receives the sensing information. The event-triggered sampler then begins to monitor the predefined triggering condition and computes the next sampling instance. Compared to typical event-triggered sampling, the sensor does not continuously check the event-triggered condition, since the self-triggered sampling component of the proposed mixed scheme estimates the next sampling instant a priori. Furthermore, compared with using self-triggered sampling alone, the conservativeness is reduced, since the event-triggered sampling component extends the sampling interval. By coupling the self-triggered and event-triggered sampling in a unified framework, the proposed scheme extends the inactive period of the wireless network and reduces the conservativeness induced by the self-triggered sampling, guaranteeing high energy efficiency while preserving the desired control performance.
§ EXPERIMENTAL TESTBEDS
In contrast to previous surveys of WSN testbeds <cit.>, we introduce some of our representative WNCS testbeds. Existing WNCS research often relies on small-scale experiments. However, such experiments usually suffer from limited size and cannot capture the delays and losses of realistic large wireless networks. Several simulation tools <cit.> have been developed to support NCS research. Unfortunately, simulation tools for control systems often lack realistic models of wireless networks, which exhibit complex and stochastic behavior in real-world environments. In this section, we describe three WNCS testbeds, namely, a cyber-physical simulator and WSN testbed, a building automation testbed, and an industrial process testbed.
§.§ Cyber-Physical Simulator and WSN Testbed
The wireless cyber-physical simulator (WCPS) <cit.> is designed to provide a realistic simulation of WNCS. WCPS employs a federated architecture that integrates Simulink for simulating the physical system dynamics and controllers, and TOSSIM <cit.> for simulating wireless networks. Simulink is commonly used by control engineers to design and study control systems, while TOSSIM has been widely used in the sensor network community to simulate WSNs based on realistic wireless link models <cit.>. WCPS provides an open-source middleware to orchestrate simulations in Simulink and in TOSSIM. Following the software architecture of WCPS, the sensor data generated by Simulink is fed into the WSN simulated using TOSSIM. TOSSIM then returns the packet delays and losses according to the behavior of the network, which are then fed to the controller in Simulink. Controller commands are then fed again into TOSSIM, which delays or drops the packets and sends the outputs to the actuators. Furthermore, it is also possible to use the experimental wireless traces of a WSN testbed as inputs to the TOSSIM simulator.
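The data flow of such a federated co-simulation can be summarized in a few lines. The Python sketch below is purely illustrative: the class and function names are our own stand-ins rather than the actual WCPS, Simulink, or TOSSIM interfaces, and the network simulator is replaced by a Bernoulli-loss stub with no delay model.

    import random

    class LossyLink:
        # Stub standing in for the network simulator: each message is
        # dropped independently with probability p_loss (no delay modeled).
        def __init__(self, p_loss):
            self.p_loss = p_loss
        def deliver(self, msg):
            return msg if random.random() > self.p_loss else None

    def cosim_step(x, u_hold, uplink, downlink):
        pkt = uplink.deliver(x)                        # sensor -> controller over the WSN
        u = -1.0 * pkt if pkt is not None else u_hold  # state feedback, or hold on loss
        cmd = downlink.deliver(u)                      # controller -> actuator over the WSN
        u_applied = cmd if cmd is not None else u_hold # actuator holds last input on loss
        return x + 0.1 * (0.5 * x + u_applied), u_applied  # slightly unstable plant step

    x, u = 1.0, 0.0
    uplink, downlink = LossyLink(0.1), LossyLink(0.1)
    for _ in range(50):
        x, u = cosim_step(x, u, uplink, downlink)
    print(f"state after 50 steps: {x:.3f}")

The essential point mirrored here is that both the sensing and the actuation paths traverse the simulated network, so losses on either path degrade the closed loop differently.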
The Cyber-Physical Laboratory of Washington University in St. Louis has developed an experimental WSN testbed to study and evaluate WSN protocols <cit.>. The system comprises a network manager on a server and a network protocol stack implemented on TinyOS and TelosB nodes <cit.>. Each node is equipped with a TI MSP430 microcontroller and a TI CC2420 radio compatible with the IEEE 802.15.4 standard. Fig. <ref> shows the deployment of the nodes in the campus building. The testbed consists of 79 nodes placed throughout several office areas. The testbed architecture is hierarchical in nature, consisting of three different levels of deployment: sensor nodes, microservers, and a desktop-class host/server machine. At the lowest tier, sensor nodes are placed throughout the physical environment in order to take sensor readings and/or perform actuation. They are connected to microservers at the second tier through a USB infrastructure consisting of USB 2.0 compliant hubs. Messages can be exchanged between sensor nodes and microservers over this interface in both directions. In the testbed, two nodes are connected to each microserver, typically with one microserver per room. The final tier includes a dedicated server that connects to all of the microservers over an Ethernet backbone. The server machine is used to host, among other things, a database containing information about the different sensor nodes and the microservers they are connected to.
§.§ Building Automation Testbed
Heating, Ventilation and Air Conditioning (HVAC) systems guarantee indoor air quality and thermal comfort levels in buildings, at the price of high energy consumption <cit.>. To reduce the energy required by HVAC systems, researchers have been trying to use the thermal storage capacity of buildings efficiently, proposing advanced estimation and control schemes based on wireless sensor nodes. An example HVAC testbed currently comprises the second floor of the electrical engineering building on the KTH campus and is depicted in Fig. <ref>. This floor houses four laboratories, an office room, a lecture hall, one storage room and a boiler room. Each room of the testbed is considered to be a thermal zone and has a set of wireless sensors and actuators that can be individually controlled. The WSN testbed is implemented on TinyOS and TelosB nodes <cit.>. The testbed consists of 12 wireless sensors measuring indoor and outdoor temperature, humidity, CO2 concentrations, light intensity, occupancy levels, and events like door/window openings/closings in several rooms. Note that the nodes are equipped with on-board humidity, temperature, and light sensors, and with external sensors such as CO2 sensors connected via an analog-to-digital converter channel on the 16-pin TelosB expansion area. Furthermore, laboratory A225 includes a people counter to measure the occupancy of the laboratory. The collection tree protocol is used to collect the sensor measurements through the multihop network <cit.>. The actuators are the flow valve of the heating radiator, the flow valve for the air conditioning system, the air vent for fresh air flow at constant temperature, and the air vent for air exhaust to the corridor. An overview of the testbed architecture is shown in Fig. <ref>. The HVAC testbed is developed in LabVIEW and comprises two separate components: the experimental application and a database/web server system <cit.>. The database is responsible for logging the data from all HVAC components in real-time.
On the other hand, the experimental application is developed by each user and interacts with the data-logging and supervisory control module in the testbed server, which connects to the programmable logic controller. This component allows for real-time sensing, computation, and actuation. Even though the application is developed in LabVIEW, MATLAB code is integrated into the application through a MathScript zone.
§.§ Industrial Process Testbed
The control of liquid levels in tanks and of the flows between tanks is a basic problem in the process industry <cit.>. Liquids need to be processed by chemical or mixed treatment in tanks, while the levels of the tanks must be controlled and the flows between tanks must be regulated. Fig. <ref> depicts the experimental apparatus and a diagram of the physical system used in <cit.>. The coupled tank system consists of a pump, a water basin and two tanks of uniform cross sections <cit.>. The system is a simple, yet representative, testbed for the dynamics of water tanks used in practice. The water in the lower tank flows to the water basin. A pump is responsible for pumping water from the basin to the upper tank, which then flows to the lower tank. The holes in each of the tanks have the same diameter. The controller regulates the level of water in the upper or lower tank. The sensing of the water levels is performed by pressure sensors placed under each tank. The process control testbed is built on multiple control systems of Quanser coupled tanks <cit.> with a wireless network consisting of TelosB nodes. The control loops regulate two coupled tank processes, where the tanks are collocated with the sensors and actuators and communicate wirelessly with a controller node. A wireless node interfaces the sensors with an analog-to-digital converter, in order to sample the sensors for both tanks. The actuation is implemented through the digital-to-analog converter of the wireless actuator node, connected to an amplification circuit that converts the output voltage for the pump motor.
§ OPEN CHALLENGES AND FUTURE RESEARCH DIRECTIONS
Although a large number of results on WSNs and NCSs have been reported in the literature, there are still a number of challenging open problems; some of them are presented as follows.
§.§ Tradeoff of Joint Design
The joint design of the communication and control layers is essential to guarantee the robustness, fault-tolerance, and resilience of the overall WNCS. Several different approaches to WNCS design are categorized depending on the degree of interaction. Increasing the interaction may improve the control performance, but at the risk of a highly complex design problem, eventually leading to fundamental scalability and tractability issues. Hence, it is critical to quantify the benefit in control performance and the cost in complexity of the different design approaches. The benefit of adapting the design parameters depends significantly on the dynamics of the control systems. Most research on control and communication focuses on the design of the controller or the network protocol via optimization problems with a fixed sampling period. Some NCS studies propose possible alternatives to set the sampling periods based on stability analysis <cit.>. However, they do not consider the fundamental tradeoff between the QoS and the sampling period of wireless networks.
While an adaptive sampling period might improve control performance, it results in a complex stability problem for the control systems and requires real-time adaptation of the wireless network. Real-time adaptation of the sampling period might be needed for fast dynamical systems. On the other hand, it may just increase the complexity and implementation overhead for slow control systems. Hence, it is critical to quantify the benefit and cost of the joint design approach for control and communication systems.
§.§ Control System Requirement
Various technical approaches such as hybrid systems, Markov jump linear systems, and time-delay systems are used to analyze the stability of NCSs under different network assumptions. Wireless network designers must carefully consider the detailed assumptions of the NCS before using these results in wireless network design. Similarly, control system designers need to consider wireless network imperfections encompassing both message dropout and message delay in their framework. While some assumptions of control system design affect the protocol operation, other assumptions may be infeasible for the overall network to meet. For instance, the protocol operation should consider the hard/soft sampling period to check whether it is allowed to retransmit outdated messages beyond the sampling period. On the other hand, if the NCS design requires a strict bound on the maximum allowable number of consecutive packet losses, this cannot be achieved by the wireless system, in which the packet error probability is non-zero at all times. Numerical methods are mostly used to derive feasible sets of wireless network requirements in terms of message loss probability and delay to achieve a certain control system performance. Even though all these feasible requirements meet the control cost, they may give significantly different network costs, such as energy consumption and robustness, and thus eventually affect the overall control systems. There are two ways to solve these problems. The first is to provide efficient tools quantifying the feasible sets and corresponding network costs. Previous WNCS research still lacks a comparison of different network requirements and their effect on the network design and cost. The second is to provide efficient abstractions of both control and communication systems, enabling the use of non-numerical methods. For instance, the use of stochastic MATI and MAD constraints for the control system in <cit.> enables the generation of efficient solution methodologies for the joint optimization of these systems.
§.§ Communication System Abstraction
Efficient abstractions of communication systems need to be included to achieve the benefit of joint design while reducing complexity for WNCS. Both interactive and joint design approaches mostly assume constant transmit power and rate at the physical layer to simplify the problem. However, variable transmit power and rate are already supported by network devices. The integration of variable time slots with variable transmit power and rate has been demonstrated to improve the communication energy consumption significantly <cit.>. This work should be extended to integrate power and rate variability into WNCS design approaches. The Bernoulli distribution has been commonly used as a packet loss model to analyze control stability for simplicity. However, most wireless links are highly correlated over time and space in practice <cit.>.
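The practical difference shows up in the consecutive-loss statistics. As a small illustration (with arbitrary parameters, not values taken from the cited measurements), the Python sketch below compares a memoryless Bernoulli loss process with a two-state Gilbert-Elliott model of roughly the same average loss rate, a standard way to capture temporal correlation:

    import random

    def bernoulli_losses(n, p):
        # Memoryless reference model: each packet dropped with probability p.
        return [random.random() < p for _ in range(n)]

    def gilbert_elliott_losses(n, p_gb=0.05, p_bg=0.4, p_loss_bad=0.8):
        # Two-state Markov (Gilbert-Elliott) channel: losses cluster while
        # the channel dwells in the "bad" state, producing bursty losses.
        bad, out = False, []
        for _ in range(n):
            if not bad:
                bad = random.random() < p_gb     # good -> bad transition
            else:
                bad = random.random() >= p_bg    # stay bad unless it recovers
            out.append(bad and random.random() < p_loss_bad)
        return out

    def longest_burst(losses):
        best = run = 0
        for lost in losses:
            run = run + 1 if lost else 0
            best = max(best, run)
        return best

    random.seed(1)
    print(longest_burst(bernoulli_losses(10000, 0.09)))   # short bursts
    print(longest_burst(gilbert_elliott_losses(10000)))   # markedly longer bursts

Although both processes drop roughly 9% of the packets on average, the correlated model produces much longer loss bursts, which is exactly what matters for control stability.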
The time dependence of the packet loss distribution can significantly affect the control system performance due to the effect of consecutive packet losses. These packet loss dependencies should be efficiently integrated into the interactive and joint design approaches.
§.§ Network Lifetime
Safety-critical control systems, such as those in oil refining, chemical processing, power plants, and avionics, must operate the process continuously without interruption. Continuous operation permits only infrequent maintenance shut-downs, e.g. semi-annual or annual, since the effects of downtime losses may range from production inefficiency and equipment destruction to irreparable financial and environmental damage. On the other hand, energy constraints are widely regarded as a fundamental limitation of wireless devices. The limited lifetime due to the battery constraint is particularly challenging for WNCS, because the sensors/actuators are attached to the main physical process or equipment. In fact, battery replacement may require maintenance shut-downs, since it may not be possible to replace batteries while the control process is operating. Recently, two major technologies, energy harvesting and wireless power transfer, have emerged as promising approaches to address the lifetime bottlenecks of wireless networks. Some of these solutions are also commercially available and deployed, such as ABB WISA <cit.>, based on wireless power transfer for industrial automation, and EnOcean <cit.>, based on energy harvesting for building automation. A WNCS using these energy-efficient technologies encounters new challenges at all layers of the network design as well as in the overall joint design approach. In particular, the joint design approach must balance the control cost and the network lifetime while considering the additional constraint on the arrival of harvested energy. The timing and amount of harvested energy may be random when the energy is generated from natural sources such as solar or vibration, or controlled in the case of RF, inductive, and magnetic resonant coupling.
§.§ Ultra-Reliable Ultra-Low Latency Communication
Recently, machine-type communication with ultra-reliable and ultra-low latency requirements has attracted much interest in the research community due to many control-related applications in industrial automation, autonomous driving, healthcare, and virtual and augmented reality <cit.>. In particular, the Tactile Internet requires extremely low latency in combination with high availability, reliability and security of the network to deliver real-time control and physical sensing information remotely <cit.>. Diversity techniques, which were previously proposed to maximize the total data rate of the users, are now being adapted to achieve reliability corresponding to a packet error probability on the order of 10^-9 within a latency down to a millisecond or less. The ultra-low latency requirement may prohibit the sole use of time diversity in the form of automatic repeat request (ARQ), where the transmitter resends the packet in the case of packet losses, or hybrid ARQ, where the transmitter sends incremental redundancy rather than the whole packet, assuming the processing of all the information available at the receiver. Therefore, <cit.> have investigated the use of space diversity in the form of multiple antennas at the transmitter and receiver, and transmission from multiple base stations to the user over one-hop cellular networks.
These schemes, however, mostly focus on the reliability of a single user <cit.>, multiple users in a multi-cell interference scenario <cit.>, or multiple users meeting a single deadline for all nodes <cit.>. <cit.> extended these works to consider the separate packet generation times and individual packet transmission deadlines of multiple users in high-reliability communication. Previous work on WNCS only investigated time and path diversity to achieve the very high reliability and very low latency communication requirements of the corresponding applications, as explained in detail above. The time diversity mechanisms either adopt efficient retransmission mechanisms to minimize the number of bits in the retransmissions at the link layer or determine the best timing and quantity of time slots given the link quality statistics. On the other hand, path diversity is based on the identification of multiple disjoint paths from source to destination to guarantee routing reliability against node and link failures. The extension of these techniques to include other diversity mechanisms, such as space and frequency diversity in the context of ultra-reliable ultra-low latency communication, requires a reformulation of the joint design balancing control cost and network lifetime and addressing new challenges at all layers of the network design.
§.§ Low-Power Wide-Area Networks
One of the major issues for large-scale Smart Grid <cit.>, Smart Transportation <cit.>, and Industry 4.0 <cit.> deployments is to allow long-range communication of sensors and actuators at very low power levels. Recently, several LPWAN protocols such as LoRa <cit.>, NB-IoT <cit.>, Sigfox <cit.>, and LTE-M <cit.> have been proposed to provide low data rate communication for battery-operated devices. LTE-M and NB-IoT use a licensed spectrum supported by 3rd Generation Partnership Project standardization. On the other hand, LoRa and Sigfox rely on an unlicensed spectrum. The wireless channel behavior of LPWANs is significantly different from the behavior of the short-range wireless channels commonly used in WNCS standards, such as WirelessHART, Bluetooth, and Z-Wave, due to different multi-path fading characteristics and spectrum usage. Thus, the design of the physical and link layers is completely different. Moreover, the protocol design needs to consider the effect of the interoperation of different LPWAN protocols on the overall message delay. Hence, control system engineers must validate the feasibility of the traditional wireless network assumptions for WNCS based on LPWANs. Furthermore, the network architecture of LPWANs must carefully adapt its operation in order to support the real-time requirements and control message priorities of large-scale control systems.
§ CONCLUSIONS
Wireless networked control systems are the fundamental technology of safety-critical control systems in many areas, including automotive electronics, avionics, building automation, and industrial automation. This article provided a tutorial and reviewed recent advances in wireless network design and optimization for wireless networked control systems. We discussed the critical interactive variables of communication and control systems, including sampling period, message delay, message dropout, and energy consumption. We then discussed the effect of wireless network parameters at all protocol layers on the probability distribution of these interactive variables.
Moreover, we reviewed the analysis and design of control systems that consider the effect of various subsets of these interactive variables on the control system performance. Considering the degree of interaction between control and communication systems, we discussed two design approaches: interactive design and joint design. We also described some practical WNCS testbeds. Finally, we highlighted major open research issues and identified possible future research directions: the analysis of the tradeoff between the benefit in control performance and the cost in complexity of the joint design, efficient abstractions of control and communication systems for use in the joint design, the inclusion of energy harvesting and diversity techniques in the joint design, and the extension of the joint design to wide-area wireless networked control systems.
Many high-energy astrophysical environments are strongly magnetized, i.e. the magnetic energy per particle exceeds the rest mass energy, so that the magnetization parameter σ≡ B_0^2/4π nm_ec^2≥1, where B_0 is the magnetic field strength, m_e is the electron mass, n is the plasma density, and c is the vacuum light velocity. In such environments, magnetic reconnection (MR), operating in the relativistic regime, plays a key role in the transfer of large amounts of magnetic energy to kinetic energy via field dissipation <cit.>. Motivated by numerous astrophysical observations, such as high-energy emission in pulsars <cit.>, cosmological gamma-ray bursts <cit.> and active galactic nuclei jets <cit.>, the study of relativistic MR has made rapid progress in the last few decades through analytical studies <cit.> as well as 2D <cit.> and 3D <cit.> particle-in-cell (PIC) simulations. However, due to difficulties in achieving the extreme magnetic energy densities that are required to observe relativistic MR in laboratory environments, previous experimental studies investigated mainly the non-relativistic regime (σ<1). These include experimental observations of MR in tokamaks <cit.> and dedicated experiments, such as MRX <cit.>. High-intensity laser-plasma interaction <cit.> is a promising way to break through the relativistic limit, as the energy densities that can be achieved by high-intensity laser facilities worldwide <cit.> are rising rapidly. These facilities provide an important platform for the study of relativistic MR and therefore attract extensive attention. In typical laser-driven MR experiments, two neighboring plasma bubbles are created by laser-matter interaction. Opposite azimuthal magnetic fields arise due to the Biermann battery effect (∇ n×∇ T_e, where T_e is the electron temperature) <cit.> and reconnect in the midplane as they are driven together by the frozen-in flow of the bulk plasma expansion. Although most previous studies are focused on the non-relativistic limit, it has been reported recently that relativistic MR conditions can be achieved with such a scenario in laboratory environments <cit.>. In the laser-driven reconnection studies based on the Biermann battery effect <cit.>, because the oppositely directed magnetic fields need to be compressed together by the plasma thermal flows, the ratio of the plasma thermal and magnetic energy, β, is high (β>1). As a result, the plasma is not magnetically dominated, and it is therefore not clear what role MR plays in terms of energy balance. Recently it has been reported that magnetically dominated MR can be achieved by a double-turn Helmholtz capacitor-coil target <cit.>, but this approach is very difficult to extend to the relativistic regime because it requires a kJ-class laser system. Thus, in spite of the remarkable progress that has been made, relativistic MR in low-β environments (β<1), which is closely related to the interpretation of many space plasma measurements and astronomical observations <cit.>, has not been thoroughly studied. In this paper, we propose a novel experimental setup based on the interaction of a readily available, moderately intense (TW-mJ-class) laser with a micro-sized plasma slab, resulting in ultrafast relativistic MR in the low-β regime.
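For orientation, both dimensionless parameters introduced above can be evaluated directly in Gaussian units. The short Python snippet below does so for field, density, and temperature values of the order reported later in this work; since the electron density at the reconnection site is not quoted explicitly in the text, the value used for σ is chosen merely to illustrate that σ≈25 corresponds to a few per cent of the critical density n_c.

    import math

    m_e = 9.109e-28            # electron mass [g]
    c   = 2.998e10             # speed of light [cm/s]

    def sigma(B, n):
        # Magnetization: sigma = B^2 / (4 pi n m_e c^2), Gaussian units.
        return B**2 / (4.0 * math.pi * n * m_e * c**2)

    def beta(B, n, kT):
        # Plasma beta: thermal over magnetic energy density, 8 pi n kT / B^2.
        return 8.0 * math.pi * n * kT / B**2

    B   = 1.0e8                # ~100 MG reconnection field [G]
    n_c = 1.1e21               # critical density of a 1-um laser [cm^-3]
    kT  = 6.0e3 * 1.602e-12    # 6 keV electron temperature [erg]

    print(sigma(B, 0.035 * n_c))   # ~25 at a few per cent of n_c (assumed density)
    print(beta(B, n_c, kT))        # ~0.03, i.e. beta << 1: magnetically dominated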
By irradiating a micro-sized plasma slab with a high-intensity laser pulse, it is possible to deposit the laser energy in a very small volume, hence significantly reducing the laser energy (by at least 1-2 orders of magnitude compared with the results reported in Ref. <cit.>) required to reach the relativistic MR regime (σ>1). Meanwhile, since the magnetic field lines are not driven by the plasma thermal flow, the magnetically dominated regime can be accessed. Using 3D PIC simulations, a comprehensive numerical experiment is presented, which demonstrates that the MR event observed in the proposed scheme has a significant effect on the whole system. It leads to fierce (0.1-TW-class) field dissipation and highly efficient particle acceleration, which accounts for 20% of the total energy transfer. The enormous magnetic tension gives rise to intense relativistic jets that have so far been absent in other relativistic MR studies based on laser-plasma interaction <cit.>. Owing to these features, the proposed scenario promises to provide an important platform to study the non-thermal signatures and energy transfer in relativistic MR. In addition, we present other MR signatures, including quantified agyrotropy peaks in the diffusion area <cit.>, and out-of-plane quadrupole field structures <cit.>. With recent advances in laser pulse cleaning techniques <cit.> and micro-target manufacturing <cit.>, the proposed scenario can be straightforwardly implemented in experiments.
§.§ Generation of relativistic jets.
A sketch of the simulation setup is shown in Fig. 1(a). A linearly polarized (in the y direction) laser with normalized laser amplitude a_0=eE_0/m_ecω_0 = 5 (intensity ∼ 10^19 W/cm^2) propagates along the x-axis, where E_0 is the laser field amplitude, and ω_0 = 2π c/λ_0 and λ_0 = 1μm are the frequency and wavelength of the laser, respectively. The laser spot size is 4 λ_0 and the duration is 15 T_0, where T_0≈ 3.3 fs is the laser cycle. A plasma slab with thickness (in the laser-polarization direction) d=1λ_0 and length (in the laser propagation direction) L=20λ_0 splits the laser pulse in half. The main part of the slab has a uniform density of 20n_c, where n_c=m_eω_0^2/4π e^2 is the critical density. At the end of the structure (the coronal region, represented by the area within the blue-box framework in Fig. 1), the density drops exponentially as x increases (scale length l=2λ_0). As the laser pulse sweeps along the slab, it drives two energetic electron beams on both sides of the plasma surface <cit.>. These electron beams are typically overdense (i.e. n_b>n_c) <cit.> and capable of generating 100-MG-level opposing azimuthal magnetic fields in the middle, as shown by the black arrows in Fig. 1(a). Detailed parameters of the simulation can be found in Methods. In the early stage (i.e. before the electron beams reach the corona), MR does not occur because a much stronger return current is excited inside the slab, which separates the antiparallel magnetic fields on each side of the slab, and the magnetic energy remains constant (after the initial rise due to the laser-matter interaction) during this period. As the electron beams approach the end of the structure, the plasma density decreases rapidly, so that the electron number in the local plasma becomes insufficient to form a return current that is strong enough to separate the magnetic fields. Therefore, due to Ampère's force law, the electron beams on both sides attract each other and flow into the mid-plane coronal plasma.
The magnetic field lines that move with the electron beams are pushed together and reconnect. The reconnection magnetic field in the corona is approximately 100 MG. An X-point magnetic field topology is observed, as shown by the bottom-right inset of Fig. 1(a). As the field topology changes, the explosive release of magnetic energy results in the emission of relativistic jets, as shown in Fig. 1(b-e). These jets are formed by the background plasma electrons in the corona, which start to appear at approximately t = 32T_0 [Fig. 1(b)] and acquire relativistic energies within a few laser cycles. They propagate backwards (-x direction) towards the exhaust region of the reconnection site (± z direction). The backward longitudinal momentum stems from efficient acceleration due to magnetic energy dissipation, as will be discussed in the remainder of this work. The electrons in the jets distinguish themselves from the rest of the background electrons that are heated by the laser pulse in this region by (i) remarkably higher energies (top 0.2% of the electron spectra, with mean energy E̅∼ 4.7 MeV), (ii) a considerably small y-divergence (i.e. θ_y=|P_y|/P∼0.1, where P is the electron momentum and P_y denotes its y-component), and (iii) forming a dense electron beam, with a density of n_jet∼ 5 n_c initially, which decreases rapidly due to dispersion in the z-direction. The total charge of the jets is approximately 0.2 nC.
§.§ Pressure tensor agyrotropy and observation of field-line rearrangement.
In order to locate the reconnection site in the corona, we calculate the scalar measure of electron pressure tensor agyrotropy that was suggested recently by Swisdak <cit.>, Q=(𝒫_12^2+𝒫_13^2+𝒫_23^2)/(𝒫_⊥^2+2𝒫_⊥𝒫_∥). Here [𝒫_∥ 𝒫_12 𝒫_13; 𝒫_12 𝒫_⊥ 𝒫_23; 𝒫_13 𝒫_23 𝒫_⊥ ] is the electron pressure tensor ℙ=m∫(𝐯-𝐯̅)(𝐯-𝐯̅)f d^3𝐯 transformed into a frame in which the diagonal components are in gyrotropic form, i.e. one of the coordinate axes points in the direction of the local magnetic field and the others are oriented such that the final two components of the diagonal of ℙ are equal (see the Appendix in Ref. <cit.> for details). f(x,y,z,𝐯) is the distribution function at position (x,y,z) and velocity 𝐯, and 𝐯̅ is the mean velocity. It should be noted that although Eq. (2) is the non-relativistic definition of the electron pressure tensor, relativistic effects are not expected to have a significant influence here, since more than 99% of the electrons in the corona are non-relativistic, namely γ-1<1, as can be seen in Fig. 4(c), where γ = 1/√(1-(v/c)^2) is the Lorentz factor of the electrons. In general, Q=0 for gyrotropic electron pressure tensors, and the maximum departure from gyrotropy is Q=1. High values of Q usually identify regions of interesting magnetic topology, such as separatrices and X-points in magnetic reconnection. Figure 2 shows that the space and time where Q reaches its peak in the 3D PIC simulation coincide with the appearance of the electron jets shown in Fig. 1(b-c). The slab is significantly pinched during the interaction <cit.>, which results in a higher inflow velocity and thus a faster reconnection rate. Knowing the location of the reconnection site allows one to calculate the magnetization parameter. By substituting the electron density and magnetic field for the region with peak agyrotropy, one obtains the maximum magnetization parameter σ_m≈25.
Therefore we conclude that the observed MR is in the relativistic regime. It is generally understood that the jets are accelerated out of the reconnection region by the magnetic tension forces (T^m=(B·∇)B/4π) of the newly-connected, strongly-bent magnetic field lines. In Fig. 3, we plot the magnetic fields at x=26λ_0, and the z-component of the magnetic tension T^m_z at the midplane (y=0). A quadrupole longitudinal magnetic field pattern emerges as the MR occurs, which is indicative of Hall-like reconnection <cit.>, where the reconnection rate is significantly enhanced due to the decoupling of electron and ion motion. The amplitude of B_x is 20 MG, which is ∼20% of the reconnected magnetic field. Figure 3 illustrates that an enormous magnetic tension force is generated in the corona because of the reconnection. The tension exceeds the relativistic light pressure (P = 2I/c with a_0∼1) within one laser wavelength, which exerts a strong compression on the electron jets that results in the observed high-density emission. Moreover, the shape of the magnetic tension in Fig. 3 shows the dynamics of the newly-connected field lines strengthening and relaxing, during which energy is transferred to the plasma particles. This phenomenon is consistent with the shape and emission direction of the jets that we observed in Fig. 1(c-e).
§.§ Magnetic energy dissipation and particle acceleration.
In order to gain a deeper understanding of the energy transfer in relativistic MR, we now focus on the field dissipation process. The observed dissipation power is of the order of 0.1 TW, which results in a highly efficient energy transfer from the magnetic fields to the kinetic energy of the plasma. Figure 4(a) is a graphic demonstration of the 3D field dissipation, where the work done by the longitudinal electric field per unit volume and unit time (i.e. E_xJ_x) is presented. The blue chains on both sides of the slab show the process of the laser-driven electron beams losing energy to the static electric field, which is sometimes referred to as "enhanced target normal sheath acceleration" <cit.>. Meanwhile, in the midplane, one can see that the energy flows are directed in the opposite direction as MR occurs, i.e. the electrons in the coronal plasma collectively extract energy from the (reconnection-associated) electric fields and thus gain kinetic energy. To further clarify the dissipation process, we also conducted a comparison simulation, which is presented in the Supplemental Material. To further understand the role that relativistic MR plays in the energy transfer process, we calculated the energy change for each component in the simulation, i.e. the laser pulse (frequency ≥ 0.8ω_0), the static electric/magnetic fields (frequency < 0.8ω_0), and the kinetic energy of electrons and protons, and the results are shown in Fig. 4(b). Let Δ E_+ denote the total energy increase in the electrostatic field, coronal electrons and protons, Δ E_- denote the energy reduction in the laser pulse and other electrons (mostly laser-driven electron beams), and Δ E_m represent the energy loss in the static magnetic fields. As one can see, a significant contribution to the total energy transfer comes from the annihilation of magnetic fields due to relativistic MR, which accounts for ∼20% of the total energy gain. Also, Fig. 4(b) indicates that the static magnetic field loses ∼ 3.0 mJ of energy in 5 laser cycles, i.e. an efficient magnetic energy dissipation with power ∼ 0.18 TW is obtained in the simulation.
In addition, the evolution of the static magnetic energy and the kinetic energy of the electron jets are plotted in the inset of Fig. 4(b). Good time-synchronization is observed between the magnetic field dissipation and the electron acceleration. In Fig. 4(c), we plot the electron energy spectrum in the corona from 30T_0 to 36T_0. One can see that as the reconnection occurs, the released magnetic energy is transferred to the non-thermal electrons. The total electron kinetic energy in the corona increases by a factor of 4 during the reconnection, and a hard power-law electron energy distribution dN/dγ∝1/γ^p is obtained with index p≈1.8. In addition, according to the electron energy spectrum at 30 T_0, the low-energy part (with γ-1<0.1, containing 98.8% of the total charge) has a temperature of around 6 keV; given the reconnection magnetic field of ∼ 100 MG and an electron density of ∼ 1n_c, the plasma β is only approximately 0.02. To reveal the mechanism of particle acceleration, we track the 100 electrons that attain the highest energies during the MR, and display one representative case in Fig. 4(d), where the electron kinetic energy and the work done by each component of the electric fields are plotted as a function of time. It illustrates that the electric field associated with reconnection at the X-point is primarily responsible for the energization of the electrons. This is supported by the trajectories of the tracked electrons in phase space [shown in the inset of Fig. 4(d), where the trajectory of the representative electron used for the energy plot is drawn in red], which show that almost all the electrons gain their energies within a narrow plane (-0.1λ_0<y<0.1λ_0) adjacent to the X-point. In this region, since the magnetic field vanishes due to the reconnection, the electrons become unmagnetized and can be accelerated freely. Moreover, as indicated by the horizontal lines in the phase space, once the electrons escape from this narrow plane, the acceleration process stops immediately. Recent studies <cit.> have pointed out that the randomness of the electron injection into and escape from the acceleration region gives rise to the observed power-law energy distribution shown in Fig. 4(c).
§.§ Outlook.
The present work is the first demonstration that the interaction of a high-intensity laser and a micro-scale plasma can trigger relativistic MR, which leads to highly efficient magnetic energy dissipation and gives rise to intense relativistic jets. In order to explore the potential of the proposed scenario, we conducted a series of 3D PIC simulations to study the laser intensity dependence of the maximum magnetization parameter σ_m and the magnetic energy dissipation Δ E_m. The results, displayed in Fig. 5, show that, by applying a micro-sized slab target, relativistic MR can be accomplished at approximately a_0=2.5 (where σ_m = 1.7), with a laser energy of only 50 mJ. Such laser systems are readily available worldwide, which may open a way to extensive experimental study of relativistic MR and greatly advance our knowledge of fundamental processes such as particle acceleration and magnetic energy dissipation. Moreover, as the whole setup is micro-scale, the PIC simulations are considerably less expensive than in previous laser-plasma-driven MR experiments <cit.>.
Since some processes of interest, such as the particle acceleration, can only be captured by means of fully kinetic PIC simulations, it is more feasible to guide and interpret future MR experiments via numerical simulations in the proposed scenario. Nevertheless, it is worth noting that despite the small physical size (L∼1-10μ m), due to the high energy density, in dimensionless variables <cit.> [L/(√(β)d_i)∼10, where d_i is the ion skin depth] the system here is comparable to previous laser-plasma MR experiments. Figure 5 illustrates that as the incident laser intensity grows, the rate at which σ_m increases gradually slows after the initial fast-growing phase (3<a_0<10), while the opposite evolution is observed for Δ E_m. This is because the reconnection site moves towards the high-density region as the laser-driven electron beams become increasingly intense, which results in stronger field dissipation (more electrons gain energy from the reconnecting field). On the other hand, the enhancement of the local plasma frequency ω_p slows down the growth of σ_m. Nevertheless, it is shown that for a normalized laser amplitude a_0=30 (I∼10^21W/cm^2), the regime of σ∼ 120 can be accessed and the released magnetic energy due to field dissipation is as high as 75 mJ. With next-generation Petawatt laser facilities, such as ELI <cit.>, these results might open a new realm of possibilities for novel experimental studies of laboratory astrophysics and highly efficient particle acceleration induced by relativistic reconnection.
§ METHODS
The 3D PIC simulations presented in this work were conducted with the code EPOCH <cit.>. In this study we restrict the simulations to the collisionless case, which is justified by the high temperature (∼ 10 keV) achieved in laser-plasma interactions, leading to particles having mean free paths larger than the system size. A moderately high-intensity laser beam with normalized amplitude a_0 = 5 and a focal spot of 4λ_0 is used to drive the relativistic MR. The temporal profile of the laser pulse is T(t) = sin^2(π t/τ), where 0≤ t ≤τ = 15 T_0. The power and energy of the laser pulse are approximately 12 TW and 200 mJ, respectively. The plasma slab dimensions are x× y× z = 20(7)λ_0×1λ_0×10λ_0, where the number in brackets in the x direction denotes the length of the coronal region. The slab is pre-ionized (proton-electron plasma); the initial temperature is T_e=T_p=1 keV. The plasma density is uniform (n=n_0) in the main part of the slab, and decreases with x in the coronal region, i.e. n=n_0exp(-(x-x_0)^2/2l^2), where x≥ x_0=21λ_0 and l=2λ_0 is the scale length. In most of the simulations presented in this work [except for Fig. 5], n_0 = 20 n_c is applied, while in Fig. 5, we use n_0 = 50 n_c for the scan runs in order to avoid problems caused by self-induced transparency at ultrahigh laser intensities. The ultra-short laser pulse duration and the relatively low plasma density are used to improve computational efficiency, which may cause slight differences in the simulation outputs, but does not crucially alter the underlying physics of relativistic MR. For the primary simulation presented in this work, the dimensions of the simulation box are x× y× z = 30λ_0× 15λ_0× 15λ_0, sampled by 1200 × 600 × 600 cells with 5 macro-particles per cell for the electron species and 3 for the proton species. The output of this high-resolution simulation was used to produce Figs. 1-3 and Fig. 4(a).
The rest of the results presented in this work are obtained with a larger simulation box, x× y× z = 40λ_0× 20λ_0× 20λ_0, to ensure that the laser pulse and electron beams do not leave the simulation box while we analyze the magnetic field energy, but sampled with a lower resolution of 800 × 400 × 400 to reduce the computational time. The numerical convergence has been confirmed by comparing the physical quantities of interest for the simulations with different resolutions.
§ SUPPLEMENTAL MATERIAL
Here we present a simulation with the setup depicted in Panel (a) as a comparison to the results shown in our main paper. In this setup, an additional plate is placed on the -y side of the plasma slab so that half of the laser beam is blocked. The plate thickness (in the longitudinal direction) is 5λ_0 and it has the same density (20n_c) as the slab. To ensure that the energy delivered to the corona [the volume within the blue box frame in Panel (a)] by the laser is roughly the same, we increase the laser amplitude by √(2) (a_0 = 7.07). In this situation, only one electron beam reaches the coronal region, as shown by the inset in Panel (a), and as a result the effects induced by MR are much weaker. It should be noted that although almost all of the laser energy on the -y side of the slab is blocked by the newly-added plate, the surface current on this side is, as shown by Panel (b), not exactly zero. Therefore MR may still occur in the area with an anti-parallel magnetic field configuration [zoom-in region in Panel (b)]. However, it is found that there are significant differences that support the arguments made in the main paper, as shown in Panels (c-d). In Panel (c) we show the field dissipation (E_xj_x) in the corona in comparison with Fig. 4(a). Obviously, the process is much stronger and more significant in the 2-beam case. The reason is that the topology of the magnetic field lines is changed significantly when 2 electron beams from both sides of the slab reach the corona simultaneously. In contrast, when there is only one beam (even if it is approximately twice as intense), the collective motion of electrons, which extract energy from the collapse of magnetic fields, is almost negligible. As a result, during the MR process, more magnetic energy is released and transferred to the plasma in the original 2-beam case, which is consistent with the analysis of the electron energy in the corona, as shown in Panel (d). The total kinetic energy gained by the coronal electrons is 25% higher in the relativistic MR scenario proposed in the main paper. Moreover, we note a sharp increase of E_k,total between 32 T_0 and 36 T_0 in the 2-beam case, which is in good agreement with the period during which the ultrafast relativistic MR takes place.
§ REFERENCES
[s1] Yamada, M., Kulsrud, R. & Ji, H., Magnetic reconnection. Rev. Mod. Phys. 82, 603 (2010).
[s15] Uzdensky, D. A., Magnetic reconnection in extreme astrophysical environments. Space Sci. Rev. 160, 45 (2011).
[s13] Hoshino, M. & Lyubarsky, Y., Relativistic reconnection and particle acceleration. Space Sci. Rev. 173, 521 (2012).
[s2] Lyubarsky, Y. & Kirk, J. G., Reconnection in a striped pulsar wind. The Astrophysical Journal 547, 437 (2001).
[s7] Drenkhahn, G. & Spruit, H. C., Efficient acceleration and radiation in Poynting flux powered GRB outflows. Astronomy & Astrophysics 391, 1141 (2002).
[s8] Romanova, M. & Lovelace, R. V. E., Magnetic field, reconnection, and particle acceleration in extragalactic jets. Astronomy & Astrophysics 262, 26 (1992).
[s9] Di Matteo, T., Magnetic reconnection: flares and coronal heating in active galactic nuclei. Monthly Notices of the Royal Astronomical Society 299, 15 (1998).
[s18] Lyutikov, M. & Uzdensky, D., Dynamics of relativistic reconnection. The Astrophysical Journal 589, 893 (2003).
[s19] Lyubarsky, Y. E., On the relativistic magnetic reconnection. Monthly Notices of the Royal Astronomical Society 358, 113 (2005).
[s68] Drake, J. F. et al., Electron acceleration from contracting magnetic islands during reconnection. Nature 443, 553 (2006).
[s21] Zenitani, S. & Hoshino, M., Particle acceleration and field dissipation in relativistic current sheet of pair plasmas. The Astrophysical Journal 670, 702 (2007).
[s23] Lyubarsky, Y. & Liverts, M., Particle acceleration in the driven relativistic reconnection. The Astrophysical Journal 682, 1436 (2008).
[s26] Nalewajko, K. et al., On the distribution of particle acceleration sites in plasmoid-dominated relativistic magnetic reconnection. The Astrophysical Journal 815, 101 (2015).
[s12] Sironi, L. & Spitkovsky, A., Relativistic reconnection: an efficient source of non-thermal particles. The Astrophysical Journal 783, 21 (2014).
[s27] Guo, F. et al., Formation of hard power laws in the energetic particle spectra resulting from relativistic magnetic reconnection. Phys. Rev. Lett. 113, 155005 (2014).
[s29] Liu, W. et al., Particle energization in 3D magnetic reconnection of relativistic pair plasmas. Phys. Plasmas 18, 052105 (2011).
[s30] Kagan, D., Milosavljevic, M. & Spitkovsky, A., A flux rope network and particle acceleration in three-dimensional relativistic magnetic reconnection. The Astrophysical Journal 774, 41 (2013).
[s31] Cerutti, B. et al., Three-dimensional relativistic pair plasma reconnection with radiative feedback in the Crab Nebula. The Astrophysical Journal 782, 104 (2014).
[s32] Chapman, I. T. et al., Magnetic reconnection triggering magnetohydrodynamic instabilities during a sawtooth crash in a tokamak plasma. Phys. Rev. Lett. 105, 255002 (2010).
[s33] Yamada, M. et al., Study of driven magnetic reconnection in a laboratory plasma. Phys. Plasmas 4, 1936 (1997).
[s34] Nilson, P. M. et al., Magnetic reconnection and plasma dynamics in two-beam laser-solid interactions. Phys. Rev. Lett. 97, 255001 (2006).
[s37] Willingale, L. et al., Proton deflectometry of a magnetic reconnection geometry. Phys. Plasmas 17, 043104 (2010).
[s35] Zhong, J. Y. et al., Modelling loop-top X-ray source and reconnection outflows in solar flares with intense lasers. Nat. Phys. 6, 984 (2011).
[s36] Dong, Q. L. et al., Plasmoid ejection and secondary current sheet generation from magnetic reconnection in laser-plasma interaction. Phys. Rev. Lett. 108, 215001 (2012).
[s38] Fiksel, G. et al., Magnetic reconnection between colliding magnetized laser-produced plasma plumes. Phys. Rev. Lett. 113, 105003 (2014).
[s39] Rosenberg, M. J. et al., A laboratory study of asymmetric magnetic reconnection in strongly driven plasmas. Nat. Commun. 6, 6190 (2015).
[s40] Rosenberg, M. J. et al., Slowing of magnetic reconnection concurrent with weakening plasma inflows and increasing collisionality in strongly driven laser-plasma experiments. Phys. Rev. Lett. 114, 205004 (2015).
[s41] Extreme Light Infrastructure European Project, www.eli.laser.eu.
[s42] Yanovsky, V. et al., Ultra-high intensity 300-TW laser at 0.1 Hz repetition rate. Opt. Express 16, 2109 (2008).
[s44] Stamper, J. A. et al., Spontaneous magnetic fields in laser-produced plasmas. Phys. Rev. Lett. 26, 1012 (1971).
[s46] Raymond, A. et al., Relativistic Magnetic Reconnection in the Laboratory. https://arxiv.org/abs/1610.06866 (2016).
[s65] Totorica, S. R., Abel, T. & Fiuza, F., Nonthermal electron energization from magnetic reconnection in laser-driven plasmas. Phys. Rev. Lett. 116, 095003 (2016).
[s66] Totorica, S. R., Abel, T. & Fiuza, F., Particle acceleration in laser-driven magnetic reconnection. Phys. Plasmas 24, 041408 (2017).
[s69] Pei, X. X. et al., Magnetic reconnection driven by Gekko XII laser with a Helmholtz capacitor-coil target. Phys. Plasmas 23, 032125 (2016).
[s51] Guo, F. et al., Particle acceleration during magnetic reconnection in a low-beta pair plasma. Phys. Plasmas 23, 055708 (2016).
[s52] Ping, Y. L. et al., Three-dimensional fast magnetic reconnection driven by relativistic ultraintense femtosecond lasers. Phys. Rev. E 89, 031101 (2014).
[s53] Gu, Y. J. et al., Fast magnetic-field annihilation in the relativistic collisionless regime driven by two ultrashort high-intensity laser pulses. Phys. Rev. E 93, 013203 (2016).
[s55] Swisdak, M., Quantifying gyrotropy in magnetic reconnection. Geophys. Res. Lett. 43, 43 (2016).
[s56] Uzdensky, D. A. & Kulsrud, R. M., Physical origin of the quadrupole out-of-plane magnetic field in Hall magnetohydrodynamic reconnection. Phys. Plasmas 13, 062305 (2006).
[s57] Jullien, A. et al., Highly efficient temporal cleaner for femtosecond pulses based on cross-polarized wave generation in a dual crystal scheme. Appl. Phys. B 84, 409 (2006).
[s58] Thaury, C. et al., Plasma mirrors for ultrahigh-intensity optics. Nat. Phys. 3, 424 (2007).
[s59] Lévy, A. et al., Double plasma mirror for ultrahigh temporal contrast ultraintense laser pulses. Opt. Lett. 32, 310 (2007).
[s60] Fischer, J. & Wegener, M., Three-dimensional optical laser lithography beyond the diffraction limit. Laser Photon. Rev. 7, 22 (2013).
[s61] Naumova, N. et al., Attosecond electron bunches. Phys. Rev. Lett. 93, 195003 (2004).
[s62] Yi, L. et al., Bright X-ray source from a laser-driven microplasma waveguide. Phys. Rev. Lett. 116, 115001 (2016).
[s63] Kaymak, V. et al., Nanoscale ultradense Z-pinch formation from laser-irradiated nanowire arrays. Phys. Rev. Lett. 117, 035004 (2016).
[s64] Zou, D. et al., Laser-driven ion acceleration from plasma micro-channel targets. Sci. Rep. 7, 42666 (2017).
[s54] Ji, H. & Daughton, W., Phase diagram for magnetic reconnection in heliophysical, astrophysical, and laboratory plasmas. Phys. Plasmas 18, 111207 (2011).
[s67] Arber, T. D. et al., Contemporary particle-in-cell approach to laser-plasma modelling. Plasma Phys. Control. Fusion 57, 113001 (2015).
§ ACKNOWLEDGEMENTS
The authors would like to thank J Tenbarge, I Pusztai, S Newton, T Dubois, and the rest of the pliona team for fruitful discussions. This work is supported by the Knut and Alice Wallenberg Foundation, the European Research Council (ERC-2014-CoG grant 64712) and the National Natural Science Foundation of China (No. 11505262). The simulations were performed on resources at Chalmers Centre for Computational Science and Engineering (C3SE) provided by the Swedish National Infrastructure for Computing (SNIC).
§ AUTHOR CONTRIBUTIONS STATEMENT
L.Q.Y. designed and conducted the simulations and analysed the results, under the supervision of T.F. L.Q.Y. and T.F. wrote the paper with contributions from all the coauthors.
§ ADDITIONAL INFORMATION
Competing financial interests: The authors declare no competing financial interests.
Haus der Astronomie, Campus MPIA, Königstuhl 17, D-69117 Heidelberg, [email protected]

This lesson unit has been developed within the framework of the EU Space Awareness project. It provides an insight into the history and navigational methods of the Bronze Age Mediterranean peoples. The students explore the link between exciting history and astronomical knowledge. Besides an overview of ancient seafaring in the Mediterranean, the students explore early navigational skills in two hands-on activities, using the stars and constellations and their apparent nightly movement across the sky. In the course of the activities, they become familiar with the stellar constellations and how they are distributed across the northern and southern sky.

Navigation in the Ancient Mediterranean and Beyond M. Nielbock Received August 8, 2016; accepted March 2, 2017
===================================================

§ BACKGROUND INFORMATION

§.§ Cardinal directions

The cardinal directions are defined by astronomical processes like the diurnal and annual apparent movements of the Sun and the apparent movements of the stars. In ancient and prehistoric times, the sky certainly had a different significance than today. This is reflected in the many myths all around the world. As a result, we can assume that the processes in the sky have been watched and monitored closely. In doing this, the underlying cycles and visible phenomena were easy to observe.

For any given position on Earth except the equatorial region, the Sun always culminates towards the same direction (Fig. <ref>). The region between the two tropics, 23.5° north and south of the equator, is special, because the Sun can attain zenith positions at local noon throughout the year. During the night, the stars rotate around the celestial poles. Archaeological evidence from prehistoric eras like burials and the orientation of buildings demonstrates that the cardinal directions were common knowledge in a multitude of cultures already many millennia ago. Therefore, it is obvious that they were applied to early navigation. The magnetic compass was unknown in Europe until the 13th century CE <cit.>.

§.§ Latitude and longitude

Any location on an area is defined by two coordinates. The surface of a sphere is a curved area, but using coordinates like up and down does not make much sense, because the surface of a sphere has neither a beginning nor an ending. Instead, we can use spherical polar coordinates originating from the centre of the sphere, with the radius being fixed (Fig. <ref>). Two angular coordinates remain. Applied to the Earth, they are called the latitude and the longitude. The Earth's rotation provides the symmetry axis. The North Pole is defined as the point where the theoretical axis of rotation meets the surface of the sphere and where the rotation is counter-clockwise when looking at the North Pole from above. The opposite point is the South Pole. The equator is defined as the great circle half way between the two poles.

The latitudes are circles parallel to the equator. They are counted from 0° at the equator to ±90° at the poles. The longitudes are great circles connecting the two poles of the Earth. For a given position on Earth, the longitude going through the zenith, the point directly above, is called the meridian. This is the line the Sun apparently crosses at local noon. The origin of this coordinate is defined as the Prime Meridian, and it passes through Greenwich, where the Royal Observatory of England is located.
From there, longitudes are counted from 0° to ±180°. Example: Heidelberg in Germany is located at 49.4° North and 8.7° East.

§.§ Elevation of the pole (pole height)

If we project the terrestrial coordinate system of latitudes and longitudes onto the sky, we get the celestial equatorial coordinate system. The Earth's equator becomes the celestial equator, and the geographic poles are extrapolated to build the celestial poles. If we were to take a photograph of the northern sky with a long exposure, we would see from the trails of the stars that they all revolve about a common point, the northern celestial pole (Fig. <ref>).

In the northern hemisphere, there is a moderately bright star near the celestial pole, the North Star or Polaris. It is the brightest star in the constellation of the Little Bear, Ursa Minor (Fig. <ref>). In our era, Polaris is less than a degree off. However, 1000 years ago, it was 8° away from the pole. Therefore, today we can use it as a proxy for the position of the celestial North Pole. At the southern celestial pole, there is no such star that can be observed with the naked eye. Other procedures have to be applied to find it.

If we stood exactly at the geographic North Pole, Polaris would always be directly overhead. We can say that its elevation would be (almost) 90°. This information already introduces the horizontal coordinate system (Fig. <ref>). It is the natural reference we use every day. We, the observers, are the origin of that coordinate system, located on a flat plane whose edge is the horizon. The sky is imagined as a hemisphere above. The angle between an object in the sky and the horizon is the altitude or elevation. The direction within the plane is given as an angle between 0° and 360°, the azimuth, which is usually counted clockwise from north. In navigation, this is also called the bearing. The meridian is the line that connects North and South at the horizon and passes the zenith.

For any other position on Earth, the celestial pole or Polaris would appear at an elevation smaller than 90°. At the equator, it would just graze the horizon, i.e. be at an elevation of 0°. The correlation between the latitude (North Pole = 90°, Equator = 0°) and the elevation of Polaris is no coincidence. Figure <ref> combines all three mentioned coordinate systems. For a given observer at any latitude on Earth, the local horizontal coordinate system touches the terrestrial spherical polar coordinate system at a single tangent point. The sketch demonstrates that the elevation of the celestial North Pole, also called the pole height, is exactly the northern latitude of the observer on Earth. From this we can conclude that if we measure the elevation of Polaris, we can determine our latitude on Earth with reasonable precision.

§.§ Circumpolar stars and constellations

In ancient history, e.g. during the Bronze Age, Polaris could not be used to determine north. Due to the precession of the Earth's axis, it was about 30° away from the celestial North Pole in 3,500 BCE. Instead, the star Thuban (α Draconis) was more appropriate, as it was less than 4° off. However, it was considerably fainter than Polaris and perhaps not always visible to the naked eye.

When looking at the night sky, some stars within a certain radius around the celestial poles never set; they are circumpolar (see Fig. <ref>). Navigators were skilled enough to determine the true position of the celestial pole by observing a few stars close to it. This method also works for the southern celestial pole.
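The pole-height rule and the circumpolar condition can also be checked with a short computation. The following minimal Python sketch is not part of the original unit (which uses maps and videos); the function name and the example star are our own illustrative choices. It encodes the rule that, for a northern-hemisphere observer, a star is circumpolar when its declination is at least 90° minus the latitude, i.e. the pole height.

def classify_star(declination_deg, latitude_deg):
    """Classify a star for a northern-hemisphere observer (latitude > 0).
    The pole height equals the latitude; a star is circumpolar when its
    declination is at least 90 deg minus the pole height."""
    pole_height = latitude_deg
    if declination_deg >= 90.0 - pole_height:
        return "circumpolar (never sets)"
    if declination_deg <= -(90.0 - pole_height):
        return "never rises"
    return "rises and sets"

# Dubhe (alpha UMa, declination ~ +61.8 deg) seen from Heidelberg (49.4° N)
print(classify_star(61.8, 49.4))   # -> "circumpolar (never sets)"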
There are two videos that demonstrate the phenomenon.

CircumpolarStars Heidelberg 49degN (Duration: 0:57) <https://youtu.be/uzeey9VPA48>

CircumpolarStars Habana 23degN (Duration: 0:49) <https://youtu.be/zggfQC_d7UQ>

They show the movement of the night sky when looking north for two different latitudes coinciding with the cities of Heidelberg, Germany (49° North) and Havana, Cuba (23° North). The videos illustrate that

* there are always stars and constellations that never set. Those are the circumpolar stars and constellations.
* the angle between the celestial pole (Polaris) and the horizon depends on the latitude of the observer. In fact, these angles are identical.
* the circumpolar region depends on the latitude of the observer. It is bigger for locations closer to the pole.

If the students are familiar with the usage of a planisphere, they can study the same phenomenon by watching the following two videos. They show the rotation of the sky for the latitudes 20° and 45°. The transparent area reveals the visible sky for a given point in time. The dashed circle indicates the region of circumpolar stars and constellations.

CircumPolarStars phi N20 (Duration: 0:37) <https://youtu.be/Uv-xcdqhV00>

CircumPolarStars phi N45 (Duration: 0:37) <https://youtu.be/VZ6RmdzbpPw>

When sailing north or south, sailors observe that with the changing elevation of the celestial pole the circumpolar range is altered, too. Therefore, whenever navigators see the same star or constellation culminating – i.e. passing the meridian – at the same elevation, they stay on the same "latitude". Although the educated among the ancient Greeks were familiar with the concept of latitude on a spherical Earth, common sailors were probably not. For them, it was sufficient to realise the connection between the elevation of stars and their course. Ancient navigators knew the night sky very well. In particular, they utilised the relative positions of constellations, which helped them to determine their position in terms of latitude.

§.§ Early seafaring and navigation in the Mediterranean

Navigation using celestial objects is a skill that emerged already long before humans roamed the Earth. Today, we know numerous examples among animals that find their course using the day or night sky. Bees and monarch butterflies navigate by the Sun <cit.>, just like starlings do <cit.>. Even more impressive is the ability of birds <cit.> and seals <cit.> to identify the position of stars during night-time for steering a course. However, in our modern civilisation with its intense illumination of cities, strong lights can be mistaken for celestial objects. For instance, moths use the moon to maintain a constant course, but if confused by a street lamp, they keep on circling around it until exhaustion <cit.>. Hence, light pollution is a serious threat to many animals. Its magnitude is demonstrated by Fig. <ref>.

Among the first humans to have navigated the open sea were the aboriginal settlers of Australia some 50,000 years ago <cit.>. The oldest records of seafaring in the Mediterranean date back to 7,000 BCE <cit.>, admittedly done with boats or small ships that were propelled by paddles only. The routes were restricted to the vicinity of the coast, where landmarks helped to navigate to the desired destinations. In order to be able to cross larger distances, propulsion independent of muscle force is needed. Therefore, the sail was one of the most important inventions in human history, similar in its significance to the wheel.
Around the middle of the 4th millennium BCE, Egyptian ships sailed the eastern Mediterranean <cit.> and established trade routes with Byblos in Phoenicia, the biblical Canaan, now Lebanon. This is about the time when the Bronze Age began. Tin is an important ingredient of bronze. After the depletion of the local deposits, tin sources in central and western Europe triggered large-scale trade <cit.>. Transportation over large distances inside and outside the Mediterranean was accomplished by ships.

Soon, the navigators realised that celestial objects, especially stars, can be used to keep the course of a ship. Such skills have been mentioned in early literature like Homer's Odyssey, which is believed to date back to the 8th century BCE. The original sources are thought to originate from the Bronze Age, in which the Minoans of Crete were a particularly influential people that lived between 3,650 and 1,450 BCE in the northern Mediterranean, and who sailed the Aegean Sea. Since many of their sacral buildings were aligned with the cardinal directions and astronomical phenomena like the rising Sun and the equinoxes <cit.>, it is reasonable to think that they used this knowledge for navigation, too <cit.>. The Minoans sailed to the island of Thera and to Egypt, which would have taken them on open water for several days.

The Greek poet Aratos of Soli published his Phainomena around 275 BCE <cit.>, in which he provided detailed positions of constellations and their order of rising and setting, which would be vital information for any navigator to maintain a given course. He would simply have pointed his ship at a bearing and been able to keep it with the help of stellar constellations that appeared towards that heading. The azimuth of a given star when rising or setting remains constant throughout the year, except for a slow variation caused by the 26,000-year period of the precession of the Earth's axis. Interestingly, Aratos' positions did not fit the Late Bronze and Early Iron Age but the era of the Minoan reign <cit.> some 2,000 years earlier.

Around 1200 BCE, the Phoenicians became the dominating civilisation in the Mediterranean. They built colonies along the southern and western coasts of the Mediterranean and beyond. Among them was the colony of Gades (now Cádiz) just outside the Strait of Gibraltar, which served as a trading point for goods and resources from Northern Europe <cit.>. Several documented voyages through the Atlantic Ocean took them to Britain and even several hundred miles south along the African coast <cit.>. The Greek historian Herodotus (ca. 484–420 BCE) reports on a Phoenician expedition funded by the Egyptian Pharaoh Necho II (610–595 BCE) that set out from the Red Sea to circumnavigate Africa and returned to Egypt via the Mediterranean <cit.>. The sailors apparently reported that at times the Sun was located north <cit.>, which is expected after crossing the equator to the south. All this speaks in favour of extraordinary navigational skills. After the Persians conquered the Phoenician homeland in 539 BCE, their influence declined, but it was re-established by descendants of their colonies, the Carthaginians.

§.§ Pytheas

A very notable and well documented long-distance voyage has been passed on by ancient authors and scholars like Strabo, Pliny and Diodorus of Sicily. It is the voyage of Pytheas (ca.
380–310 BCE), a Greek astronomer, geographer and explorer from Marseille who around 320 BCE apparently left the Mediterranean, travelled along the European west coast and made it north to the British Isles and beyond the Arctic Circle, during which he possibly reached Iceland or the Faroe Islands, which he called Thule <cit.>. Massalia (or Massilia), as it was called then, was founded by Phocaean Greeks around 600 BCE, and it quickly evolved into one of the biggest and wealthiest Greek outposts in the Western Mediterranean, with strong trade relations to Celtic tribes who occupied most of Europe <cit.>. Pytheas was born into the Late Bronze Age, when the trade with resources from Northern Europe was flourishing. Not much was known in Greek geography about this part of the world, except that the barbarians living there mined the tin ore and delivered the precious amber that the whole Mediterranean so desperately longed for. Perhaps it was out of pure curiosity that Pytheas set out to explore these shores.

His voyage was a milestone, because Pytheas was a scientist and a great observer. He already used a gnomon or a sundial, which allowed him to determine his latitude and measure the time during his voyage <cit.>. He also noticed that in summer the Sun shines longer at higher latitudes. In addition, he was the first to notice a relation between the tides, which are practically absent in the Mediterranean, and the lunar phases <cit.>.

§ GLOSSARY

* Apparent movement: Movement of celestial objects which in fact is caused by the rotation of the Earth.
* Cardinal directions: Main directions, i.e. North, South, West, East.
* Circumpolar: Property of celestial objects that never set below the horizon.
* Culmination: Passing of the meridian by celestial objects. These objects attain their highest or lowest elevation there.
* Diurnal: Concerning a period that is caused by the daily rotation of the Earth around its axis.
* Elevation: Angular distance between a celestial object and the horizon.
* Great circle: A circle on a sphere whose radius is identical to the radius of the sphere.
* Meridian: A line that connects North and South at the horizon via the zenith.
* Pole height: Elevation of a celestial pole. Its value is identical to the latitude of the observer on Earth.
* Precession: Besides the rotation of a gyroscope or any spinning body, the rotation axis often also moves in space. This is called precession. As a result, the rotation axis constantly changes its orientation and points to different points in space. The full cycle of the precession of the Earth's axis takes roughly 26,000 years.
* Spherical polar coordinates: The natural coordinate system of a flat plane is Cartesian and measures distances in two perpendicular directions (ahead, back, left, right). For a sphere, this is not very useful, because it has neither beginning nor ending. Instead, the fixed point is the centre of the sphere. When projected outside from the central position, any point on the surface of the sphere can be determined by two angles, with one of them being related to the symmetry axis. Such an axis defines two poles. In addition, there is the radius that represents the third dimension of space, which permits determining each point within a sphere. This defines the spherical polar coordinates.
When defining points on the surface of a sphere, the radius stays constant.
* Sundial: A stick that projects a shadow cast by the Sun. The orientation and length of the shadow permit determining the time and latitude.
* Zenith: Point in the sky directly above.

§ LIST OF MATERIAL

The list contains items needed by one student. The teacher may decide that they work in groups of two.
* Worksheet
* Pair of compasses
* Pencil
* Ruler
* Calculator
* Protractor
* Torch (for the supplemental third activity)
* Magnetic compass (optional, third activity)
* Computer with MS Excel installed
* Excel spreadsheet: astroedu1645_AncientMediterranean_BrightStars.xlsx

§ GOALS

With this activity, the students will learn that
* celestial navigation was developed already many centuries ago.
* apart from using Polaris there are other methods to determine cardinal directions from the positions of stars.
* ancient navigators were able to successfully navigate on open water following stars and constellations.

§ LEARNING OBJECTIVES

The students will be able to
* describe methods to determine the cardinal directions from observing the sky.
* name prominent stellar constellations.
* explain the nature of circumpolar stars and constellations.
* use an Excel spreadsheet for calculations.
* describe the importance of improved navigational skills for early civilisations.

§ TARGET GROUP DETAILS

Suggested age range: 14–19 years
Suggested school level: Middle School, Secondary School
Duration: 90 minutes

§ EVALUATION

According to the suggested questions listed in the description of the activity, the teacher should guide the students to recognise the positions and the apparent movement of celestial objects as indicators for cardinal directions. Before working on activity 1, the students should look closely at the map provided. A visit to a planetarium helps with remembering the constellations. Let the students name constellations they already know.

Ask the students (see Q&A in the activity description) where the North Star would be when observed from the terrestrial North Pole and the Equator. Then ask them how this position changes when travelling between these locations. When that concept is understood, introduce the rotation and the apparent motion of the stars. Show them the picture of the star trails and ask them where they come from. Ask them which of the stars or constellations remain above the horizon for the different locations on Earth mentioned above. Those are circumpolar stars and constellations.

Explain the usage of the Excel spreadsheet needed for activity 2. Let the students compare their results for different latitudes. Discuss with the students what the reasons for seafaring could have been in ancient epochs. The third, optional activity acts as a wrap-up and can be used to evaluate what the students have understood.

§ FULL DESCRIPTION OF THE ACTIVITY

§.§ Introduction

It would be beneficial if the activity were included in a larger context of seafaring, e.g. in geography, history, literature, etc.

Tip: This activity could be combined with other forms of acquiring knowledge, like giving oral presentations in history, literature or geography highlighting navigation. This would prepare the field in a much more interactive way than what a teacher can achieve by summarising the facts.

There are certainly good documentaries available on sea exploration that could be shown. As an introduction to celestial navigation in general and the early navigators, let the students watch the following videos. The last one is in French.
This could be done in conjunction with French lessons in school. If not, tell the story about Pytheas as outlined in the background information. A link to literature or history classes may be established by reading "The Extraordinary Voyage of Pytheas" by B. Cunliffe.

Episode 2: Celestial Navigation (Duration: 4:39) <https://www.youtube.com/watch?v=DoOuSo9qElI>

How did early Sailors navigate the Oceans? | The Curious Engineer (Duration: 6:20) <https://www.youtube.com/watch?v=4DlNhbkPiYY>

World Explorers in 10 Minutes (Duration: 9:59) <https://www.youtube.com/watch?v=iUkOfzhvMMs>

Once upon a time … man: The Explorers - The first navigators (Duration: 23:13) <https://www.youtube.com/watch?v=KuryXLnHsEY>

Pythéas, un Massaliote méconnu (French, duration: 9:57) <https://www.youtube.com/watch?v=knBNHbbu-ao>

Ask the students if they have an idea of how long mankind has been using ships to cross oceans. One may point out the spread of Homo sapiens to islands and isolated continents like Australia.

Possible answers: We know for sure that ships have been used to cross large distances already since 3,000 BCE or earlier. However, the early settlers of Australia must have found a way to cross the oceans around 50,000 BCE.

Ask them what the benefit of trying to explore the seas could have been. Perhaps someone knows historic cultures or peoples that were famous sailors. The teacher can support this with a few examples of ancient seafaring peoples, e.g. from the Mediterranean.

Possible answers: Finding new resources and food, trade, spirit of exploration, curiosity.

Ask the students how they find the way to school every day. What supports their orientation so that they do not get lost? As soon as reference points (buildings, traffic lights, bus stops, etc.) have been mentioned, ask the students how navigators were able to find their way on the seas. In early times, they used sailing directions in connection with landmarks that can be recognised. But for this, the ships would have to stay close to the coast. Lighthouses improved the situation. Magnetic compasses were a rather late invention, around the 11th century CE, and they were not used in Europe before the 13th century. But what could be used as reference points at open sea? Probably the students will soon mention celestial objects like the Sun, the Moon and stars.

Suggested additional questions, especially after showing the introductory videos:

Q: Who was Pytheas? A: He was an ancient Greek scientist and explorer.

Q: Where and when did he live? A: He lived in the 4th century BCE during the Late Bronze Age in Massalia, now Marseille.

Q: Where did he travel? A: Pytheas travelled north along the Atlantic coast of Europe to Britain and probably to the Arctic Circle and Iceland.

Q: What did he observe and discover during his voyage? A: He was the first Greek to travel so far to the north. He noticed that the length of daylight depends on latitude. He was also the first to relate the tides to the phases of the moon.

§.§ Activity 1: Circumpolar constellations and stars

In the absence of a bright star at the celestial poles, ancient navigators were able to find the pole by observing a few circumpolar stars. These navigators were experienced enough to determine true north by recognising the relative positions of such stars and their paths around it. In addition, they used circumpolar constellations and stars to infer their latitude. Circumpolar stars and constellations never rise or set – they are always above the horizon.
While today we can simply measure the elevation of Polaris above the horizon, ancient navigators saw that star many degrees away from the celestial North Pole. In the southern hemisphere, there is no such stellar indicator anyway. So, instead of measuring the elevation of Polaris, they observed which stars and constellations were still visible above the horizon when they attained their lowest elevation above the horizon (lower culmination) during their apparent orbit around the celestial pole.

Let the students watch the two following videos that demonstrate the phenomenon of circumpolar stars and constellations for two locations on Earth. They show the simulated daily apparent rotation of the sky around the northern celestial pole.

CircumpolarStars Heidelberg 49degN (Duration: 0:57) <https://youtu.be/uzeey9VPA48>

CircumpolarStars Habana 23degN (Duration: 0:49) <https://youtu.be/zggfQC_d7UQ>

The students will notice that
* there are always stars and constellations that never set. Those are the circumpolar stars and constellations.
* the angle between the celestial pole (Polaris) and the horizon depends on the latitude of the observer. In fact, these angles are identical.
* the circumpolar region depends on the latitude of the observer. It is bigger for locations closer to the pole.

If the students are familiar with the usage of a planisphere, they can study the same phenomenon by watching the following two videos.

CircumPolarStars phi N20 (Duration: 0:37) <https://youtu.be/Uv-xcdqhV00>

CircumPolarStars phi N45 (Duration: 0:37) <https://youtu.be/VZ6RmdzbpPw>

They show the rotation of the sky for the latitudes 20° and 45°. The transparent area reveals the visible sky for a given point in time. The dashed circle indicates the region of circumpolar stars and constellations.

§.§.§ Questions

Q: What is special about the geographic North and South Poles of the Earth compared to other locations? A: They define the rotation axis of the Earth.

Q: How do you find North and the other cardinal directions without a compass? A: Celestial bodies, e.g. stars like Polaris, which indicates the celestial north pole.

Q: Why does the North Star (Polaris) indicate North? A: In our lifetime, Polaris is close to the celestial north pole.

Q: Where in the sky would the celestial North/South Pole be if you stood exactly at the terrestrial North/South Pole? A: At the zenith, i.e. directly overhead.

Q: How would this position change if you travelled towards the equator? A: Its elevation would decline from the zenith to the horizon.

Q: What are circumpolar constellations? A: These are constellations that revolve around one of the celestial poles and never rise or set. They are always above the horizon.

Q: Which of the visible constellations would be circumpolar if you stood on the terrestrial North/South Pole/equator? A: The entire northern/southern hemisphere (poles). None at the equator.

Q: If the North Star was not visible, how would you be able to determine your latitude anyway? A: Since the circumpolar stars and constellations depend on the latitude, just like the elevation of Polaris, the ones that always stay above the horizon indicate where I am.

§.§.§ Exercise

The task is now to walk in the footsteps of a navigator who lived around 5,000 years ago. Based on those skills, the students will determine the constellations that are circumpolar when observed from given positions on Earth. The table below contains the names of six cities along with their latitudes φ. Negative values indicate southern latitudes.
A seventh row is empty, where the students can add the details of their home town. From this, they will have to calculate the angular radii ϱ from the celestial pole. The calculation is simple, because the angular radius is the same as the pole height, and hence the latitude: ϱ = φ. Then they select the map that matches the hemisphere. The students use the compasses to draw circles of those radii around the corresponding pole. The constellations inside that circle are circumpolar. The constellations that are just fully or partially visible for a given city are added to the table. Possible solutions are added in italics. The table prepared for the exercise is contained in the worksheet.

§.§.§ Detailed instructions

* Determine the map scale. The angular scale is 90° from the poles to the outer circle, i.e. the celestial equator.
* Convert the latitudes in the table into radii in the scale of the maps and add them to the table.
* For each of the cities:
* Select the suitable map.
* Use the compasses to draw a circle with the radius that was determined for that city.
* Find and note the visible circumpolar constellations. If there are too many, just select the most prominent ones.

§.§.§ Discussion

In ancient times, Polaris did not coincide with the celestial North Pole. Explain the importance of circumpolar stars and constellations for ancient navigators.

Possible result: They provided an excellent tool to maintain latitude and helped to not get lost at open sea.

§.§.§ Solutions

The map scale is: 1 cm ∝ 10°.

§.§ Activity 2: Stars guide the way

In the absence of a star like Polaris that indicates a celestial pole, ancient navigators used other stars and constellations to determine cardinal directions and their ship's course. They realised that the positions where they appear and disappear at the horizon (the bearings) do not change during a lifetime. Experienced navigators knew the brightest stars and constellations by heart.

§.§.§ Questions

Q: Can you determine the cardinal directions from other stars than Polaris? Note that there is no star at the South Pole. A: Yes. If you know the stars and constellations, they can guide the way, as they return to the same positions each day.

Q: Why can you use rising and setting stars and constellations to steer a course on sea? A: The position at the horizon when rising and setting does not change (except for a very slow long-term variation).

Q: Would you be able to see the same stars every night during the year? A: No, the time of rise and set changes. Stars visible during winter nights are up during summer daytimes.

§.§.§ Exercise

The students will produce a stellar compass similar to Fig. <ref>. The calculations that are needed to convert the sky coordinates of the stars into horizontal coordinates, i.e. azimuth and elevation, are pretty complex. Therefore, this activity comes with an Excel file that does it for them. It consists of 57 bright stars plus the Pleiades, which is a very prominent group of stars. All they have to do is enter the latitude of their location and the elevation of the stars in the corresponding line at the bottom of the spreadsheet. For the elevation, 10° is a good value. This means they will get the azimuths of the stars when observed at an elevation of 10°. One can also use different values, but this exercise is meant for finding stars that just rise or set. The azimuth is an angle along the horizon, counting clockwise from North. The last two columns (AZ1, AZ2) then display two azimuths, one when the star is rising and one when the star is setting. Note that the distribution of azimuths for rising and setting stars is symmetric relative to the meridian, i.e. the line that connects North and South. The cells that show #NA do not contain valid numbers. These stars never rise or set. They are either circumpolar or below the horizon.
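For classes that work with a programming language instead of (or in addition to) the spreadsheet, the following minimal Python sketch performs the same kind of conversion. It assumes the standard spherical-astronomy relation between declination, latitude, elevation and azimuth; the function name and the example star are our own illustrative choices, not part of the original material.

import math

def rise_set_azimuths(declination_deg, latitude_deg, elevation_deg=10.0):
    """Return (AZ1, AZ2): the azimuths (degrees, clockwise from North) at
    which a star of the given declination crosses the given elevation, or
    None if it never does (circumpolar or always below the horizon --
    the '#NA' cells of the spreadsheet)."""
    d, p, h = (math.radians(x) for x in (declination_deg, latitude_deg, elevation_deg))
    cos_az = (math.sin(d) - math.sin(p) * math.sin(h)) / (math.cos(p) * math.cos(h))
    if abs(cos_az) > 1.0:
        return None
    az = math.degrees(math.acos(cos_az))
    return az, 360.0 - az   # rising (east of the meridian) and setting (west)

# Example: Sirius (declination ~ -16.7 deg) seen from latitude 49° North
print(rise_set_azimuths(-16.7, 49.0))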
The students translate the values into the stellar compass below. They use a protractor and indicate the position of each star on the circle. Then they write its name next to it.

§.§.§ Discussion

One of the methods to navigate through the ancient Mediterranean was to stay close to the shores. Besides the danger of shallow waters, explain why Bronze Age mariners must have had methods that would have allowed them to safely navigate on open waters. You may want to look at a map of the Mediterranean.

Possible answers: The ancient peoples also visited islands for trade or other reasons. Many of them are not visible from the coastlines of the Mediterranean. The voyages would often also last longer than just a few hours. Vessels of that age were able to cover five nautical miles per hour on average. There are also reports that were passed on through the ages which tell us about celestial navigation.

§.§ Activity 3: Do it yourself! (Supplemental)

Nothing is more instructive than actually applying to real conditions what has been learned and exercised in theory. Therefore, the results from the previous two activities can be tested in the field by observing the night sky. This activity can be done by the students themselves at home or as a group event with the class.

Select a clear evening and a site with a good view of the horizon. As soon as it is dark enough to see the stars, let the students use their dimmed torches to inspect their maps with the circumpolar ranges from activity 1. A dimmed torch – even better: a red one – helps to keep the eyes adapted to the dark.

After identifying the brightest stars, let them use their stellar compasses from activity 2. The students should point the markers of one or some of the stars to the stars in the sky. Let them identify North (or South, depending on which celestial pole is visible from your location). If in the northern hemisphere, does this match the direction to the North Star, Polaris? In the southern hemisphere, a magnetic compass might be needed.

Let the students identify the constellations they see in the sky on their maps. Ask them to look North (South in the southern hemisphere) and name the stars and constellations that are just above the horizon. Does this coincide with the maps? Note that there should be a circle that indicates the circumpolar range for the local latitude. Try to highlight that by doing this activity, they are working like the navigators of 4,000 years ago.

§ CONNECTION TO SCHOOL CURRICULUM

This activity is part of the Space Awareness category "Navigation Through The Ages" and related to the curricula topics:
* Coordinate systems
* Basic concepts, latitude, longitude
* Celestial navigation
* Constellations
* Instruments

§ CONCLUSION

The students learn about navigational methods and seafaring of ancient epochs like the Bronze Age. Within two activities (plus one optional supplemental activity), they will learn how the apparent diurnal paths of stars can help to find the cardinal directions and to set course to known destinations in the Mediterranean.

This resource was developed in the framework of Space Awareness. Space Awareness is funded by the European Commission's Horizon 2020 Programme under grant agreement no. 638653.
§ SUPPLEMENTAL MATERIAL

This unit is part of a larger educational package called "Navigation Through the Ages" that introduces several historical and modern techniques used for navigation. An overview (Navigation_through_the_Ages.pdf) is provided via:
<http://www.space-awareness.org/media/activities/attach/b3cd8f59-6503-43b3-a9e4-440bf7abf70f/Navigation%20through%20the%20ages%20compl_z6wSkvW.pdf>

The supplemental material is available online via the Space Awareness project website at <http://www.space-awareness.org>. The direct download links are listed as follows:
* Worksheets (astroedu1645-Ancient-Mediterranean-WS.pdf): <https://drive.google.com/file/d/0Bzo1-KZyHftXNDY2bEktbW5kZG8/view?usp=sharing>
* Excel file (astroedu1645_AncientMediterranean_BrightStars.xlsx): <https://drive.google.com/file/d/0Bzo1-KZyHftXbjYtOE5CbHlkckU/view?usp=sharing>
http://arxiv.org/abs/1708.07700v3
{ "authors": [ "Markus Nielbock" ], "categories": [ "physics.ed-ph", "physics.pop-ph" ], "primary_category": "physics.ed-ph", "published": "20170825114945", "title": "Navigation in the Ancient Mediterranean and Beyond" }
Shannon Entropy Estimation in ∞-Alphabets from Convergence Results

Jorge F. Silva, [email protected], Department of Electrical Engineering, Information and Decision System Group, University of Chile, Santiago, Chile, Off. 508, Ph. 56-2-9784090

December 30, 2023
=================

The problem of Shannon entropy estimation has a long history in information theory, statistics and computer science <cit.>. This problem belongs to the category of scalar functional estimation that has been richly studied in non-parametric statistics. Starting with the finite-size alphabet scenario, the classical plug-in estimate (i.e., the empirical distribution evaluated on the functional) is well known to be consistent, minimax optimal and asymptotically efficient <cit.> when the number of samples n goes to infinity. In the finite alphabet setting, more recent research has been interested in looking at the so-called large alphabet regime, meaning a non-asymptotic under-sampling regime where the number of samples n is on the order of, or even smaller than, the size of the alphabet, denoted by k. In this context, it has been shown that the classical plug-in estimator is sub-optimal, as it suffers from severe bias <cit.>. For characterizing optimality in this high-dimensional context, a non-asymptotic minimax mean square error analysis under finite n and k has been conducted by several authors <cit.>, considering the minimax risk R^*(k,n)[R^*(k,n) = inf_Ĥ(·) sup_μ∈𝒫(k) 𝔼_X_1,..,X_n∼μ^n{(Ĥ(X_1,..,X_n) − H(μ))²}, where 𝒫(k) denotes the collection of probabilities on [k] ≡ {1,..,k}.]. <cit.> first showed that it was possible to construct an entropy estimator that uses a sub-linear sampling size to achieve minimax consistency when k goes to infinity, in the sense that there is a sequence (n_k)=o(k) where R^*(k,n_k) ⟶ 0 as k goes to infinity. A set of results by <cit.> shows that the optimal scaling of the sampling size with respect to k to achieve the aforementioned asymptotic consistency for entropy estimation is O(k/log(k)). A refined set of results on the complete characterization of R^*(k,n), the specific scaling of the sampling complexity, and the achievability of the obtained minimax L_2 risk for the family {𝒫(k): k≥1} with practical estimators has been presented in <cit.>.
Contrasting this set of results, it is well known that the equivalent problem of estimating the distribution consistently (in total variation) in a finite alphabet requires a sampling complexity that scales like O(k). Consequently, the large alphabet results for entropy estimation show that the task of entropy estimation in finite alphabets is simpler than estimating the high-dimensional distribution in terms of sampling complexity. These findings are consistent with the observation that the entropy is a continuous functional on the space of distributions (in the total variational distance sense) for the finite alphabet case <cit.>.

§.§ Infinite Alphabets

In this work we are interested in the infinite alphabet scenario, i.e., in the estimation of the entropy when the alphabet is countably infinite and we have a finite number of samples. This is an infinite alphabet regime, as the size of the alphabet goes unbounded while n is kept finite for the analysis, which contrasts with the large but finite alphabet regime elaborated above. As argued in <cit.>, this is a challenging non-parametric learning problem, because some of the finite alphabet properties of the entropy do not extend to this infinite-dimensional problem. Notably, it has been shown that the Shannon entropy is not a continuous functional with respect to the total variational distance in ∞-alphabets <cit.>. In particular, Ho et al. <cit.> showed concrete examples where convergence in χ²-divergence and in direct information divergence (I-divergence), both stronger than total variational convergence <cit.>, does not imply convergence of the entropy functional. In addition, <cit.> showed the discontinuity of the entropy with respect to the reverse I-divergence <cit.> and, consequently, with respect to the total variational distance[The distinction between reverse and direct I-divergence was pointed out in the work of Barron et al. <cit.>, in the context of distribution estimation.].

In entropy estimation, the discontinuity of the entropy with respect to the divergence implies that the minimax mean square error is unbounded[R^*_n = inf_Ĥ(·) sup_μ∈ℍ(𝕏) 𝔼_X_1,..,X_n∼μ^n{(Ĥ(X_1,..,X_n) − H(μ))²} = ∞, where ℍ(𝕏) denotes the family of finite entropy distributions over the countable alphabet set 𝕏. The proof of this result follows from <cit.> and Le Cam's two-point method <cit.>. The argument is presented in Appendix <ref>.]. Consequently, there is no universal minimax consistent estimator (in the mean square error sense) of the entropy over the family of finite entropy distributions. Considering a point-wise or sample-wise convergence to zero of the estimation error (instead of the worst-case expected error analysis mentioned above), <cit.> showed the remarkable result that the classical plug-in estimate is strongly consistent and consistent in the mean square error sense for any finite entropy distribution, i.e., the simple and straightforward plug-in estimator of the entropy is universal, where convergence to the right limiting value is achieved almost surely despite the discontinuity of the entropy functional. Moving on to the analysis of the (point-wise) rate of convergence of the estimation error, they presented a finite-length lower bound on this error for any arbitrary estimation scheme <cit.>, showing as a corollary that no universal rate of convergence (to zero) can be achieved for entropy estimation in infinite alphabets <cit.>.
Finally, constraining the problem to a family of distributions with a specific power tail bounded condition, <cit.> showed a sharp finite-length expression for the rate of convergence of the estimation error of the classical plug-in estimate.

§.§ From Convergence to Entropy Estimation

Considering the discontinuity of the entropy in ∞-alphabets, this work looks at the problem of point-wise almost sure entropy estimation from the new angle of studying and applying some recent entropy convergence results and their derived bounds in ∞-alphabets <cit.>. Entropy convergence results have established concrete conditions on both the limiting distribution μ and the way a sequence of distributions {μ_n: n≥0} converges to μ such that lim_{n→∞} H(μ_n)=H(μ) is satisfied. The conjecture that motivates this research is that putting these conditions in the context of a learning task, i.e., where {μ_n: n≥0} is a random sequence of distributions driven by the classical empirical process, will offer the possibility of studying new families of plug-in estimates with the objective of deriving new strong consistency and rate of convergence results under some regularity conditions on μ. On the practical side, this work explores a data-driven histogram-based estimator as a key case of study, because this approach offers the flexibility to adapt to the learning task when appropriate bounds for the estimation and approximation errors are derived from the analysis of the problem.

On the specifics, we begin by revisiting the classical plug-in entropy estimator, considering the unexplored and relevant scenario where μ (the data-generating distribution) has a finite but arbitrarily large and unknown support. This was declared to be a challenging problem in <cit.> because of the discontinuity of the entropy. Finite-length (non-asymptotic) deviation inequalities and confidence intervals are derived, extending the results presented in <cit.>. Here it is shown that the classical plug-in estimate achieves optimal rates of convergence. Relaxing the finite support restriction on μ, we present two histogram-based plug-in estimates: one based on the celebrated Barron-Györfi-van der Meulen estimate <cit.>, and the other on a data-driven partition of the space <cit.>. For the Barron plug-in estimate, almost sure consistency is shown for entropy estimation and for distribution estimation in direct I-divergence, under some mild support conditions on μ. On the other hand, for the data-driven partition scheme, we show that the estimator is strongly consistent distribution-free, matching the universal result obtained for the classical plug-in estimate in <cit.>. Furthermore, new almost sure rate of convergence results (in the estimation error) are obtained for distributions with finite but unknown support and for families of distributions with power and exponential tail dominating conditions. In this context, our results show that this adaptive scheme offers optimal and near-optimal rates of convergence, as it approaches arbitrarily closely the rate of convergence O(1/√(n)) that is optimal for the finite alphabet problem <cit.>.

The rest of the paper is organized as follows. Section <ref> introduces some basic concepts and notation and summarizes the entropy convergence results used in this study. Sections <ref>, <ref> and <ref> state the main results of this work. The main technical derivations are presented in Section <ref>. Finally, a summary and final discussion are given in Section <ref>.
The proofs of some auxiliary technical results are relegated to the Appendix.

§ PRELIMINARIES

Let 𝕏 be a countably infinite set, without loss of generality the integers, and let 𝒫(𝕏) denote the collection of probability measures on 𝕏. For μ and v in 𝒫(𝕏), with μ absolutely continuous with respect to v (i.e., μ ≪ v)[i.e., v(A)=0 implies that μ(A)=0 for any event A∈ℬ(𝕏).], dμ/dv(x) denotes the Radon-Nikodym (RN) derivative of μ with respect to v. Every μ∈𝒫(𝕏) is absolutely continuous with respect to the counting measure λ (or the Lebesgue measure)[λ(A)=|A| for all A⊂𝕏.], where its RN derivative is the probability mass function (pmf), f_μ(x) ≡ dμ/dλ(x) = μ({x}), ∀x∈𝕏. Finally, for any μ∈𝒫(𝕏), A_μ ≡ {x∈𝕏: f_μ(x)>0} denotes its support, and ℱ(𝕏) ≡ {μ∈𝒫(𝕏): λ(A_μ)<∞} denotes the collection of probabilities with finite support.

For μ and v in 𝒫(𝕏), the total variation distance between μ and v is given by <cit.>
V(μ,v) ≡ sup_{A∈ℬ(𝕏)} |v(A) − μ(A)|,
where ℬ(𝕏) is a short-hand for the subsets of 𝕏. The Kullback-Leibler divergence, or I-divergence, of μ with respect to v is given by
D(μ||v) ≡ ∑_{x∈A_μ} f_μ(x) log (f_μ(x)/f_v(x))
when μ ≪ v, and D(μ||v) is set to infinity otherwise <cit.>[The well-known Pinsker's inequality offers a relationship between the I-divergence and the variational distance <cit.>: ∀μ,v∈𝒫(𝕏), 2 ln 2 · V(μ,v)² ≤ D(μ||v).]. The Shannon entropy of μ∈𝒫(𝕏) is given by <cit.>:
H(μ) ≡ −∑_{x∈A_μ} f_μ(x) log f_μ(x).
In this context, it is useful to denote by ℍ(𝕏) ⊂ 𝒫(𝕏) the collection of probabilities for which (<ref>) is well defined, by 𝒜𝒞(𝕏|v) ≡ {μ∈𝒫(𝕏): μ ≪ v} the collection of measures absolutely continuous with respect to v∈𝒫(𝕏), and by ℍ(𝕏|v) ⊂ 𝒜𝒞(𝕏|v) the collection of probabilities for which (<ref>) is well defined for v∈𝒫(𝕏).

Concerning convergence, a sequence {μ_n: n∈ℕ} ⊂ 𝒫(𝕏) is said to converge in total variation to μ∈𝒫(𝕏) if lim_{n→∞} V(μ_n,μ)=0. For countable alphabets, <cit.> shows that the convergence in total variation is equivalent to the weak convergence[{μ_n: n∈ℕ}⊂𝒫(𝕏) is said to converge weakly to μ∈𝒫(𝕏) if for any bounded function g(·): 𝕏→ℝ, lim_{n→∞} ∑_{x∈𝕏} g(x) f_{μ_n}(x) = ∑_{x∈𝕏} g(x) f_μ(x).], which is denoted here by μ_n ⇒ μ, and to the point-wise convergence of the pmf's. Furthermore, from (<ref>), the convergence in total variation implies the uniform convergence of the pmf's, i.e., lim_{n→∞} sup_{x∈𝕏} |μ_n({x}) − μ({x})| = 0. Therefore, in this countable case, all four previously mentioned notions of convergence are equivalent: total variation; weak convergence; point-wise convergence of the pmf's; and uniform convergence of the pmf's. We conclude with the convergence in I-divergence introduced by <cit.>. We say that {μ_n: n∈ℕ} converges to μ in direct and reverse I-divergence if lim_{n→∞} D(μ||μ_n)=0 and lim_{n→∞} D(μ_n||μ)=0, respectively. From Pinsker's inequality, the convergence in I-divergence implies the weak convergence in (<ref>), where it is known that the converse is not true <cit.>.
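As a concrete illustration of these definitions (ours, for finitely supported pmfs represented as Python dictionaries; not part of the formal development), the following sketch evaluates the three quantities above. It uses the fact that on a countable alphabet the total variation distance equals half of the ℓ_1 distance between the pmfs.

import math

def tv_distance(p, q):
    # V(p, q): on countable alphabets, half the l1 distance between the pmfs
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

def kl_divergence(p, q):
    # D(p||q) in bits; infinite when p is not absolutely continuous w.r.t. q
    d = 0.0
    for x, px in p.items():
        if px > 0.0:
            if q.get(x, 0.0) == 0.0:
                return math.inf
            d += px * math.log2(px / q[x])
    return d

def entropy(p):
    # Shannon entropy H(p) in bits
    return -sum(px * math.log2(px) for px in p.values() if px > 0.0)

mu = {0: 0.5, 1: 0.5}
nu = {0: 0.5, 1: 0.25, 2: 0.25}
print(tv_distance(mu, nu), kl_divergence(mu, nu), entropy(mu))  # 0.25 0.5 1.0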
§.§ Convergence results for the Shannon entropy

The discontinuity of the entropy raises the problem of finding conditions under which convergence of the entropy can be obtained in infinite alphabets. On this topic, Ho et al. <cit.> have studied the interplay between the entropy and the total variation distance, stipulating conditions for convergence by assuming a finite support on the involved distributions. On the other hand, <cit.> obtained convergence of the entropy by imposing a power dominating condition <cit.> on the limiting probability measure μ, for all the sequences {μ_n: n≥0} converging in reverse I-divergence to μ <cit.>. More recently, <cit.> have addressed the entropy convergence by studying a number of new settings that involve conditions on the limiting measure μ, as well as on the way the sequence {μ_n: n≥0} converges to μ in the space of distributions. These convergence results offer sufficient conditions under which the entropy evaluated on a sequence of distributions converges to the entropy of its limiting distribution and, consequently, the possibility of applying these results when analyzing plug-in entropy estimators. The results used in this work are summarized in the rest of this section.

Let us begin with the case when μ∈ℱ(𝕏), i.e., when the support of the limiting measure is finite and unknown.

Let us assume that μ∈ℱ(𝕏) and {μ_n: n∈ℕ} ⊂ 𝒜𝒞(𝕏|μ). If μ_n ⇒ μ, then lim_{n→∞} D(μ_n||μ)=0 and lim_{n→∞} H(μ_n)=H(μ).

This result is well known, because when A_{μ_n} ⊂ A_μ for all n the scenario reduces to the finite alphabet case, in which the entropy is known to be continuous <cit.>. Because we obtain two inequalities used in the rest of the exposition, a proof is provided here.

μ and μ_n belong to ℍ(𝕏) from the finite-support assumption. The same reason can be used to show that D(μ_n||μ)<∞, since μ_n ≪ μ for all n. Let us consider the following identity:
H(μ) − H(μ_n) = ∑_{x∈A_μ} (f_{μ_n}(x) − f_μ(x)) log f_μ(x) + D(μ_n||μ).
The first term on the right-hand side (RHS) of (<ref>) is upper bounded by 𝐌_μ · V(μ_n,μ), where 𝐌_μ = log(1/𝐦_μ) ≡ sup_{x∈A_μ} |log μ({x})| < ∞. For the second term, we have that
D(μ_n||μ) ≤ log e · ∑_{x∈A_{μ_n}} f_{μ_n}(x) |f_{μ_n}(x)/f_μ(x) − 1| ≤ (log e/𝐦_μ) · sup_{x∈A_μ} |f_{μ_n}(x) − f_μ(x)| ≤ (log e/𝐦_μ) · V(μ_n,μ),
and, consequently,
|H(μ) − H(μ_n)| ≤ [𝐌_μ + log e/𝐦_μ] · V(μ_n,μ).

Under the assumptions of Proposition <ref>, we note that the reverse I-divergence and the entropy difference are bounded by the total variation through (<ref>) and (<ref>), respectively. Note, however, that these bounds are distribution-dependent, as functions of 𝐦_μ (𝐌_μ) in (<ref>).[It is simple to note that 𝐦_μ>0 (equivalently, 𝐌_μ<∞) if, and only if, μ∈ℱ(𝕏).]

The next result relaxes the assumption that μ_n ≪ μ and offers a necessary and sufficient condition for the convergence of the entropy.

<cit.> Let μ∈ℱ(𝕏) and {μ_n: n∈ℕ} ⊂ ℱ(𝕏). If μ_n ⇒ μ, then there exists N>0 such that μ ≪ μ_n for all n≥N, and lim_{n→∞} D(μ||μ_n)=0. Furthermore, lim_{n→∞} H(μ_n)=H(μ) if, and only if,[μ(·|B) in (<ref>) denotes the conditional probability of μ given the event B⊂𝕏]
lim_{n→∞} μ_n(A_{μ_n}∖A_μ) · H(μ_n(·|A_{μ_n}∖A_μ)) = 0 ⇔ lim_{n→∞} ∑_{x∈A_{μ_n}∖A_μ} f_{μ_n}(x) log(1/f_{μ_n}(x)) = 0.

Lemma <ref> tells us that in order to achieve entropy convergence (on top of the weak convergence), it is necessary and sufficient to ask for a vanishing expression (with n) of the entropy of μ_n restricted to the elements of the set A_{μ_n}∖A_μ. Two remarks about this result: 1) The convergence in direct I-divergence does not imply the convergence of the entropy[Concrete examples are presented in <cit.> and <cit.>.]. 2) Under the assumption that μ∈ℱ(𝕏), μ is eventually absolutely continuous with respect to μ_n, and the convergence in total variation is equivalent to the convergence in direct I-divergence.

We conclude this section with the case when the support of μ is infinite and unknown, i.e., |A_μ|=∞. In this context, we highlight two results:

<cit.> Let us consider that μ∈ℍ(𝕏) and {μ_n: n≥0} ⊂ 𝒜𝒞(𝕏|μ).
If μ_n ⇒ μ and
M ≡ sup_{n≥1} sup_{x∈A_μ} f_{μ_n}(x)/f_μ(x) < ∞,
then μ_n ∈ ℍ(𝕏) ∩ ℍ(𝕏|μ) for all n, and it follows that lim_{n→∞} D(μ_n||μ)=0 and lim_{n→∞} H(μ_n)=H(μ).

Interpreting Lemma <ref>, to obtain the convergence of the entropy functional without imposing a finite support assumption on μ, a uniform bounding condition (UBC) μ-almost everywhere was added in (<ref>). This UBC allows the use of the dominated convergence theorem <cit.>, and it is strictly needed in that sense <cit.>. Moreover, by adding this UBC, the convergence in reverse I-divergence is also obtained as a byproduct.

Finally, when μ ≪ μ_n for all n, we consider the following result:

<cit.> Let μ∈ℍ(𝕏) and a sequence of measures {μ_n: n≥1} ⊂ ℍ(𝕏) such that μ ≪ μ_n for all n≥1. If μ_n ⇒ μ and
sup_{n≥1} sup_{x∈A_μ} |log f_{μ_n}(x)/f_μ(x)| < ∞,
then μ∈ℍ(𝕏|μ_n) for all n≥1, and lim_{n→∞} D(μ||μ_n)=0. Furthermore, lim_{n→∞} H(μ_n)=H(μ) if, and only if, lim_{n→∞} ∑_{x∈A_{μ_n}∖A_μ} f_{μ_n}(x) log(1/f_{μ_n}(x)) = 0.

This result shows again the non-sufficiency of the convergence in direct I-divergence to achieve entropy convergence in the regime where μ ≪ μ_n. In fact, Lemma <ref> may be interpreted as an extension of Lemma <ref> when we relax the finite support assumption on μ.

§ SHANNON ENTROPY ESTIMATION

Let μ be a probability in ℍ(𝕏), and let us denote by X_1,X_2,X_3,… the empirical process induced from i.i.d. realizations of a random variable driven by μ, i.e., X_i ∼ μ for all i≥0. Let ℙ_μ denote the distribution of the empirical process on (𝕏^∞,ℬ(𝕏^∞)), and let ℙ^n_μ denote the finite-block distribution of X^n_1 ≡ (X_1,...,X_n) on the product space (𝕏^n,ℬ(𝕏^n)). Given a realization of X_1,X_2,X_3,…,X_n, we can construct a histogram-based estimator like the classical empirical probability given by:
μ̂_n(A) ≡ (1/n) ∑_{k=1}^n 1_A(X_k), ∀A⊂𝕏,
with pmf denoted by f_{μ̂_n}(x) = μ̂_n({x}) for all x∈𝕏. A natural estimator of the entropy is the plug-in estimate of μ̂_n, given by
H(μ̂_n) = −∑_{x∈𝕏} f_{μ̂_n}(x) log f_{μ̂_n}(x),
which is a measurable function of X_1,…,X_n.[This dependency on the data will be implicit for the rest of the exposition.] For the rest of the exposition, we use the convergence results in Section <ref> to derive strong consistency results for plug-in histogram-based estimates, like H(μ̂_n) in (<ref>), as well as finite-length concentration inequalities to obtain almost sure rates of convergence for the estimation error |H(μ̂_n) − H(μ)|.

§.§ Revisiting the classical plug-in estimator for finite and unknown supported distributions

We start by analyzing the case where μ has a finite and unknown support. A consequence of the strong law of large numbers <cit.> is that ∀x∈𝕏, lim_{n→∞} μ̂_n({x}) = μ({x}), ℙ_μ-almost surely (a.s.), hence lim_{n→∞} V(μ̂_n,μ)=0, ℙ_μ-a.s. On the other hand, it is clear that A_{μ̂_n} ⊂ A_μ with probability one. Then, adopting Proposition <ref>, it follows that lim_{n→∞} D(μ̂_n||μ)=0 and lim_{n→∞} H(μ̂_n)=H(μ), ℙ_μ-a.s., i.e., μ̂_n is a strongly consistent estimator of μ in reverse I-divergence, and H(μ̂_n) is a strongly consistent estimate of H(μ), distribution-free in ℱ(𝕏). Furthermore, we can state the following:

Let μ∈ℱ(𝕏) and μ̂_n be as in (<ref>). Then μ̂_n ∈ ℍ(𝕏) ∩ ℍ(𝕏|μ), ℙ_μ-a.s., and ∀n≥1, ∀ϵ>0,
ℙ^n_μ(D(μ̂_n||μ) > ϵ) ≤ 2^{|A_μ|+1} · e^{−2𝐦_μ² n ϵ²/(log e)²},
ℙ^n_μ(|H(μ̂_n) − H(μ)| > ϵ) ≤ 2^{|A_μ|+1} · e^{−2nϵ²/(𝐌_μ + log e/𝐦_μ)²}.
Moreover, D(μ||μ̂_n) is eventually well defined with probability one, and ∀ϵ>0 and for any n≥1,
ℙ^n_μ(D(μ||μ̂_n) > ϵ) ≤ 2^{|A_μ|+1} · [e^{−2nϵ²/((log e)²·(1/𝐦_μ+1)²)} + e^{−n𝐦_μ²}].
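To make the construction concrete, the following minimal Python sketch (ours, purely illustrative and not part of the formal development) computes the plug-in estimate H(μ̂_n) defined above from i.i.d. samples; the sampling example is a geometric distribution on the non-negative integers, whose entropy is exactly 2 bits.

import math
import random
from collections import Counter

def plugin_entropy(samples):
    """Classical plug-in estimate: the empirical pmf evaluated on the
    Shannon entropy functional (in bits)."""
    n = len(samples)
    counts = Counter(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# i.i.d. samples from the geometric pmf f(k) = 2**-(k+1), k = 0, 1, 2, ...
# (its entropy is 2 bits, so the estimate should be close to 2)
data = [int(math.log(1.0 - random.random()) / math.log(0.5)) for _ in range(10000)]
print(plugin_entropy(data))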
This result implies that for any τ∈(0,1/2) and μ∈ℱ(𝕏), |H(μ̂_n) − H(μ)|, D(μ̂_n||μ) and D(μ||μ̂_n) go to zero as o(n^{−τ}), ℙ_μ-a.s. Furthermore, 𝔼_{ℙ^n_μ}(|H(μ̂_n) − H(μ)|) and 𝔼_{ℙ^n_μ}(D(μ̂_n||μ)) behave like O(1/√(n)) for all μ∈ℱ(𝕏) from (<ref>) in Sec. <ref>, which is the optimal rate of convergence of the finite alphabet scenario.

As a direct corollary of (<ref>), it is possible to derive confidence intervals for the estimation error |H(μ̂_n) − H(μ)|: for all δ>0 and n≥1,
ℙ_μ( |H(μ̂_n) − H(μ)| ≤ (𝐌_μ + log e/𝐦_μ) · √((1/2n) · ln(2^{|A_μ|+1}/δ)) ) ≥ 1−δ.
This confidence interval behaves like O(1/√(n)) as a function of n, and like O(√(ln(1/δ))) as a function of δ, which are the same optimal asymptotic trends that can be obtained for V(μ,μ̂_n) in (<ref>).

To conclude, we note that A_{μ̂_n} ⊂ A_μ, ℙ^n_μ-a.s., where for any n≥1, ℙ^n_μ(A_{μ̂_n} ≠ A_μ) > 0, implying that 𝔼_{ℙ^n_μ}(D(μ||μ̂_n)) = ∞ for all n. Then, even in the finite and unknown supported scenario, μ̂_n is not consistent in expected direct I-divergence, which is congruent with the result in <cit.>. Besides this negative result, strong consistency in direct I-divergence can be obtained from (<ref>), in the sense that lim_{n→∞} D(μ||μ̂_n)=0, ℙ_μ-a.s.

§.§ A simplified version of the Barron estimator for finite supported measures

It is well understood that consistency in expected direct I-divergence is of critical importance for the construction of a lossless universal source coding scheme <cit.>. Here we explore an estimator that achieves this learning objective in addition to entropy estimation. For that, let μ∈ℱ(𝕏), and let us assume that we know a measure v∈ℱ(𝕏) such that μ ≪ v. <cit.> proposed a modified version of the empirical measure in (<ref>) to estimate μ from i.i.d. realizations, adopting a mixture estimate of the form
μ̃_n(B) = (1−a_n) · μ̂_n(B) + a_n · v(B),
for all B⊂𝕏, and with (a_n)_{n∈ℕ} a sequence of real numbers in (0,1). Note that supp(μ̃_n) = A_v; then μ ≪ μ̃_n for all n[From the finite support assumption, H(μ̃_n)<∞ and D(μ||μ̃_n)<∞, ℙ_μ-a.s.]. The following result derives from the convergence result in Lemma <ref>.

Let v∈ℱ(𝕏) and μ∈𝒜𝒞(𝕏|v), and let us consider μ̃_n in (<ref>), with respect to v, induced from i.i.d. realizations of μ. i) If (a_n) is o(1), then lim_{n→∞} H(μ̃_n)=H(μ), lim_{n→∞} D(μ||μ̃_n)=0, ℙ_μ-a.s., and lim_{n→∞} 𝔼_{ℙ_μ}(D(μ||μ̃_n))=0. ii) Furthermore, if (a_n) is O(n^{−p}) with p>2, then for all τ∈(0,1/2), |H(μ̃_n) − H(μ)| and D(μ||μ̃_n) are o(n^{−τ}), ℙ_μ-a.s., and 𝔼_{ℙ_μ}(|H(μ̃_n) − H(μ)|) and 𝔼_{ℙ_μ}(D(μ||μ̃_n)) are O(1/√(n)).
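As an illustration of the mixture construction above (ours; the choice a_n = n^{−3} is one instance of condition ii) in the theorem, and the samples are assumed to take values in A_v, i.e., μ ≪ v), the following Python sketch computes H(μ̃_n) for a known finite-support reference measure v:

import math
from collections import Counter

def barron_mixture_entropy(samples, v_pmf, p=3.0):
    """Entropy (bits) of mu_tilde_n = (1 - a_n) mu_hat_n + a_n v with
    a_n = n**(-p); v_pmf is a dict {symbol: v({x})} with finite support,
    and the samples are assumed to take values inside that support."""
    n = len(samples)
    a_n = n ** (-p)
    counts = Counter(samples)
    h = 0.0
    for x, v_x in v_pmf.items():
        f = (1.0 - a_n) * counts.get(x, 0) / n + a_n * v_x   # pmf of mu_tilde_n
        h -= f * math.log2(f)   # f > 0 on all of A_v since a_n > 0
    return h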
We wantto construct a strongly consistent estimate of the entropyrestricted to the collection of probabilities inℍ(𝕏|v). For that, let us consider a sequence (h_n)_n≥ 0 with values in (0,1) and let us denote by π_n={A_n,1,A_n,2,…, A_n,m_n} the finite partition of 𝕏with maximal cardinality satisfying that v(A_n,i) ≥ h_n,∀ i ∈{1,..,m_n}.Note that m_n=|π_n|≤ 1/h_n for all n≥ 1,and because ofthe fact that inf_x∈ A_v v({x})=0 it is simple to verify thatif (h_n) is o(1) and then lim_n →∞ m_n=∞. Note that π_n offers an approximated statistically equivalent partition of 𝕏 with respect to the reference measure v.In this context,given X_1,…,X_n,i.i.d. realizations ofμ∈ℍ(𝕏|v), the idea proposed by <cit.> was to estimate the RN derivative d μ/d v(x)by the followinghistogram-based construction:d μ^*_n/d v (x) = (1-a_n) ·μ̂_n(A_n(x))/v(A_n(x)) + a_n,∀ x ∈ A_v,where a_n is a real number in (0,1), A_n(x) denotes the cell in π_n that contains the point x, and μ̂_n is theempirical measure in (<ref>). Note thatf_μ^*_n(x)=d μ^*_n/d λ(x)=f_v(x)·[(1-a_n) ·μ̂_n(A_n(x))/v(A_n(x)) + a_n],∀ x ∈𝕏,and, consequently,∀ B ⊂𝕏 μ^*_n(B)=(1-a_n) ∑_i=1^m_nμ̂_n(A_n,i)·v(B∩ A_n,i)/v(A_n,i) +a_nv(B).By construction A_μ⊂ A_v ⊂ supp(μ^*_n) and, consequently,μ≪μ^*_n for all n≥ 1. The next result shows sufficient conditions on the sequences (a_n) and (h_n)to guarantee a strongly consistent estimateof the entropy H(μ) and of μ in direct I-divergence,distributionfree in ℍ(𝕏|v).The proof is based on verifying that the sufficient conditions of Lemma <ref> are satisfied ℙ_μ-a.s. Let v be in𝒫(𝕏) ∩ℍ(𝕏) with infinite support,and let us consider μ in ℍ(𝕏|v). If we have that: i)(a_n) is o(1) and (h_n) is o(1), ii) ∃τ∈ (0,1/2), such that the sequence(1/a_n· h_n) is o(n^τ), then μ∈ℍ(𝕏) ∩ℍ(𝕏|μ^*_n) for all n≥ 1 and lim_n →∞H( μ^*_n)=H(μ) and lim_n →∞D(μ || μ^*_n)=0,ℙ_μ-a.s..The Barron estimator <cit.>was originally proposedin the context of distributions defined in an abstract measurable space.Then if we restrict <cit.> to our countable alphabet case, the following result isobtained: <cit.>Letus consider v ∈𝒫(𝕏) andμ∈ℍ(𝕏|v).If (a_n) is o(1), (h_n) is o(1) andlimsup_n →∞1/n a_n h_n≤ 1then lim_n →∞ D(μ ||μ^*_n)=0, ℙ_μ-a.s. Therefore, when the objective is the estimation of distributions consistently in direct I-divergence,Corollary <ref> should be considered to be a better result[Corollary <ref> offers weaker conditions than Theorem <ref> (in particular condition ii)).]. On the other hand, the proof of Theorem <ref> is based on verifying the sufficient conditions of Lemma <ref>,where the objective is to achieve the convergence of the entropy,and as a consequence, theconvergence in direct I-divergences. Therefore,we can say that the stronger conditions of Theorem <ref> are needed when the objective is entropy estimation. This can be justified from the fact that convergence in direct I-divergence does not imply entropy convergence in the countable case,as was discussed in Section <ref> (see, Lemmas <ref> and <ref>).§ A DATA-DRIVEN HISTOGRAM-BASED ESTIMATORData-driven partitions can approximate better the nature of the empirical distribution in the sample space with few quantization bins <cit.>. They has the flexibility to improve the approximation quality of histogram-based estimates and from that obtain betterperformances in different non-parametric learning tasks <cit.>. One of the basic principle of this approachis to partition 𝕏 into data-dependent cells in order to preserve a critical number of samples per cell. 
This last condition will be crucial to derive a compromise between estimation and approximation errors that will be used in the proof of two of the main results of this section (Theorems <ref> and <ref>). Given X_1,..,X_n i.i.d. realizations driven by μ∈ℍ(𝕏) and ϵ>0, let us define the data-driven set Γ_ϵ≡{x ∈𝕏: μ̂_n({x })≥ϵ}, and ϕ_ϵ≡Γ_ϵ^c. Let Π_ϵ≡{{x }: x ∈Γ_ϵ}∪{ϕ_ϵ}⊂ℬ(𝕏) be a data-driven partition with maximal resolution in Γ_ϵ, and σ_ϵ≡σ(Π_ϵ) be the smallest sigma field that contains Π_ϵ[As Π_ϵ is a finite partition, σ_ϵ is the collection of sets that are unions of elements of Π_ϵ.]. We propose the conditional empirical measure restricted to Γ_ϵ, given by: μ̂_n,ϵ≡μ̂_n(· |Γ_ϵ). Note that by construction supp(μ̂_n,ϵ)=Γ_ϵ⊂ A_μ, ℙ_μ-a.s., and consequently μ̂_n,ϵ≪μ for all n≥ 1. Furthermore, | Γ_ϵ| ≤ 1/ϵ and, importantly in the context of the entropy functional, we have that 𝐦_μ̂_n^ϵ≡inf_x∈Γ_ϵμ̂_n({x })≥ϵ. The next result establishes a mild sufficient condition on (ϵ_n) for which the plug-in estimate H(μ̂_n,ϵ_n) is strongly consistent distribution-free in ℍ(𝕏). Considering that we are in the regime where μ̂_n,ϵ_n≪μ, ℙ_μ-a.s., the proof uses Lemma <ref> as a central result. If (ϵ_n) is O(n^-τ) with τ∈ (0,1), then for all μ∈ℍ(𝕏) lim_n →∞H(μ̂_n,ϵ_n)=H(μ), ℙ_μ-a.s. This theorem tells us that this construction offers a universal estimation of the entropy in the strong almost sure sense, as long as the design parameter τ belongs to (0,1). Complementing Theorem <ref>, the next result offers almost sure rates of convergence for a family of distributions with a power tail bounded condition (TBC). In particular, we consider the family of distributions studied by <cit.> in the context of characterizing the rates of convergence for the classical plug-in estimate. Let us assume that for some p>1 there are two constants 0<k_0 ≤ k_1 such that k_0· x^-p≤μ ( { x})≤ k_1 x^-p for all x∈𝕏. If we consider (ϵ_n)=(n^-τ^*) for τ^*=1/(2+1/p), then |H(μ) - H(μ̂_n,ϵ_n)| is O(n^-(1-1/p)/(2+1/p)), ℙ_μ-a.s. This result states that under the mentioned p-power TBC on f_μ(·), the plug-in estimate H(μ̂_n,ϵ_n) can offer a rate of convergence to the true limit that is O(n^-(1-1/p)/(2+1/p)) with probability one. The proof of this result sets the approximation sequence (ϵ_n) as a function of p, by finding an optimal tradeoff between estimation and approximation errors while performing a finite length (non-asymptotic) analysis of the expression |H(μ) - H(μ̂_n,ϵ_n)| (the details of this analysis are presented in Section <ref>). It is insightful to look at two extreme regimes in this result: p approaching 1, in which the rate is arbitrarily slow (approaching a non-decaying behavior); and p →∞, where |H(μ) - H(μ̂_n,ϵ_n)| is O(n^-q) for all q∈ (0,1/2) ℙ_μ-a.s. This last power decaying range q∈ (0,1/2) matches what can be achieved for the finite alphabet scenario in Theorem <ref> (see Eq. (<ref>)), which is known to be the optimal rate for finite alphabets. Extending Theorem <ref>, the following result addresses the more constrained case of distributions with an exponential TBC. Let us consider α>0 and let us assume that there are k_0,k_1 with 0<k_0 ≤ k_1 and N>0 such that k_0 · e^-α x≤μ ( { x})≤ k_1 · e^-α x for all x ≥ N. If we consider (ϵ_n)=(n^-τ) with τ∈ (0,1/2), then |H(μ) - H(μ̂_n,ϵ_n)| is O(n^-τlog n), ℙ_μ-a.s. Under this more stringent TBC on f_μ(·), we note that |H(μ) - H(μ̂_n,ϵ_n)| is o(n^-q) ℙ_μ-a.s., for any arbitrary q ∈ (0,1/2), by selecting (ϵ_n)=(n^-τ) with q<τ<1/2. This last condition on τ is universal over α>0. 
Remarkably for any distribution with this exponential TBC, we can approximate (arbitrarely closely)the optimal almost sure rate of convergence achieved for the finite alphabet scenario.Finally,we revisit the finite and unknown supported scenario,where we show that the data-driven estimate exhibit the same optimal almost sure convergence rate performance of the classical plug-in entropy estimate studied inSec. <ref>. Let us assume that μ∈ℱ(𝒳) and (ϵ_n) being o(1). Then for all ϵ>0 there is N>0 such that∀ n≥ N ℙ^n_μ( |H(μ̂_n,ϵ_n)- H(μ) |> ϵ) ≤ 2^|A_μ|+1·[ e^-2 n ϵ ^2/(𝐌_μ + log e/𝐦_μ)^2 + e^- n 𝐦_μ^2/4]. The proof of this result reduces to verify that μ̂_n,ϵ_n detects A_μ almost surely when n goes to infinity and from this, it follows that H(μ̂_n,ϵ_n) matches the optimal almost sure performance of H(μ̂_n) under the assumption that μ∈ℱ(𝒳).In particular,(<ref>) implies that |H(μ̂_n,ϵ_n)- H(μ) | is o(n^-q) almost surely for all q∈ (0,1/2) as long as ϵ_n → 0with n. § PROOFS OF THE MAIN RESULTS §.§ Theorem <ref>: Letμ be in ℱ(𝕏),then |A_μ|≤ k for some k>1. FromHoeffding's inequality<cit.> ∀ n≥ 1, and for any ϵ>0, ℙ^n_μ(V(μ̂_n, μ) > ϵ) ≤ 2^k+1· e^- 2n ϵ^2 and 𝔼_ℙ^n_μ(V(μ̂_n, μ)) ≤ 2 √((k+1)log 2/n). Considering that μ̂_n≪μ ℙ_μ-a.s, we can use Proposition <ref> to obtain that D(μ̂_n||μ) ≤log e/𝐦_μ· V(μ̂_n,μ),and |H((μ̂_n)- H(μ_n)|≤[ 𝐌_μ+log e/𝐦_μ] · V(μ̂_n,μ). Hence,(<ref>) and (<ref>) derive from (<ref>). For the direct I-divergence, let us consider a sequence (x_i)_i≥ 1 and the following function (a stopping time): T_o(x_1,x_2,…) ≡inf{n≥ 1: A_μ̂_n(x^n_1)=A_μ}, i.e, T_o(x_1,x_2,…) is the point where the support of μ̂_n(x^n_1) is equal to A_μ and, consequently, the direct I-divergence is well-defined (since μ∈ℱ(𝕏)). [By the uniform convergence of μ̂_n to μ_n (ℙ_μ-a.s.) and the finite support assumption of μ,itissimple to verify that ℙ_μ(T_o(X_1,X_2,…) < ∞)=1.] Let us define the event: ℬ_n ≡{x_1,x_2,..:T_o(x_1,x_2,..)≤ n}⊂𝕏^ℕ, i.e.,the collection of sequences in 𝕏^ℕ where at time n, A_μ̂_n=A_μ and, consequently, D(μ||μ̂_n) < ∞. Restricted to this set, D(μ || μ̂_n)≤∑_x ∈ A_μ̂_n||μ f_μ̂_n(x) logf_μ̂_n(x)/f_μ(x) + ∑_x ∈ A_μ∖ A_μ̂_n || μ f_μ̂_n(x) logf_μ(x)/f_μ̂_n(x)≤log e ·∑_x ∈ A_μ̂_n||μf_μ̂_n(x) ·(f_μ̂_n(x)/f_μ(x) -1)+ log e ·[μ(A_μ∖ A_μ̂_n || μ) - μ̂_n((A_μ∖ A_μ̂_n || μ))]≤log e·(1/𝐦_u+1) V(μ, μ̂_n), where in the first inequalityA_μ̂_n||μ≡{x ∈ A_μ̂_n:f_μ̂_n(x) > f_μ(x) }, and the last is obtained by the definition of the total variational distance. In addition,let usdefine the ϵ-deviation set 𝒜^ϵ_n ≡{x_1,x_2,...:D(μ||μ̂_n(x^n_1))>ϵ}⊂𝕏^ℕ. Then by additivity and monotonicity of ℙ_μ,we have that ℙ_μ(𝒜^ϵ_n) ≤ℙ_μ(𝒜^ϵ_n ∩ℬ_n) + ℙ_μ (ℬ_n^c). By definition of ℬ_n,(<ref>) and (<ref>) ℙ_μ(𝒜^ϵ_n ∩ℬ_n)≤ℙ_μ (V(μ||μ̂_n) log e·(1/𝐦_u+1) >ϵ)≤ 2^|A_μ| +1· e^-2 n ϵ^2/log e^2·(1/𝐦_u+1)^2. On the other hand,∀ϵ_o ∈ (0,𝐦_μ) if V(μ, μ̂_n) ≤ϵ_o then T_o≤ n. Consequently ℬ^c_n ⊂{x_1,x_2,..:V(μ, μ̂_n(x^n_1))>ϵ_o}, and again from (<ref>) ℙ_μ(ℬ_n^c)≤ 2^|A_μ|+1· e^-2 n ϵ_o^2, for all n≥ 1 and ∀ϵ_o∈ (0,𝐦_μ). Integrating the results in(<ref>) and (<ref>) and considering ϵ_0 =𝐦_μ/√(2),suffices to show(<ref>). §.§ Theorem <ref> As (a_n) is o(1),it is simple to verify thatlim_n →∞V(μ̃_n, μ)=0, ℙ_μ-a.s. Also note that the support disagreement between μ̃_n and μ is bounded by the hypothesis, thenlim_n →∞μ̃_n (A_μ_n∖ A_μ) ·log|A_μ̃_n∖ A_μ| ≤lim_n →∞μ̃_n (A_μ_n∖ A_μ) ·log|A_v|=0, ℙ_μ-a.s.Therefore from Lemma <ref>,we have the strong consistency ofH(μ̃_n) and the almost sure convergence of D( μ||μ̃_n) to zero. 
Note that D( μ||μ̃_n) is uniformly upper bounded by log e · (1/𝐦_μ+1) V(μ, μ̃_n)(see (<ref>) in the proof of Theorem <ref>).Then the convergence in probability of D( μ||μ̃_n) implies theconvergence of its mean <cit.>, which concludes the proof of the first part.Concerning rate of convergences, we use the following:H(μ )- H(μ̃_n)= ∑_x ∈ A_μ∩ A_μ̃_n[f_μ̃_n(x)-f_μ(x)]log f_μ(x) + ∑_x∈ A_μ∩ A_μ̃_n f_μ̃_n(x) logf_μ̃_n(x)/f_μ(x)-∑_x∈ A_μ̃_n∖ A_μ f_μ̃_n(x) log1/f_μ̃_n(x).The absolute value of the first term in the right hand side (RHS) of (<ref>) isbounded by 𝐌_μ· V(μ̃_n,μ) and the second term is boundedby log e/𝐦_μ· V(μ̃_n,μ), from the assumption that μ∈ℱ(𝕏). For the last term, note that f_μ̃_n(x)=a_n· v({ x}) for allx ∈ A_μ̃_n∖ A_μ and that A_μ̃_n =A_v, then 0≤∑_x∈ A_μ̃_n∖ A_μ f_μ̃_n(x) log1/f_μ̃_n(x)≤ a_n· (H(v) + log1/a_n· v(A_v∖ A_μ)).On the other hand,V(μ̃_n,μ) =1/2∑_x∈ A_μ| (1-a_n)μ̂_n({ x}) +a_n v({ x})- μ({ x})| + ∑_x ∈ A_v∖ A_μ a_n v({ x}).≤ (1-a_n)· V(μ̂_n, μ) +a_n.Integrating these bounds in (<ref>),|H(μ )- H(μ̃_n)| ≤ (𝐌_μ + log e/𝐦_μ) · ((1-a_n)· V(μ̂_n, μ) +a_n) + a_n · H(v) + a_n·log1/a_n= K_1· V(μ̂_n, μ) + K_2· a_n + a_n·log1/a_n,forconstants K_1>0 and K_2>0 function of μ and v.Under the assumption that μ∈ℱ(𝕏),the Hoeffding's inequality <cit.> tells us thatℙ_μ(V(μ̂_n, μ) >ϵ) ≤ C_1 · e^-C_2 n ϵ^2 (for some distribution free constants C_1>0 and C_2>0). From this inequality,V(μ̂_n, μ) goes to zero as o(n^-τ) ℙ_μ-a.s. ∀τ∈ (0,1/2) and 𝔼_ℙ_μ(V(μ̂_n, μ)) is O(1/√(n)). On the other hand, under the assumption in ii) (K_2· a_n + a_n·log1/a_n) is O(1/√(n)), which from (<ref>) proves the convergence rate results for |H(μ )- H(μ̃_n)|.Considering the direct I-divergence, D(μ|| μ̃_n) ≤log e ·∑_x ∈ A_μ f_μ(x) |f_μ(x)/f_μ̃_n(x)-1| ≤log e/𝐦_μ̃_n· V(μ̃_n,μ),then the uniform convergence ofμ̃_n({x}) to μ({x}) ℙ_μ-a.s. in A_μ, and the fact that |A_μ|<∞, imply thatfor an arbitrary smallϵ>0 (in particular smaller than 𝐦_μ), lim_n ⟶∞ D(μ|| μ̃_n)≤log e/𝐦_μ-ϵ·lim_n ⟶∞V(μ̃_n,μ),ℙ_μ-a.s.,which suffices to obtain the convergence results for the I-divergence.§.§ Theorem <ref> Let us define the oracle Barron measure μ̃_n by: f_μ̃_n(x) = d μ̃_n/d λ(x) = f_v(x) [(1-a_n) ·μ(A_n(x))/v(A_n(x)) + a_n], where we consider the true measure instead of its empirical version in (<ref>). Then, the following convergence results can be obtained (see Proposition <ref> in Appendix <ref>), lim_n →∞sup_x∈ A_μ̃_n|d μ̃_n/d μ^*_n(x)-1|=0,ℙ_μ-a.s. Let 𝒜 denote the collection of sequences x_1,x_2,.... where the convergence in (<ref>) is holding[This set is almost-surely typical meaning that ℙ_μ(𝒜)=1.]. The rest of the proof reduces to show that for any arbitrary (x_n)_n≥ 1∈𝒜, its respective sequence of induced measures{μ^*_n: n≥ 1 }[For simplicity, the dependency of μ^*_non the sequence (x_n)_n≥ 1 will be considered implicit for the rest of the proof.] satisfies the sufficient conditionsof Lemma <ref> for entropy convergence. Let fix an arbitrary (x_n)_n≥ 1∈𝒜: Weak convergence μ^*_n ⇒μ: Withoutloss of generalitywe consider that A_μ̃_n=A_v for all n≥ 1. Since a_n → 0 and h_n → 0, f_μ̃_n(x) →μ({x}) ∀ x ∈ A_v, we got the weak convergence of μ̃_n to μ.On the other hand by definition of𝒜, lim_n →∞sup_x∈ A_μ̃_n|f_μ̃_n(x)/f_μ^*_n(x)-1| =0 that implies that lim_n →∞| f_μ^*_n(x) - f_μ̃_n(x)|=0 for all x ∈ A_v, and consequentlyμ^*_n ⇒μ. The condition in (<ref>): By construction μ≪μ^*_n,μ≪μ̃_n and μ̃_n ≈μ^*_nfor all n, then we will use the following equality: logd μ/d μ^*_n (x) = logd μ/d μ̃_n(x)+ logd μ̃_n/d μ^*_n (x), for all x ∈ A_μ. 
Concerning the approximation error term of (<ref>), i.e.,logd μ/d μ̃_n(x), ∀ x∈ A_μ, d μ̃_n/d μ(x)= (1-a_n) [μ(A_n(x))/μ({x})v({x})/v(A_n(x))] + a_n v({x})/μ ({x}). Given that μ∈ℍ(𝕏|v) this is equivalent to state that log (d μ/d v(x)) is boundedμ-almost everywhere, which is equivalent to say that𝐦≡inf_x∈ A_μd μ/d v(x) >0 and M ≡sup_x∈ A_μd μ/d v(x) < ∞. From this, ∀ A ⊂ A_μ, 𝐦 v(A) ≤μ(A)≤ Mv(A). Then we have that, ∀ x ∈ A_μ 𝐦/M≤[μ(A_n(x))/μ({x})v({x})/v(A_n(x))] ≤M/𝐦. Thereforefor n sufficient large, 0<1/2𝐦/M≤d μ̃_n/d μ(x) ≤M/𝐦 + M < ∞ for all x inA_μ. Hence,there exists N_o>0 such that sup_n≥ N_osup_x ∈ A_μ|logd μ̃_n/d μ(x)| < ∞. For the estimation error term of (<ref>), i.e., logd μ̃_n/d μ^*_n (x),note that from the fact that(x_n)∈𝒜, and the convergence in (<ref>),there exists N_1>0 such that for all n≥ N_1 sup_x∈ A_μ|logd μ̃_n/d μ^*_n(x)| < ∞,given that A_μ⊂ A_μ̃_n=A_v. Then using (<ref>), for all n ≥max{N_0,N_1 } sup_x∈ A_μ| logd μ^*_n/d μ(x)| < ∞, which verifies (<ref>). The condition in (<ref>): Defining the function ϕ^*_n(x) ≡ 1_A_v∖ A_μ(x) · f_μ^*_n(x) log (1/f_μ^*_n(x)), we want to verify thatlim_n →∞∫_𝕏ϕ^*_n(x) d λ(x)=0. Considering that (x_n)∈𝒜, for all ϵ>0 there exists N(ϵ)>0, such that sup_x∈ A_μ̃_n|f_μ̃_n(x)/f_μ^*_n(x)-1|<ϵ, then (1-ϵ) f_μ̃_n(x) < f_μ^*_n(x) < (1+ ϵ) f_μ̃_n(x),for allx ∈ A_v. From (<ref>),0 ≤ϕ^*_n(x) ≤ (1+ ϵ) f_μ̃_n(x) log (1/(1- ϵ) f_μ̃_n(x)) for all n ≥ N(ϵ).Analyzing f_μ̃_n(x) in (<ref>) there are two scenarios: if A_n(x)∩ A_μ = ∅ then f_μ̃_n(x)=a_n f_v(x), and otherwise f_μ̃_n(x)=f_v(x)(a_n+(1-a_n)μ(A_n(x)∩ A_μ)/v(A_n(x))).Let us define: ℬ_n ≡{x ∈ A_v∖ A_μ: A_n(x)∩ A_μ = ∅} and 𝒞_n ≡{x∈ A_v∖ A_μ: A_n(x)∩ A_μ∅}. Then for all n≥ N(ϵ), ∑_𝕏ϕ^*_n(x) ≤∑_A_v∖ A_μ (1+ ϵ) f_μ̃_n(x) log 1/((1- ϵ) f_μ̃_n(x)) = ∑_ℬ_n (1+ϵ) a_n f_v(x) log1/(1-ϵ)a_n f_v(x) + ∑_𝕏ϕ̃_n(x), with ϕ̃_n(x) ≡ 1_𝒞_n(x) · (1+ϵ) f_μ̃_n(x)log1/(1- ϵ)f_μ̃_n(x).The left term in (<ref>) is upper bounded by a_n(1+ϵ) (H(v) + log (1/a_n)) which goes to zero with n from (a_n) being o(1) and the fact that v∈ℍ(𝕏).For the right term in (<ref>), (h_n) being o(1) implies that∀ x∈ A_v∖ A_μ,x belongs to ℬ_n eventually in n, thenϕ̃_n(x) tends to zero point-wise as n goes to infinity. On the other hand, for all x ∈𝒞_n (see (<ref>)),we have that 1/1/m + 1≤μ(A_n(x) ∩ A_μ )/v(A_n(x) ∩ A_μ) +v(A_v∖ A_μ)≤μ(A_n(x))/v(A_n(x))≤μ(A_n(x)∩ A_μ)/v(A_n(x)∩ A_μ)≤ M. These inequalities derive from (<ref>). Consequently for all x∈𝕏, ifn sufficiently large such that a_n<0.5, then 0≤ϕ̃_n(x)≤ (1+ϵ)(a_n+(1-a_n)M)f_v(x) log1/(1-ϵ)(a_n+(1-a_n)m/(m+1))≤ (1 + ϵ)(1+M)f_v(x) [ log2(m+1)/(1-ϵ) + log1/f_v(x)] . Hence from (<ref>), ϕ̃_n(x) isbounded by afix function that is l_1(𝕏) by the assumption that v∈ℍ(𝕏). Then by the dominated convergence theorem <cit.> and (<ref>), lim_n →∞∑_𝕏ϕ^*_n(x) ≤lim_n →∞∑_𝕏ϕ̃_n(x). In summary, we have shown that for any arbitrary(x_n)∈𝒜 the sufficient conditions of Lemma <ref> are satisfied, which proves the result in (<ref>) reminding that ℙ_μ(𝒜)=1.§.§ Theorem <ref> Let usfirst introduce the oracle measure μ_ϵ_n≡μ(· |Γ_ϵ_n) ∈𝒫(𝕏). Note that μ_ϵ_n is a random probability measure (function of the i.i.d sequence X_1,..,X_n) as Γ_ϵ_n is a data-driven set (<ref>).We will first show that: lim_n →∞H(μ_ϵ_n)=H(μ) and lim_n →∞D(μ_ϵ_n|| μ)=0,ℙ_μ-a.s. Under the assumption on (ϵ_n) of Theorem <ref>, lim_n →∞| μ(Γ_ϵ_n)- μ̂_n(Γ_ϵ_n) |=0, ℙ_μ-a.s.[This result derives from the fact that lim_n →∞ V(μ/σ_ϵ_n, μ̂_n/σ_ϵ_n) =0,ℙ_μ-a.s. , from (<ref>).] 
In addition,since (ϵ_n) is o(1) then lim_n →∞μ̂_n(Γ_ϵ_n) =1, which implies that lim_n →∞μ(Γ_ϵ_n)=1 ℙ_μ-a.s. From this μ_ϵ_n⇒μ, ℙ_μ-a.s. Let usconsider a sequences (x_n) where lim_n →∞μ(Γ_ϵ_n)=1.Constrained to that limsup_n →∞sup_x∈ A_μf_μ_ϵ_n(x) /f_μ(x) = limsup_n →∞1/μ(Γ_ϵ_n) < ∞, then there is N>0 such that sup_n>Nsup_x∈ A_μf_μ_ϵ_n(x) /f_μ(x) <∞. Hence from Lemma<ref>, lim_n →∞ D(μ_ϵ_n || μ)=0 and lim_n →∞| H(μ_ϵ_n) - H(μ) |=0.Finally,the set of sequences (x_n) where lim_n →∞μ(Γ_ϵ_n)=1 has probability one with respect to ℙ_μ, which proves (<ref>). For the rest of the proof,we concentrate on the analysis of |H(μ̂_n,ϵ_n)- H(μ_ϵ_n) | that can be attributed to the estimation error aspect of the problem. It is worth noting that by constructionsupp(μ̂_n,ϵ_n)=supp(μ_ϵ_n)=Γ_ϵ_n, ℙ_μ-a.s., consequently we can use H(μ̂_n,ϵ_n)- H(μ_ϵ_n) = ∑_x∈Γ_ϵ_n[ μ_ϵ_n ({x })- μ̂_n,ϵ_n ({x }) ] logμ̂_n,ϵ_n ({ x }) + D(μ_ϵ_n || μ̂_n,ϵ_n). The first term on the RHSof (<ref>) is upper boundedby log 1/ 𝐦_μ̂_n^ϵ_n· V(μ_ϵ_n, μ̂_n,ϵ_n) ≤log 1/ϵ_n · V(μ_ϵ_n, μ̂_n,ϵ_n). Concerning the second term on the RHSof (<ref>), it is possible to show(details presented in Appendix <ref>) that D(μ_ϵ_n || μ̂_n,ϵ_n) ≤2 loge/ϵ_n/μ(Γ_ϵ_n)· V(μ/σ_ϵ_n, μ̂_n/σ_ϵ_n), where V(μ/σ_ϵ_n, μ̂_n/σ_ϵ_n) ≡sup_A ∈σ_ϵ_n| μ(A) - μ̂_n(A)|. In addition, it can be verified(details presented in Appendix <ref>) that V(μ_ϵ_n, μ̂_n,ϵ_n) ≤ K ·V(μ/σ_ϵ_n, μ̂_n/σ_ϵ_n), for some universal constant K>0.Therefore from (<ref>), (<ref>) and (<ref>), there is C>0 such that: |H(μ̂_n,ϵ_n)- H(μ_ϵ_n)|≤C/μ(Γ_ϵ_n)log1/ϵ_n·V(μ/σ_ϵ_n, μ̂_n/σ_ϵ_n). As mentioned before,μ(Γ_ϵ_n) goes to 1 almost surely,then we need to concentrate on the analysis of the asymptotic behavior of log1/ϵ_n· V(μ/σ_ϵ_n, μ̂_n/σ_ϵ_n). From Hoeffding's inequality <cit.>, we have that ∀δ>0 ℙ^n_μ(log1/ϵ_n· V(μ/σ_ϵ_n, μ̂_n/σ_ϵ_n)>δ) ≤ 2^| Γ_ϵ_n|+1· e^- 2n δ^2/(log 1/ϵ_n)^2, considering that by construction | σ_ϵ_n| ≤ 2^| Γ_ϵ_n|+1≤2^1/ϵ_n +1. Assuming that (ϵ_n) is O(n^-τ), lnℙ^n_μ( log 1/ϵ_n· V(μ/σ_ϵ_n, μ̂_n/σ_ϵ_n)>δ) ≤(n^τ+1) ln 2 -2nδ^2/τlog n. Therefore for all τ∈ (0,1), δ>0 and any arbitrary l∈ (τ, 1) limsup_n →∞1/n^l·lnℙ^n_μ( log 1/ϵ_n· V(μ/σ_ϵ_n, μ̂_n/σ_ϵ_n)>δ) < 0. This is sufficient toshow that ∑_n≥ 1ℙ^n_μ(log1/ϵ_n· V(μ/σ_ϵ_n, μ̂_n/σ_ϵ_n)>δ) < ∞ that concludes the argument from theBorel-Cantelli Lemma. §.§ Theorem <ref> Weconsider | H(μ) - H(μ̂_n,ϵ_n) | ≤| H(μ) - H(μ_ϵ_n) | +| H(μ_ϵ_n) - H(μ̂_n,ϵ_n) | to analize the approximationand the estimation error terms separately. §.§.§ Approximation error analysis: Note that | H(μ) - H(μ_ϵ_n) |is a random object as μ_ϵ_n in (<ref>) is a functionof the data-dependent partition and, consequently, a function of X_1,..,X_n. In thefollowing, we consider the oracle set Γ̃_ϵ_n≡{x ∈𝕏:μ( {x })≥ϵ_n}, and the oracle conditional measure[Γ̃_ϵ_n is a deterministic function of (ϵ_n) and so isthe measure μ̃_ϵ_n in (<ref>).] μ̃_ϵ_n≡μ(· | Γ̃_ϵ_n) ∈𝒫(𝕏). Fromdefinitions and triangular inequality: | H(μ) - H( μ̃_ϵ_n) | ≤∑_x∈Γ̃^c_ϵ_nμ ( {x }) log1/μ ( {x })+ log1/μ (Γ̃_ϵ_n)+ (1/μ (Γ̃_ϵ_n)- 1 ) ·∑_x∈Γ̃_ϵ_nμ ( { x}) log1/μ ( {x }), and, similarly,the approximation error is bounded by | H(μ) - H(μ_ϵ_n) | ≤∑_x∈Γ^c_ϵ_nμ ( {x }) log1/μ ( {x }) + log1/μ (Γ_ϵ_n) +(1/μ (Γ_ϵ_n)-1 ) ·∑_x∈Γ_ϵ_nμ ( { x}) log1/μ ( {x }). We denote the RHS of (<ref>) and (<ref>) by a_ϵ_n and b_ϵ_n(X_1,..,X_n), respectively. We can show that if (ϵ_n) is O(n^-τ) and τ∈ (0,1/2), then limsup_n →∞ b_ϵ_n(X_1,..,X_n) - a_2 ϵ_n≤ 0,ℙ_μ-a.s., which from (<ref>) implies that | H(μ) - H(μ_ϵ_n) | is O(a_2 ϵ_n), ℙ_μ-a.s. 
The proof of (<ref>) is presented in Appendix <ref>. Then, weneed to analyze the rate of convergence of the deterministic sequence (a_2ϵ_n). Analyzing the RHS of (<ref>), we recognize two independent terms: the partial entropy sum ∑_x∈Γ̃^c_ϵ_nμ ( {x }) log1/μ ( {x }) and the rest that is bounded asymptotically by μ(Γ̃^c_ϵ_n) (1+H(μ)), using the fact that ln x ≤ x-1 for x≥ 1. Here is where the tail condition on μ plays arole. From the tail condition, we have that μ(Γ̃^c_ϵ_n)≤μ( { (k_o/ϵ_n)^1/p+1,(k_o/ϵ_n)^1/p+2,(k_o/ϵ_n)^1/p+3, …})= ∑_x≥(k_o/ϵ_n)^1/p+1μ({x }) ≤ k_1 ·𝒮_ (k_o/ϵ_n)^1/p+1, where 𝒮_x_o≡∑_x≥ x_o x^-p.Similarly as {0,1,.., (k_o/ϵ_n)^1/p}⊂Γ̃_ϵ_n, then ∑_x∈Γ̃^c_ϵ_nμ ( {x }) log1/μ ( {x }) ≤∑_x≥(k_o/ϵ_n)^1/p+1μ ( {x }) log1/μ ( {x })≤∑_x≥(k_o/ϵ_n)^1/p+1 k_1 x^-p·log1/k_0 x^-p≤ k_1log p ·ℛ_(k_o/ϵ_n)^1/p+1 + k_1 log 1/k_0 ·𝒮_(k_o/ϵ_n)^1/p+1, whereℛ_x_o≡∑_x≥ x_o x^-plog x. In Appendix <ref>, it is shown that 𝒮_x_o≤ C_0· x_o^1-p and ℛ_x_o≤ C_1 · x_o^1-p for constantsC_1>0 and C_0>0. Integrating these results in the RHS of (<ref>) and (<ref>) and considering that (ϵ_n) is O(n^-τ), we have that both μ(Γ̃^c_ϵ_n) and ∑_x∈Γ̃^c_ϵ_nμ ( {x }) log1/μ ( {x }) areO(n^-τ(p-1)/p). This implies that our oracle sequence (a_ϵ_n) is O(n^-τ(p-1)/p). In conclusion, if ϵ_n is O(n^-τ) for τ∈ (0,1/2), it follows that | H(μ) - H(μ_ϵ_n) | isO(n^-τ(p-1)/p), ℙ_μ-a.s.§.§.§ Estimation error analysis: Let us consider | H(μ_ϵ_n) - H(μ̂_n,ϵ_n) |,from the bound in (<ref>) and the fact that for any τ∈ (0,1),lim_n →∞μ(Γ_ϵ_n)=1 ℙ_μ-a.s. from (<ref>),the problem reduces to analyze the rate of convergenceof the following random object:ρ_n(X_1,..,X_n) ≡log1/ϵ_n· V(μ/σ(Γ_ϵ_n), μ̂_n/σ(Γ_ϵ_n)). We will analize, instead, theoracle version of ρ_n(X_1,..,X_n) given by:ξ_n(X_1,..,X_n) ≡log1/ϵ_n· V(μ/σ(Γ̃_ϵ_n/2), μ̂_n/σ(Γ̃_ϵ_n/2)), where Γ̃_ϵ≡{x∈𝕏: μ({ x})≥ϵ} is the oracle counterpart of Γ_ϵ in (<ref>). To do so, we can show that if ϵ_n is O(n^-τ)with τ∈ (0, 1/2), thenliminf_n →∞ξ_n(X_1,..,X_n) - ρ_n(X_1,..,X_n) ≥ 0, ℙ_μ-a.s.The proof of (<ref>) is presented in Appendix <ref>.Moving to the almost sure rate of convergenceof ξ_n(X_1,..,X_n), it is simple to showfor our p-power dominating distribution that if (ϵ_n) is O(n^-τ) and τ∈ (0,p)thenlim_n →∞ξ_n(X_1,..,X_n)=0ℙ_μ-a.s,and, more specifically, ξ_n(X_1,..,X_n)iso(n^-q)for allq∈ (0,(1-τ/p)/2),ℙ_μ-a.s.The argument is presented in Appendix <ref>.In conclusion, if ϵ_n is O(n^-τ) for τ∈ (0,1/2), it follows that| H(μ_ϵ_n) - H(μ̂_n,ϵ_n) | isO(n^-q), ℙ_μ-a.s., for all q∈ (0,(1-τ/p)/2). §.§.§ Estimation vs. approximation error: Coming back to (<ref>) and using (<ref>) and (<ref>),the analysis reduces to finding the solution τ^*in (0,1/2) that offers thebest trade-off between the estimation and approximation error rate: τ^* ≡max_τ (0,1/2)min{(1-τ/p)/2, τ(p-1)/p}.It is simple to verify that τ^*=1/2. Then by considering τ arbitrary close to the admissible limit 1/2,we canachieve a rate of convergence for | H(μ) - H(μ̂_n,ϵ_n) | that is arbitrary close to O(n^-1/2(1-1/p)), ℙ-a.s. More formally,for any l ∈ (0,1/2(1-1/p)) we can take τ∈ (l/(1-1/p), 1/2) where| H(μ) - H(μ̂_n,ϵ_n) | is o(n^-l), ℙ_μ-a.s., from (<ref>) and (<ref>).Finally, a simple corollary of this analysis is to consider τ(p)=1/2+1/p<1/2 where:| H(μ) - H(μ̂_n,ϵ_n) |isO(n^- 1-1/p/2+1/p),ℙ_μ-a.s.,which concludes the argument. §.§ Theorem <ref> The argument follows the proof of Th. <ref>. 
In particular, we use the estimation-approximation error bound: | H(μ) - H(μ̂_n,ϵ_n) | ≤| H(μ) - H(μ_ϵ_n) | +| H(μ_ϵ_n) - H(μ̂_n,ϵ_n) |, and the following two results derived in the proof of Th. <ref>: If (ϵ_n) is O(n^-τ) with τ∈ (0,1/2) then (for the approximation error) | H(μ) - H(μ_ϵ_n) |is O(a_2ϵ_n) ℙ_μ-a.s., with a_ϵ_n = ∑_x∈Γ̃^c_ϵ_nμ ( {x }) log1/μ ( {x })+ μ(Γ̃^c_ϵ_n) (1+H(μ)), while (for the estimation error) | H(μ_ϵ_n) - H(μ̂_n,ϵ_n) |isO (ξ_n(X_1,..,X_n))ℙ_μ-a.s., with ξ_n(X_1,..,X_n) = log1/ϵ_n· V(μ/σ(Γ̃_ϵ_n/2), μ̂_n/σ(Γ̃_ϵ_n/2)). For the estimation error,we need to bound the rate of convergence of ξ_n(X_1,..,X_n) to zero almost surely.We first note that {1,..,x_o(ϵ_n) }=Γ̃_ϵ_n with x_o(ϵ_n)= ⌊1/αln (k_0/ϵ_n)⌋.Then from Hoeffding's inequality we have that ℙ^n_μ({ξ_n(X_1,..,X_n) > δ})≤ 2^| (Γ̃_ϵ_n/2) |·e^-2n δ^2/log(1/ϵ_n)^2≤ 2^1/αln (2k_0/ϵ_n)+1· e^-2n δ^2/log(1/ϵ_n)^2. Considering ϵ_n=O(n^-τ),an arbitrary sequence (δ_n) being o(1) and l>0, it follows from (<ref>) that 1/n^l·lnℙ^n_μ({ξ_n(X_1,..,X_n) > δ_n})≤1/n^lln(2)[ 1/αln (2k_0/ϵ_n)+1 ] - n^1-lδ_n^2/log(1/ϵ_n)^2. We note that the first term in the RHS of (<ref>) is O(1/n^llog n) and goes to zero for all l>0, while the second term is O(n^1-lδ_n^2/log n^2). Ifwe consider δ_n=O(n^-q), this second term is O(n^1-2q-l·1/log n^2). Therefore, for any q∈ (0,1/2) we can take an arbitrary l∈ (0,1-2q] such that ℙ^n_μ({ξ_n(X_1,..,X_n) > δ_n}) is O(e^-n^l) from (<ref>). This result implies,from the Borel-Cantelli Lemma, that ξ_n(X_1,..,X_n) is o(δ_n), ℙ_μ-a.s, which in summary shows that | H(μ_ϵ_n) - H(μ̂_n,ϵ_n) | is O(n^-q) for all q∈ (0,1/2). For the approximation error, it is simple to verify that: μ(Γ̃^c_ϵ_n) ≤ k_1 ·∑_x≥ x_o(ϵ_n)+1 e^-α x= k_1·𝒮̃_x_o(ϵ_n)+1 and ∑_x∈Γ̃^c_ϵ_nμ ( {x }) log1/μ ( {x }) ≤∑_x≥ x_o(ϵ_n)+1 k_1 e^-α xlog1/k_0 e^-α x = k_1 log1/k_0·𝒮̃_x_o(ϵ_n)+1+ αlog e · k_1·ℛ̃_x_o(ϵ_n)+1, where 𝒮̃_x_o≡∑_x≥ x_o e^-α xand ℛ̃_x_o≡∑_x≥ x_o x· e^-α x. At this point, it is not difficult to show that 𝒮̃_x_o≤ M_1 e^-α x_o and ℛ̃_x_o≤ M_2 e^-α x_o· x_o for some constants M_1>0 and M_2>0. Integrating these partial steps,we have that a_ϵ_n≤ k_1 (1+H(μ)+log1/k_0) ·𝒮̃_x_o(ϵ_n)+1 + αlog e · k_1·ℛ̃_x_o(ϵ_n)+1≤ O_1·ϵ_n + O_2·ϵ_n log1/ϵ_n for some constant O_1>0 and O_2>0. The last step is from the evaluation of x_o(ϵ_n)= ⌊1/αln (k_0/ϵ_n)⌋. Therefore from (<ref>) and (<ref>), it follows that | H(μ) - H(μ_ϵ_n) | is O(n^-τlog n)ℙ_μ-a.s. for all τ∈ (0,1/2). The argument concludes by integrating in (<ref>) the almost sure convergence results obtained for the estimation and approximation in errors.§.§ Theorem <ref> Let us define the event ℬ^ϵ_n={x^n_1 ∈𝕏^n: Γ_ϵ(x^n_1)=A_μ}, that represents the detection of the support of μ from the data for a given ϵ>0 in (<ref>). Note that the dependency on the data for Γ_ϵis made explicit in this notation. In addition, let us consider the deviation event 𝒜^ϵ_n(μ)= { x^n_1 ∈𝕏^n: V(μ,μ̂_n) >ϵ}. By the hypothesis that | A_μ|<∞, then 𝐦_μ=min_x∈ A_μ f_μ(x)>0. Therefore ifx^n_1∈ (𝒜^𝐦_μ/2_n(μ))^cthen μ̂_n( { x }) ≥𝐦_μ/2forall x∈ A_μ, which implies that(ℬ^ϵ_n)^c⊂𝒜^𝐦_μ/2_n(μ) as long as 0<ϵ≤𝐦_μ/2. Using the hypothesis that ϵ_n → 0,there is N>0 such that for all n≥ N (ℬ^ϵ_n_n)^c⊂𝒜^𝐦_μ/2_n(μ) and, consequently, ℙ^n _μ ((ℬ^ϵ_n_n)^c) ≤ℙ^n _μ (𝒜^𝐦_μ/2_n(μ)) ≤ 2^k+1· e^- n 𝐦_μ^2/4, the last from Hoeffding's inequality considering k=| A_μ| <∞. 
If we consider the events: 𝒞^ϵ_n(μ)={ x^n_1 ∈𝕏^n:|H(μ̂_n,ϵ_n)- H(μ) |>ϵ} and𝒟^ϵ_n(μ)={ x^n_1 ∈𝕏^n: |H(μ̂_n)- H(μ) |>ϵ} and we use the fact that by definition μ̂_n,ϵ_n=μ̂_n conditioning onℬ^ϵ_n_n,it follows that 𝒞^ϵ_n(μ) ∩ℬ^ϵ_n_n ⊂𝒟^ϵ_n(μ). Then,for all ϵ>0 and n≥ N ℙ^n_μ(𝒞^ϵ_n(μ))≤ℙ^n_μ(𝒞^ϵ_n(μ) ∩ℬ^ϵ_n_n) + ℙ^n _μ ((ℬ^ϵ_n_n)^c)≤ℙ^n_μ(𝒟^ϵ_n(μ)) + ℙ^n _μ ((ℬ^ϵ_n_n)^c)≤ 2^k+1[ e^-2 n ϵ ^2/(𝐌_μ + log e/𝐦_μ)^2 + e^- n 𝐦_μ^2/4], the last inequality from Theorem <ref> and (<ref>). § SUMMARY AND FINAL REMARKS In this work we show that entropy convergence results are instrumental to derive new (strongly consistent) estimation results for the Shannon entropy in infinite alphabets, and as a byproduct,distribution estimators that are strongly consistent in direct and reverseI-divergence. Adopting a set of sufficient conditions for entropy convergences in the context of four plug-in histogram-based schemes, we show concrete conditions where strong consistency for entropy estimation in ∞-alphabets can be obtained (Theorems <ref>, <ref> and <ref>). In addition,we explore the relevantcase where the target distribution has a finite but unknown support,deriving for the overall estimation erroralmost sure rate of convergence results (Theorems <ref> and <ref>) that match the optimal asymptotic rate that can be obtained in the finite alphabet version of the problem. Finally, we focus on the case of a data-driven plug-in estimate that restricts the support where we estimate the distribution. The basic idea was to have design parameters to control estimation error effects in the context of entropy estimation. The conjecture was that this approach will offers degrees of freedom to restrict the learning complexity while introducing an approximation error and, consequently, the possibility to find an adequate balance between these two learning components.Adopting a entropy convergence result (Lemma <ref>), we show that this data-driven scheme offers the same universal estimation attributes than the classical plug-in estimate under some mild conditions on its threshold design parameter (Theorem <ref>). In addition,by addressing thetechnical task of deriving concrete closed-form expressions for the estimationand approximation error in this adaptive data-driven context (where a subset of the space is selected adaptively from the data to restrict the estimation of the distribution), we show a solution werealmost sure rate of convergence of the overall estimation error are obtained over a family of distribution with some concrete tail bounded conditions (Theorems <ref> and <ref>). These results show the potential thatdata-driven frameworks has to adapt to the complexity of entropy estimation in unbounded alphabets by restricting the inference to a finite but dynamic subset of the space that scales with the amount of data and the distribution of the data in the sampling space. Concerning the classical plug-in estimate presented in Section <ref>, it is important to mention that thework of<cit.>shows that lim_n →∞H(μ̂_n )=H(μ) almost surelydistribution-free and,furthermore, it provides rates of convergences for families with specific tail-bounded conditions <cit.>. On this context, Theorem <ref>focuses on the case when μ∈ℱ(𝕏), wherenew finite-length deviation inequalities and confidence intervals were derived.From that perspective,it complements the result presented in <cit.> in the non-explored scenario when μ∈ℱ(𝕏). It is also important to mention two results by <cit.> for the plug-in estimate in (<ref>). 
They derive bounds for the object ℙ_μ^n(|H(μ̂_n)- H(μ) |≥ϵ)and from this determine confidence intervals under a finite and known support restriction on the distribution μ. In contrast, our result in Theorem <ref>resolves the case for afinite and unknown supported distribution, that was declared to be a challenging problem based on the arguments presented in <cit.> concerning the discontinuity of the entropy.The work is supported by funding from FONDECYT Grant 1170854, CONICYT-Chile and the Advanced Center for Electrical and Electronic Engineering (AC3E), Basal Project FB0008.The author is grateful to Patricio Parada for his insights and stimulating discussion in the initial stage of this work.§ MINIMAX RISK FOR FINITE ENTROPY DISTRIBUTIONS IN INFINITE ALPHABETSR^*_n =∞.For the proof, we use the following lemma that follows from <cit.>.Let us fix two arbitrary real numbers δ>0 and ϵ>0. Then there are P, Q two finite supported distributions on ℍ(𝕏)that satisfy that D(P||Q)<ϵ while H(Q)-H(P)>δ. [The proof of Lemma <ref> derives from the same construction presented in the proof of <cit.>, i.e., P=(p_1,..,p_L) and a modification of it Q_M=(p_1· (1-1/√(M)), p_2+p_1/M√(M),..., p_L+p_1/M√(M),p_1/M√(M),...,p_1/M√(M)) both of finite support and consequently in ℍ(𝕏). It is simple to verify that as M goes to infinity D(P||Q_M) ⟶ 0 while H(Q_M)-H(P)⟶∞.] For any pair of distribution P, Q in ℍ(𝕏), Le Cam's two point method <cit.> shows that: R^*_n ≥1/4( H(Q)-H(P) )^2 exp^-n D(P||Q). Adopting Lemma <ref> and (<ref>), for any n and any arbitrary ϵ>0 and δ>0, we have that R^*_n> δ^2exp^-nϵ/4. Then exploiting the discontinuity of the entropy in infinite alphabets, we can fix ϵ and make δ arbitrar large. § PROPOSITION <REF> Under the assumptions of Theorem <ref>: lim_n →∞sup_x∈ A_μ̃_n|d μ̃_n/d μ^*_n(x)-1|=0,ℙ_μ-a.s.First note that supp(μ̃_n)=supp(μ^*_n), then d μ̃_n/d μ^*_n(x) is well-defined and ∀ x ∈ A_μ̃_n d μ̃_n/d μ^*_n(x)= (1-a_n)·μ(A_n(x)) +a_n v(A_n(x))/(1-a_n)·μ̂_n(A_n(x)) +a_n v(A_n(x)).Then by construction,sup_x∈ A_μ̃_n|d μ̃_n/d μ^*_n(x)-1| ≤sup_A∈π_n|μ̂_n(A)-μ(A)|/a_n· h_n.From Hoeffding's inequality we have that∀ϵ>0 ℙ^n_μ( sup_A ∈π_n|μ̂_n(A) - μ(A)| > ϵ) ≤ 2 ·|π_n| ·exp ^- 2n ϵ^2.By condition ii),given that (1/a_nh_n) is o(n^τ) for some τ∈ (0,1/2), then there exists τ_o ∈ (0,1) such that,lim_n →∞1/n^τ_olnℙ^n_μ(sup_x∈ A_μ̃_n|d μ̃_n/d μ^*_n(x)-1| > ϵ) ≤ lim_n →∞1/n^τ_oln (2 |π_n| ) - 2· (n^1-τ_o/2 a_n h_n ϵ)^2= - ∞.This implies that ℙ^n_μ(sup_x∈ A_μ̃_n|d μ̃_n/d μ^*_n(x)-1| > ϵ) is eventually dominated by a constant time (e^-n^τ_o)_n≥ 1,which from the Borel-Cantelli lemma <cit.> implies thatlim_n →∞sup_x∈ A_μ̃_n|d μ̃_n/d μ^*_n(x)-1|=0,ℙ_μ-a.s. § PROPOSITION <REF> D(μ_ϵ_n || μ̂_n,ϵ_n) ≤2 loge/ϵ_n/μ(Γ_ϵ_n)· V(μ/σ_ϵ_n, μ̂_n/σ_ϵ_n) From definition, D(μ_ϵ_n || μ̂_n,ϵ_n) = 1/μ(Γ_ϵ_n)∑_x ∈Γ_ϵ_n f_μ(x) logf_μ(x)/f_μ̂_n(x) + logμ̂_n(Γ_ϵ_n) /μ(Γ_ϵ_n). For the right term in the RHS of (<ref>): logμ̂_n(Γ_ϵ_n) /μ(Γ_ϵ_n)≤log(e) /μ(Γ_ϵ_n)|μ̂_n(Γ_ϵ_n)-μ(Γ_ϵ_n) |. 
For the left term in the RHS of (<ref>): | ∑_x ∈Γ_ϵ_n f_μ(x) logf_μ(x)/f_μ̂_n(x)| = | ∑_ x ∈Γ_ϵ_nf_μ(x)≤ f_μ̂_n(x) f_μ(x) logf_μ(x)/f_μ̂_n(x) + ∑_ x ∈Γ_ϵ_n f_μ(x) > f_μ̂_n(x) ≥ϵ_n f_μ(x) logf_μ(x)/f_μ̂_n(x)| ≤∑_ x ∈Γ_ϵ_nf_μ(x)≤ f_μ̂_n(x) f_μ(x) logf_μ̂_n(x) /f_μ(x) + ∑_ x ∈Γ_ϵ_n f_μ(x) > f_μ̂_n(x) ≥ϵ_nf_μ̂_n(x) logf_μ(x)/f_μ̂_n(x)+ ∑_ x ∈Γ_ϵ_n f_μ(x) > f_μ̂_n(x) ≥ϵ_n( f_μ(x)-f_μ̂_n(x)) ·logf_μ(x)/f_μ̂_n(x)≤log e [ ∑_ x ∈Γ_ϵ_nf_μ(x)≤ f_μ̂_n(x) (f_μ̂_n(x)-f_μ(x)) + ∑_ x ∈Γ_ϵ_n f_μ(x) > f_μ̂_n(x) (f_μ(x)- f_μ̂_n(x))] + log1/ϵ_n·∑_ x ∈Γ_ϵ_n f_μ(x) > f_μ̂_n(x) ( f_μ(x)-f_μ̂_n(x))≤(log e+ log1/ϵ_n) ·∑_x∈Γ_ϵ_n| f_μ(x)- f_μ̂_n(x) |. The first inequality in (<ref>) is by triangular inequality,the second in (<ref>) is from the fact that ln x≤ x-1 for x>0. Finally from definition of the totalvariational distance over σ_ϵ_n in (<ref>) we have that 2 · V(μ/σ_ϵ_n, μ̂_n/σ_ϵ_n)=∑_x∈Γ_ϵ_n| f_μ(x)- f_μ̂_n(x) | + |μ̂_n(Γ_ϵ_n)-μ(Γ_ϵ_n) |, which concludes the argument from (<ref>), (<ref>) and (<ref>). § PROPOSITION <REF> Considering that (k_n) →∞, there exists K>0 and N>0 such that ∀ n ≥ N, V(μ̃_k_n, μ̂^*_k_n,n )≤ K · V(μ/σ_k_n, μ̂_n/σ_k_n). V(μ̃_k_n, μ̂^*_k_n,n ) = 1/2∑_x ∈ A_μ∩Γ_k_n|μ{x}/μ(Γ_k_n)- μ̂_n{x}/μ̂_n(Γ_k_n)|≤1/2μ(Γ_k_n)[∑_x ∈ A_μ∩Γ_k_n|μ̂_n{x}- μ{x}| + ∑_x ∈ A_μ∩Γ_k_nμ̂_n{x}| μ(Γ_k_n)/μ̂_n(Γ_k_n)-1| ] = 1/2μ(Γ_k_n)[ 2 · V(μ/σ_k_n, μ̂_n/σ_k_n) + | μ(Γ_k_n)- μ̂_n(Γ_k_n) | ]≤3 · V(μ/σ_k_n, μ̂_n/σ_k_n)/2μ(Γ_k_n)By the hypothesisμ(Γ_k_n) → 1, which concludes the proof. § PROPOSITION <REF> If ϵ_n is O(n^-τ)with τ∈ (0, 1/2), then limsup_n →∞ b_ϵ_n(X_1,..,X_n) - a_2 ϵ_n≤ 0,ℙ_μ-a.s. Let us define the set ℬ_n={(x_1,..,x_n):Γ̃_2 ϵ_n⊂Γ_ϵ_n}⊂𝕏^n. From definitionevery sequence (x_1,..,x_n) ∈ℬ_nis such that b_ϵ_n(x_1,..,x_n) ≤ a_2ϵ_n and, consequently,we just need to prove that ℙ_μ(liminf_n →∞ℬ_n) = ℙ_μ (∪_n≥ 1∩_k≥ nℬ_k)=1 <cit.>. Furthermore, if sup_x∈Γ̃_2ϵ_n| μ̂_n({x })- μ({x }) | ≤ϵ_n, then by definition of Γ̃_2 ϵ_n in (<ref>),we have that μ̂_n ({ x})≥ϵ_n for all x∈Γ_2 ϵ_n (i.e., Γ̃_2 ϵ_n⊂Γ_ϵ_n).From this ℙ^n_μ (ℬ^c_n) ≤ℙ^n_μ( sup_x∈Γ̃_2ϵ_n| μ̂_n({x })- μ({x }) | > ϵ_n )≤| Γ̃_2ϵ_n| · e^-2n ϵ_n^2≤1/2ϵ_n· e^-2n ϵ_n^2 , from the Hoeffding's inequality <cit.>, the union bound and the fact that by construction | Γ̃_2 ϵ_n| ≤1/2ϵ_n. If we consider ϵ_n=O(n^-τ) and l>0, we have that: 1/n^l·lnℙ^n_μ (ℬ^c_n) ≤1/n^lln (1/2 · n^τ) - 2n^1-2τ -l. From (<ref>) for anyτ∈ (0,1/2) there is l ∈ (0,1-2τ] such that ℙ^n_μ (ℬ^c_n) is bounded by a term O(e^-n^l). This implies that ∑_n≥ 1ℙ^n_μ (ℬ^c_n) <∞, that suffices toshow thatℙ_μ (∪_n≥ 1∩_k≥ nℬ_k)=1. § AUXILIARY RESULTS FOR THEOREM <REF> Let us first consider the series𝒮_x_o= ∑_x≥ x_o x^-p =x_o^-p·( 1+( x_o/x_o+1)^p+ (x_o/x_o+2)^p + …) =x_o^-p·( 𝒮̃_x_o,0+𝒮̃_x_o,1 + …+ 𝒮̃_x_o,x_o-1),where 𝒮̃_x_o,j≡∑_k=0^∞( k· x_o +j/x_o)^-p for allj∈{0,..,x_o-1 }. It is simple to verify that for allj∈{0,..,x_o-1 }, 𝒮̃_x_o,j≤𝒮̃_x_o,0 =∑_k≥ 0 k^-p <∞ given thatby hypothesis p>1. Consequently,𝒮_x_o≤ x_o^1-p·∑_k≥ 0 k^-p.Similarly, for the second series we have that: ℛ_x_o = ∑_x≥ x_ox^-plog x= x_o^-p·(log (x_o)+(x_o/x_o+1) log (x_o+1) +(x_o/x_o+2) log (x_o+2) + …)=x_o^-p·( ℛ̃_x_o,0+ℛ̃_x_o,2 + …+ℛ̃_x_o,x_o-1),where ℛ̃_x_o,j≡∑_k=1^∞( k· x_o +j/x_o)^-p·log (kx_o+j)for allj∈{0,..,x_o-1 }.Note again thatℛ̃_x_o,j≤ℛ̃_x_o,0 <∞ for all j∈{0,..,x_o-1 }, and, consequently, ℛ_x_o≤ x_o^1-p·∑_k≥ 1 k^-plog k from (<ref>).§ PROPOSITION <REF> If ϵ_n is O(n^-τ)with τ∈ (0, 1/2), then liminf_n →∞ξ_n(X_1,..,X_n) - ρ_n(X_1,..,X_n) ≥ 0, ℙ_μ-a.s. By definition if σ(Γ_ϵ_n)⊂σ( Γ̃_ϵ_n/2) then ξ_n(X_1,..,X_n) ≥ρ_n(X_1,..,X_n). 
Consequently,if we define the set: ℬ_n= { (x_1,..,x_n): σ(Γ_ϵ_n)⊂σ( Γ̃_ϵ_n/2)}, then the proof reduced to verify that ℙ_μ(liminf_n→∞ℬ_n) = ℙ_μ( ∪_n≥ 1 ∩_k≥ nℬ_k)=1. On the other hand, if sup_x∈Γ_ϵ_n|μ̂_n({ x})-μ ( { x } ) | ≤ϵ_n/2 then by definition of Γ_ϵ, for all x∈Γ_ϵ_n μ ({ x}) ≥ϵ_n/2, i.e., Γ_ϵ_n⊂Γ̃_ϵ_n/2.In other words, 𝒞_n= { (x_1,..,x_n): sup_x∈Γ_ϵ_n|μ̂_n({ x})-μ ( { x } ) | ≤ϵ_n/2 }⊂ℬ_n. Finally, ℙ^n_μ (𝒞^c_n)= ℙ^n_μ( sup_x∈Γ_ϵ_n|μ̂_n({ x})-μ ( { x } ) | > ϵ_n/2 ) ≤| Γ_ϵ_n| ·e^-n ϵ^2/2≤1/ϵ_n·e^-n ϵ^2/2. In this context, if we consider ϵ_n=O(n^-τ) and l>0, then we have that: 1/n^l·lnℙ^n_μ (𝒞^c_n) ≤τ·ln n/n^l - n^1-2τ -l/2. Therefore, we have that for anyτ∈ (0,1/2) we can take l ∈ (0,1-2τ] such that ℙ^n_μ (𝒞^c_n) is bounded by a term O(e^-n^l).Then, the Borel Cantelli lemma tells us that ℙ_μ( ∪_n≥ 1 ∩_k≥ n𝒞_k)=1, which concludes the proof from (<ref>).§ PROPOSITION <REF> For the p-power tail dominating distribution stated in Theorem <ref>,if (ϵ_n) is O(n^-τ) with τ∈ (0,p) thenξ_n(X_1,..,X_n)iso(n^-q)for allq∈ (0,(1-τ/p)/2), ℙ_μ-a.s. From the Hoeffding's inequality we have that ℙ^n_μ({ x_1,..,x_n: ξ_n(x_1,..,x_n) > δ})≤| σ(Γ̃_ϵ_n/2) | ·e^-2n δ^2/log(1/ϵ_n)^2≤ 2^(2k_o/ϵ_n)^1/p+1· e^-2n δ^2/log(1/ϵ_n)^2, the second inequality using that Γ̃_ϵ≤ (k_0/ϵ) ^1/p+1from the definition of Γ̃_ϵ in (<ref>) and the tail bounded assumption on μ. If we consider ϵ_n=O(n^-τ) and l>0, then we have that: 1/n^l·lnℙ^n_μ({ x_1,..,x_n: ξ_n(x_1,..,x_n) > δ}) ≤ln 2· (C n^τ/p-l+n^-l) -2δ^2/τ^2·n^1-l/log n^2 for some constant C>0. Then in order to obtain that ξ_n(X_1,..,X_n) converges almost surely to zero from (<ref>),it is sufficient that l>0, l<1, and l>τ/p. This implies that if τ<p, there is l∈ (τ/p,1) such that such that ℙ^n_μ (ξ_n(x_1,..,x_n) > δ ) is bounded by a term O(e^-n^l) and, consequently,lim_n →∞ξ_n(X_1,..,X_n)=0, ℙ_μ-a.s.[Using the same steps used in Appendix <ref>.] Moving to the rate of convergence of ξ_n(X_1,..,X_n) (assuming that τ<p), let us consider δ_n=n^-q for some q≥ 0.From (<ref>): 1/n^l·lnℙ^n_μ({ x_1,..,x_n: ξ_n(x_1,..,x_n) > δ_n}) ≤ln 2· (C n^τ/p-l+n^-l) -2δ^2/τ^2·n^1-2q-l/log n^2. To make ξ_n(X_1,..,X_n) being o(n^-q) ℙ-a.s., a sufficient condition is that l>0, l>τ/p, and l<1-2q. Therefore (considering that τ<p),the admissibility condition on the existence of a exponential rate of convergence O(e^-n^l) for l>0 for the deviation event { x_1,..,x_n: ξ_n(x_1,..,x_n) > δ_n} is that τ/p < 1-2q, which is equivalent to 0<q<1-τ/p/2.
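As a computational complement to the preceding analysis, the following minimal sketch implements the data-driven plug-in estimator H(μ̂_n,ϵ_n) of Section <ref> with the threshold sequence ϵ_n=n^-τ used throughout. The concrete choices (Python with numpy, τ=0.4, and a geometric source as an instance of the exponential TBC) are illustrative assumptions, not part of the formal results.

```python
import numpy as np

def truncated_plug_in_entropy(samples, tau=0.4, base=2.0):
    # Gamma_eps = {x : mu_hat_n({x}) >= eps_n} with eps_n = n^(-tau);
    # mu_hat_{n,eps} = mu_hat_n( . | Gamma_eps) is the conditional empirical measure.
    n = len(samples)
    eps_n = n ** (-tau)   # tau in (0,1) gives strong consistency; tau < 1/2 in the rate results
    _, counts = np.unique(samples, return_counts=True)
    f = counts / n
    f = f[f >= eps_n]     # restrict to the data-driven set Gamma_eps
    f = f / f.sum()       # condition on Gamma_eps
    return -np.sum(f * np.log(f)) / np.log(base)

# Geometric source mu({x}) = (1-q) q^x, x = 0, 1, 2, ...: an exponential-TBC instance
rng = np.random.default_rng(1)
q = 0.7
X = rng.geometric(1 - q, size=50_000) - 1   # numpy's geometric starts at 1
H_true = -(q * np.log2(q) + (1 - q) * np.log2(1 - q)) / (1 - q)
print(truncated_plug_in_entropy(X), H_true)  # the estimate approaches H(mu) as n grows
```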
http://arxiv.org/abs/1710.06835v2
{ "authors": [ "Jorge F. Silva" ], "categories": [ "cs.IT", "cs.LG", "math.IT" ], "primary_category": "cs.IT", "published": "20170827195418", "title": "Shannon Entropy Estimation in $\\infty$-Alphabets from Convergence Results" }
December 30, 2023 ===================== The Belgian chocolate problem involves maximizing a parameter δ over a non-convex region of polynomials. In this paper we detail a global optimization method for this problem that outperforms previous such methods by exploiting underlying algebraic structure. Previous work has focused on iterative methods that, due to the complicated non-convex feasible region, may require many iterations or result in non-optimal δ. By contrast, our method locates the largest known value of δ in a non-iterative manner. We do this by using the algebraic structure to go directly to large limiting values, reducing the problem to a simpler combinatorial optimization problem. While these limiting values are not necessarily feasible, we give an explicit algorithm for arbitrarily approximating them by feasible δ. Using this approach, we find the largest known value of δ to date, δ = 0.9808348. We also demonstrate that in low degree settings, our method recovers previously known upper bounds on δ and that prior methods converge towards the δ we find. § INTRODUCTION Global optimization problems of practical interest can often be cast as optimization programs over non-convex feasible regions. Unfortunately, iterative optimization over such regions may require large numbers of iterations and result in non-global maxima. Finding all or even many critical points of such programs is generally an arduous, computationally expensive task. In this paper we show that by exploiting the underlying algebraic structure, we can directly find the largest known values of the Belgian chocolate problem, a famous open problem bridging optimization and control theory. Moreover, this algebraic method does not require any iterative approach. Instead of relying on eventual convergence, our method algebraically identifies points that provide the largest value of the Belgian chocolate problem so far. While this approach may seem foreign to the reader, we will show that our algebraic optimization method outperforms prior global optimization methods for solving the Belgian chocolate problem. We will contrast our method with the optimization method of Chang and Sahinidis <cit.> in particular. Their method used iterative branch-and-reduce techniques <cit.> to find what was the largest known value of δ until our new approach. Due to the complicated feasible region, their method may take huge numbers of iterations or converge to suboptimal points. Our method eliminates the need for these expensive iterative computations by locating and jumping directly to the larger values of δ. This approach has two primary benefits over <cit.>. First, it allows us to more efficiently find δ as we can bypass the expensive iterative computations. This also allows us to extend our approach to cases that were not computationally tractable for <cit.>. Second, our approach allows us to produce larger values of δ by finding a finite set of structured limit points. In low-degree cases, this set provably contains the supremum of the problem, while in higher degree cases, the set contains larger values of δ than found in <cit.>. The Belgian chocolate problem is a famous open problem in control theory proposed by Blondel in 1994. In the language of control theory, Blondel wanted to determine the largest value of a process parameter for which stabilization of an unstable plant could be achieved by a stable minimum-phase controller <cit.>. 
Blondel designed the plant to be a low-degree system that was resistant to known stabilization methods, in the hope that a solution would lead to development of new stabilization techniques. Specifically, Blondel wanted to determine the largest value of δ > 0 for which the transfer function P(s) = (s^2-1)/(s^2-2δ s+1) can be stabilized by a proper, bistable controller. For readers unfamiliar with control theory, this problem can be stated in simple algebraic terms. To do so, we will require the notion of a stable polynomial. A polynomial is stable if all its roots have negative real part. The Belgian chocolate problem is then as follows. Belgian chocolate problem: Determine for which δ > 0 there exist real, stable polynomials x(s), y(s), z(s) with (x) ≥ (y) satisfying z(s) = (s^2-2δ s+1)x(s)+(s^2-1)y(s). We call such δ admissible. In general, stability of x,y,z becomes harder to achieve the larger δ is. Therefore, we are primarily interested in the supremum of all admissible δ. If we fix a maximum degree n for x and y, then this gives us the following global optimization problem for each n. Belgian chocolate problem (optimization version): δ, x(s), y(s)maximizeδsubject to x, y, z are stable, z(s) = (s^2-2δ s+1)x(s)+(s^2-1)y(s),(y) ≤(x) ≤ n. Note that we can view a degree n polynomial with real coefficients as a (n+1)-dimensional real vector of its coefficients. Under this viewpoint, the space of polynomials x, y, z that are stable and satisfy (<ref>) is an extremely complicated non-convex space. As a result, it is difficult to employ global optimization methods directly to this problem. The formulation above does suggest an undercurrent of algebra in this problem. This will be exploited to transform the problem into a combinatorial optimization problem by finding points that are essentially local optima. Previous work has employed various optimization methods to find even larger admissible δ. Patel et al. <cit.> were the first to show that δ = 0.9 is admissible by x, y of degree at most 11, answering a long-standing question of Blondel. They further showed that δ = 0.93720712277 is admissible. In 2005, Burke et al. <cit.> showed that δ = 0.9 is admissible with x,y of degree at most 3. They also improved the record to δ = 0.94375 using gradient sampling techniques. In 2007, Chang and Sahinidis used branch-and-reduce techniques to find admissible δ as large as 0.973974 <cit.>. In 2012, Boston used algebraic techniques to give examples of admissible δ up to 0.97646152 <cit.>. Boston found polynomials that are almost stable and satisfy (<ref>). Boston then used ad hoc methods to perturb these to find stable x,y,z satisfying (<ref>). While effective, no systematic method for perturbing these polynomials to find stable ones was given. In this paper, we extend the approach used by Boston in 2012 <cit.> to achieve the largest known value of δ so far. We will refer to this method as the method of algebraic specification. We show that these almost stable polynomials serve as limiting values of the optimization program. Empirically, these almost stable polynomials achieve the supremum over all feasible δ. Furthermore, we give a theoretically rigorous method for perturbing the almost stable polynomials produced by algebraic specification to obtain stable polynomials. Our approach shows that all δ≤ 0.9808348 are admissible. This gives the largest known admissible value of δ to date. 
We further show that previous global optimization methods are tending towards the limiting values of δ found via our optimization method. We do not assume any familiarity on the reader's part with the algebra and control theory and will introduce all relevant notions. While we focus on the Belgian chocolate problem throughout the paper, we emphasize that the general theme of this paper concerns the underlying optimization program. We aim to illustrate that by considering the algebraic structure contained within an optimization problem, we can develop better global optimization methods.§ MOTIVATION FOR OUR APPROACH In order to explain our approach, we will discuss previous approaches to the Belgian chocolate problem in more detail. Such approaches typically perform iterative non-convex optimization in the space of stable controllers in order to maximize δ. In <cit.>, Chang and Sahinidis formulated, for each n, a non-convex optimization program that sought to maximize δ subject to the polynomials x, y, (s^2-2δ s +1)x+(s^2-1)y being stable and such that n ≥(x) ≥(y). For notational convenience, we will always define z = (s^2-2δ s + 1)x+(s^2-1)y. Chang and Sahinidis used branch-and-reduce techniques to attack this problem for n up to 10. Examining the roots of the x,y,z they found for (x) = 6,8,10, a pattern emerges. Almost all the roots of these polynomials are close to the imaginary axis and are close to a few other roots. In fact, most of these roots have real part in the interval (-0.01,0). In other words, the x,y,z are approximated by polynomials with many repeated roots on the imaginary axis. It is also worth noting that the only roots of x that were omitted are very close to -δ±√(δ^2-1). This suggests that x should have a factor close to (s^2+2δ s+1). This suggests the following approach. Instead of using non-convex optimization to iteratively push x,y,z towards polynomials possessing repeated roots on the imaginary axis, we will algebraically construct polynomials with this property. This will allow us to immediately find large limit points of the optimization problem in (<ref>). While the x,y,z we construct are not stable, they are close to being stable. We will show later that we can perturb x,y,z and thereby push their roots just to the left of the imaginary axis, causing them to be stable. This occurs at the expense of decreasing δ by an arbitrarily small amount. Our method only requires examining finitely many such limit points. Moreover, for reasonable degrees of x and y, these limit points can be found relatively efficiently. By simply checking each of these limit points, we reduce to a combinatorial optimization problem. This combinatorial optimization problem provably achieves the supremal values of δ for (x) ≤ 4. For higher degree x, our method finds larger values of δ than any previous optimization method thus far. In the sections below we will further explain and motivate our approach, and show how this leads to the largest admissible δ found up to this point.§ MAIN RESULTS §.§ Preliminaries Given t ∈, we let (t) denote its real part. We will let [s] denote the set of polynomials in s with real coefficients. For p(s) ∈[s], we call p(s) stable if every root t of p satisfies (t) < 0. We let H denote the set of all stable polynomials in [s]. We call p(s) quasi-stable if every root t of p satisfies (t) ≤ 0. We let H denote the set of quasi-stable polynomials of [s]. We let H^m, H^m denote the sets of stable and quasi-stable polynomials respectively of degree at most m. 
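These definitions can be tested numerically in a direct way from the roots of a polynomial; the sketch below (Python with numpy assumed; the function names and the tolerance are illustrative choices) does exactly this, while the Routh-Hurwitz criteria discussed below provide the exact algebraic counterpart.

```python
import numpy as np

def is_stable(coeffs):
    # Stable: every root t has Re(t) < 0. Coefficients are listed highest degree first.
    return bool(np.all(np.roots(coeffs).real < 0))

def is_quasi_stable(coeffs, tol=1e-9):
    # Quasi-stable: every root t has Re(t) <= 0; tol absorbs floating-point error
    # for roots lying exactly on the imaginary axis.
    return bool(np.all(np.roots(coeffs).real <= tol))

# s^2 + 2*delta*s + 1 with delta = 0.8 is stable (roots -delta +/- sqrt(delta^2 - 1)),
# while s^2 + 1 is quasi-stable but not stable (roots +/- i):
print(is_stable([1, 1.6, 1]), is_stable([1, 0, 1]), is_quasi_stable([1, 0, 1]))
```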
We call δ admissible if there exist x,y ∈ H such that (x) ≥(y) and (s^2-2δ s+1)x(s) + (s^2-1)y(s) ∈ H. We call δ quasi-admissible if there exist x,y ∈H such that (x) ≥(y) and (s^2-2δ s+1)x(s) + (s^2-1)y(s) ∈H. Note that since quasi-stability is weaker than stability, quasi-admissibility is weaker than admissibility. Our main theorem (Theorem <ref> below) will show that if δ is quasi-admissible, then all smaller δ are admissible. Note that this implies that the Belgian chocolate problem is equivalent to finding the supremum of all admissible δ. We will then find quasi-admissible δ in order to establish which δ are admissible. This is the core of our approach. These quasi-admissible δ are easily identified and are limit points of admissible δ. In practice, one verifies stability by using the Routh-Hurwitz criteria. Suppose we have a polynomial p(s) = a_0s^n + a_1s^n-1 + … + a_n-1s + a_n ∈[s] such that a_0 > 0. Then we define the n× n Hurwitz matrix A(p) as A(p) = [ a_1 a_3 a_5 … … 0 0; a_0 a_2 a_6 … … 0 0; 0 a_1 a_3 … … 0 0; 0 a_0 a_2 … … 0 0; ⋮ ⋮ ⋮ ⋱ ⋱ ⋮ ⋮; 0 0 0 … … a_n-2 a_n ]. Adolf Hurwitz showed that a real polynomial p with positive leading coefficient is stable if and only if all leading principal minors of A(p) are positive. While it may seem natural to conjecture that p is quasi-stable if and only if all leading principal minors are nonnegative, this only works in one direction. Suppose p is a real polynomial with positive leading coefficient. If p is quasi-stable then all the leading principal minors of A(p) are nonnegative. If p(s) is quasi-stable, then for all ϵ > 0, p(s+ϵ) is stable. Therefore, for all ϵ > 0, the leading minors of A(p(s+ϵ)) are all positive. Note that lim_ϵ→ 0 A(p(s+ϵ)) = A(p). Since the minors of a matrix are expressible as polynomial functions of the entries of the matrix, the leading principal minors of A are limits of positive real numbers. They are therefore nonnegative. To see that the converse doesn't hold, consider p(s) = s^4 + 198s^2 + 101^2. Its Hurwitz matrix has nonnegative leading principal minors, but p is not quasi-stable. This example, as well as a more complete characterization of quasi-stability given below, can be found in <cit.>. In particular, it is shown in <cit.> that a real polynomial p with positive leading coefficient is quasi-stable if and only if for all ϵ > 0, A(p(s+ϵ)) has positive leading principal minors. §.§ Quasi-admissible and admissible δ We first present the following theorem concerning which δ are admissible. We will defer the proof until later as it is a simple corollary to a stronger theorem about approximating polynomials in H by polynomials in H. If δ is admissible then all δ̂ < δ are also admissible. For δ = 1, note that the Belgian chocolate problem reduces to whether there are x,y ∈ H with (x) ≥(y) such that (s-1)^2x + (s^2-1)y ∈ H. This cannot occur for non-zero x,y since (s-1)^2x + (s^2-1)y has a root at s = 1. Theorem <ref> then implies that any δ≥ 1 is not admissible. In 2012, Bergweiler and Eremenko showed that any admissible δ must satisfy δ < 0.999579 <cit.>. On the other hand, if we fix x,y then there is no single largest admissible δ associated to x,y. Standard results from control theory show that if δ is admissible by x, y then for ϵ small enough, δ+ϵ is admissible by the same polynomials. Therefore, supremum δ^* over all admissible δ will not be associated to stable x,y. From an optimization point of view, the associated optimization program in (<ref>) has an open feasible region. 
In particular, the set of admissible δ for (<ref>) is of the form (0,δ_n^*) for some δ_n^* that is not admissible by x,y of degree at most n. However, as we will later demonstrate, quasi-admissible δ lie on the boundary of this feasible region. Moreover, quasi-admissible δ naturally serve as analogues of local maxima. We will therefore find quasi-admissible δ and use these to find admissible δ. In Section <ref> we will prove the following theorem relating admissible and quasi-admissible δ. The following is the main theorem of our work and demonstrates the utility of searching for quasi-admissible δ. If δ is quasi-admissible, then all δ̂ < δ are admissible. Moreover, if δ is quasi-admissible by quasi-stable x,y of degree at most n, then any δ̂ < δ is admissible by stable x̂, ŷ of degree at most n. This theorem shows that to find admissible δ, we need only to find quasi-admissible δ. In fact our theorem will show that if δ is quasi-admissible via x,y of degree at most n, then all δ̂ < δ are admissible via x,y of degree at most n as well. In short, quasi-admissible δ serve as upper limit points of admissible δ. Also note that since admissible implies quasi-admissible, Theorem <ref> implies Theorem <ref>. The proof of Theorem <ref> will be deferred until Section <ref>. In fact, we will do more than just prove the theorem. We will given an explicit algoritm for approximating quasi-stable δ̂ by stable δ within any desired tolerance. We will also be able to use the techniques in Section <ref> to prove the following theorem showing that admissible δ are always smaller than some quasi-admissible δ. If δ is admissible by x,y of degree at most n then there is some δ̂ > δ that is quasi-admissible by x̂, ŷ of degree at most n. Moreover, this δ̂ is not admissible by these polynomials. In other words, for any admissible δ, there is a larger δ̂ that is quasi-admissible but not necessarily admissible. Therefore, we can restrict to looking at polynomials x,y,z with at least one root on the imaginary axis.§ LOW DEGREE EXAMPLES In this section we demonstrate that in low-degree settings, the supremum of all admissible δ in (<ref>) is actually a quasi-admissible δ. By looking at quasi-stable polynomials that are not stable, we can greatly reduce our search space and directly find the supremum of the optimization program in (<ref>). For small degrees of x, y, we will algebraically design quasi-stable polynomials that achieve previously known bounds on the Belgian chocolate problem in these degrees. Burke et al. <cit.> showed that for x∈ H^3, y ∈ H^0, any admissible δ must satisfy δ < √(2+√(2))/2 and for x ∈ H^4, y ∈ H^0, δ must satisfy δ < √(10+2√(5))/4. He et al. <cit.> later found x ∈ H^4, y ∈ H^0 admitting δ close to this bound. In fact, these upper bounds on admissible δ are actually quasi-admissible δ that can be obtained in a straightforward manner. For example, suppose we restrict to x of degree 3, y of degree 0. Then for some A, B, C, k ∈, we have x(s) = s^3+ As^2 + Bs + C y(s) = k Instead of trying to find admissible δ using this x and y, we will try to find quasi-admissible δ. That is, we want δ such that z(s) = (s^2-2δ s + 1)x(s) + (s^2-1)y(s) ∈H. In other words, this z(s) can be quasi-stable instead of just stable. Note that z(s) must be of degree 5. We will specify a form for z(s) that ensures it is quasi-stable. Consider the case z(s) = s^5. This is clearly quasi-stable as its only roots are at s = 0. 
To ensure that z(s) = s^5 and equation (<ref>) holds, we require (s^2-2δs+1)(s^3+As^2+Bs+C) + (s^2-1)k = s^5. Equating coefficients gives us the following 5 equations in 5 unknowns: A - 2δ = 0, -2Aδ + B + 1 = 0, A - 2Bδ + C + k = 0, B - 2Cδ = 0, C - k = 0. In fact, ensuring that we have as many equations as unknowns was part of the motivation for letting z(s) = s^5. Solving for A, B, C, k, δ, we find 8δ^4 - 8δ^2 + 1 = 0, A = 2δ, B = 4δ^2-1, C = 4δ^3-2δ, k = 4δ^3-2δ. Taking the largest real root of 8δ^4-8δ^2+1 gives δ = √(2+√(2))/2. Taking A, B, C, k as above yields polynomials x, y, z with real coefficients. One can verify that x is stable (via the Routh-Hurwitz test, for example), while y is of degree 0 and therefore stable. Note that since z(s) = s^5, z is only quasi-stable. Therefore, there are x ∈ H^3, y ∈ H^0 for which √(2+√(2))/2 is quasi-admissible. This immediately gives the limiting value for x ∈ H^3, y ∈ H^0 discovered by Burke et al. <cit.>. Combining this with Theorem <ref>, we have shown the following theorem. For deg(x) ≤ 3, δ = √(2+√(2))/2 is quasi-admissible and all δ < √(2+√(2))/2 are admissible. Next, suppose that x has degree 4 and y has degree 0. For A, k, δ ∈ ℝ, define x(s) = (s^2+2δs+1)(s^2+A), y(s) = k. Note that as long as A ≥ 0, x will be quasi-stable and y will be stable for any k. As above, we want quasi-admissible δ. We let z(s) = s^6, so that z(s) is quasi-stable. Finding A, δ, k amounts to solving (s^2-2δs+1)x(s) + (s^2-1)y(s) = z(s) ⇔ (s^2-2δs+1)(s^2+2δs+1)(s^2+A) + (s^2-1)k = s^6 ⇔ s^6 + (A-4δ^2+2)s^4 + (-4Aδ^2+2A+k+1)s^2 + (A-k) = s^6. Note that the (s^2+2δs+1) term in x is used to ensure that the left-hand side has zero coefficients in its odd degree terms. Since (s^2+2δs+1) is stable, it does not affect the stability of x. Equating coefficients and manipulating, we get the following equations: 16δ^4-20δ^2+5 = 0, A-4δ^2+2 = 0, k-A = 0. Taking the largest real root of 16δ^4-20δ^2+5 gives δ = √(10+2√(5))/4. For this δ one can easily see that A = 4δ^2-2 ≥ 0, so x is quasi-stable, as are y and z by design. Once again, we were able to easily achieve the limiting value discovered by Burke et al. <cit.> discussed in Section <ref> by searching for quasi-admissible δ. Combining this with Theorem <ref>, we obtain the following theorem. For deg(x) ≤ 4, δ = √(10+2√(5))/4 is quasi-admissible and all δ < √(10+2√(5))/4 are admissible. The examples above demonstrate how, by considering quasi-stable x, y and z, we can find quasi-admissible δ that are limiting values of admissible δ. Moreover, the quasi-admissible δ above were found by solving relatively simple algebraic equations instead of having to perform optimization over the space of stable x and y. § ALGEBRAIC SPECIFICATION The observations in Section <ref> and Section <ref> and the examples in Section <ref> suggest the following approach, which we refer to as algebraic specification. This method will be used to find the largest known values of δ for any given degree. We wish to construct quasi-stable x(s), y(s), z(s) with repeated roots on the imaginary axis satisfying (<ref>). For example, we may wish to find polynomials of the following form: x(s) = (s^2+2δs+1)(s^2+A_1)^4(s^2+A_2)^2(s^2+A_3)^2(s^2+A_4), y(s) = k(s^2+B_1)^3(s^2+B_2)^2, z(s) = s^14(s^2+C_1)^2(s^2+C_2)(s^2+C_3). We refer to such an arrangement of x, y, z as an algebraic configuration. As long as δ > 0, the parameters {A_i}_{i=1}^4, {B_i}_{i=1}^2, and {C_i}_{i=1}^3 are all nonnegative, and k is real, x(s), y(s), z(s) will be real, quasi-stable polynomials. 
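Searches of this kind are easy to reproduce. The following sketch is ours and uses sympy's symbolic solver rather than a dedicated homotopy package (the paper uses PHCPack for the larger systems); it assembles the degree-4 specification analyzed just above and recovers δ = √(10+2√(5))/4 by equating coefficients.

import sympy as sp

s, d, A, k = sp.symbols('s delta A k')

# x = (s^2 + 2*delta*s + 1)(s^2 + A), y = k, z = s^6
x = (s**2 + 2*d*s + 1) * (s**2 + A)
y = k
z = s**6

resid = sp.expand((s**2 - 2*d*s + 1) * x + (s**2 - 1) * y - z)
eqs = [resid.coeff(s, j) for j in range(6)]   # all lower coefficients must vanish

sols = sp.solve(eqs, [A, k, d], dict=True)
vals = [complex(r[d]) for r in sols]
print(max(v.real for v in vals if abs(v.imag) < 1e-12))
# 0.9510565..., i.e. sqrt(10 + 2*sqrt(5)) / 4; the solver also returns A = k >= 0

The same pattern extends to any configuration: grow the parameter tuple until the number of unknowns matches the number of coefficient equations, which is where a numerical polynomial-system solver such as PHCPack takes over.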
We then wish to solve (s^2-2δs+1)x(s) + (s^2-1)y(s) = z(s). Recall that the (s^2+2δs+1) factor in x(s) is present to ensure that the left-hand side has only even degree terms, as the right-hand side clearly has only even degree terms. Expanding (<ref>) and equating coefficients, we get 11 equations in 11 unknowns. Using PHCPack <cit.> to solve these equations and selecting the solution with the largest δ such that the A_i, B_i, C_i ≥ 0, we get the following solution, rounded to seven decimal places: δ = 0.9808348, A_1 = 1.1856917, A_2 = 6.6228807, A_3 = 0.3090555, A_4 = 0.2292503, B_1 = 0.5430391, B_2 = 0.2458118, C_1 = 4.4038385, C_2 = 0.7163490, C_3 = 7.4637156, k = 196.1845537. The actual solution has δ = 0.980834821202…. This is the largest δ we have found to date using this method. By Theorem <ref>, we conclude the following theorem. All δ ≤ 0.9808348 are admissible. In general, we can form an algebraic configuration for x(s), y(s), z(s) as x(s) = (s^2+2δs+1) ∏_{i=1}^{m_1} (s^2+A_i)^{j_i}, y(s) = k ∏_{i=1}^{m_2} (s^2+B_i)^{k_i}, z(s) = s^c ∏_{i=1}^{m_3} (s^2+C_i)^{ℓ_i}. For fixed degrees of x, y, note that there are only finitely many such configurations. Instead of performing optimization over the non-convex feasible region of the Belgian chocolate problem, we instead tackle the combinatorial optimization problem of maximizing δ among the possible configurations. Note that c in (<ref>) is whatever exponent is needed to make deg(z) = deg(x)+2. We want x, y, z to satisfy (<ref>). Expanding and equating coefficients, we get equations in the undetermined variables above. As long as the number of unknown variables equals the number of equations, we can solve and look for real solutions with δ and all A_i, B_i, C_i nonnegative. Not all quasi-stable polynomials can be formed via algebraic specification. In particular, algebraic specification forces all the roots of y, z and all but two of the roots of x to lie on the imaginary axis. However, more general quasi-stable x, y, z could have some roots with negative real part and some with zero real part. This makes the possible search space infinite and, as discussed in Section <ref>, empirically does not result in larger δ. Further evidence for this statement will be given in Section <ref>. While the method of algebraic specification has demonstrable effectiveness, it becomes computationally infeasible to solve these general equations for very large n. In particular, the space of possible algebraic configurations of x, y, z grows almost exponentially with the degree of the polynomials. For large n, an exhaustive search over the space of possible configurations becomes infeasible, especially as the equations become more difficult to solve. We will describe an algebraic configuration via the shorthand [j_1,…,j_{m_1}],[k_1,…,k_{m_2}],[ℓ_1,…,ℓ_{m_3}]. This represents the configuration described in (<ref>), (<ref>), (<ref>) above. In particular, if the second term of (<ref>) is empty then y = k, while if the third term of (<ref>) is empty then z is a power of s. For example, the following configuration is given by [3,1],[2],[1]: x(s) = (s^2+2δs+1)(s^2+A_1)^3(s^2+A_2), y(s) = k(s^2+B_1)^2, z(s) = s^10(s^2+C_1). A table containing the largest quasi-admissible δ we have found and their associated algebraic configurations for given degrees of x is given below. Note that for each entry of the table, given deg(x) = n and quasi-admissible δ, Theorem <ref> implies that all δ̂ < δ are admissible with x, y of degree at most n. § APPROXIMATING QUASI-ADMISSIBLE Δ BY ADMISSIBLE Δ In this section we will prove Theorem <ref>. 
Our proof will be algorithmic in nature. We will describe an algorithm that, given δ that is quasi-admissible by quasi-stable polynomials x, y, will produce for any δ̂ < δ stable polynomials x̂, ŷ admitting δ̂. Moreover, given deg(x) = n, we will ensure that deg(x̂) ≤ n. Suppose that for a given δ there are x, y, z ∈ H̄ with deg(x) ≥ deg(y) satisfying (<ref>). Let n = deg(x). Define R(s) := (s^2-1)y(s)/z(s). Note that for any s ∈ ℂ, R(s) = 0 iff (s^2-1)y(s) = 0, R(s) = 1 iff (s^2-2δs+1)x(s) = 0, and R(s) is infinite iff z(s) = 0. Since x, y, z are quasi-stable, we know that for Re(s) > 0, R(s) = 1 iff s = δ ± i√(1-δ^2) and R(s) = 0 iff s = 1. All other points where R(s) is 0, 1, or infinite satisfy Re(s) ≤ 0. Precomposing R(s) with the fractional linear transformation f(s) = (1+s)/(1-s), we get the complex function D(s) := R((1+s)/(1-s)). Note that this fractional linear transformation maps the unit circle {s : |s| = 1} to the imaginary axis {s : Re(s) = 0}. Also note that f^{-1}(1) = 0 and f^{-1}(δ ± i√(1-δ^2)) = ± it, where t = √(1-δ)/√(1+δ). Therefore, D(s) satisfies the following properties: * For |s| < 1, D(s) = 0 iff s = 0. * For |s| < 1, D(s) = 1 iff s = ± it. * |D(s)| < ∞ for |s| < 1. Note that the last property holds by the quasi-stability of z(s). Since z(s) = 0 implies Re(s) ≤ 0, D(s) = ∞ implies |s| ≥ 1. In particular, the roots of x, y, z that have zero real part now correspond to points with |s| = 1 such that D(s) = 1, 0, ∞ respectively. For any ϵ > 0, let D_ϵ(s) := D(s/(1+ϵ)). D_ϵ(s) then satisfies: * For |s| ≤ 1, D_ϵ(s) = 0 iff s = 0. * For |s| ≤ 1, D_ϵ(s) = 1 iff s = ± i(1+ϵ)t. * |D_ϵ(s)| < ∞ for |s| ≤ 1. Precomposing with the inverse fractional linear transformation f^{-1}(s) = (s-1)/(s+1), we get R_ϵ(s) := D_ϵ((s-1)/(s+1)). By the properties of D_ϵ(s) above, we find that R_ϵ(s) satisfies: * For Re(s) ≥ 0, R_ϵ(s) = 0 iff s = 1. * For Re(s) ≥ 0, R_ϵ(s) = 1 iff s = δ_ϵ ± i√(1-δ_ϵ^2), where δ_ϵ = (1-(1+ϵ)^2t^2)/(1+(1+ϵ)^2t^2). * For Re(s) ≥ 0, |R_ϵ(s)| < ∞. Moreover, R_ϵ(s) ≠ 0, 1, ∞ for any s with Re(s) = 0. We can rewrite R_ϵ(s) as R_ϵ(s) = p(s)/q(s). Note that by the first property of R_ϵ, the only root of p(s) in {s : Re(s) ≥ 0} is at s = 1. By properties of f(s) and f^{-1}(s), one can show that p(-1) = 0. This follows from the fact that R(-1) = 0, which implies that lim_{s→∞} D(s) = lim_{s→∞} D_ϵ(s) = 0, and therefore R_ϵ(-1) = 0. Therefore, p(s) = (s^2-1)y_ϵ(s), where y_ϵ(s) has no roots in {s : Re(s) ≥ 0}. By the second property of R_ϵ, the only roots of q-p in {s : Re(s) ≥ 0} are at δ_ϵ ± i√(1-δ_ϵ^2). Therefore, q-p = (s^2-2δ_ϵ s+1)x_ϵ(s), where x_ϵ(s) has no roots in {s : Re(s) ≥ 0}. Finally, by the third property of R_ϵ we find that z_ϵ(s) = (s^2-2δ_ϵ s+1)x_ϵ(s) + (s^2-1)y_ϵ(s) is stable. Moreover, basic properties of fractional linear transformations show that if deg(x) = n ≥ deg(y) = m, then x_ϵ, y_ϵ are both of degree n. Therefore, x_ϵ, y_ϵ, z_ϵ are stable polynomials satisfying (<ref>) for δ_ϵ. For any δ̂ < δ, we can take ϵ such that δ_ϵ = δ̂, proving the desired result. Note that if we start with δ admissible by stable x, y, z of degree at most n, then we can do the reverse of this procedure to perturb x, y, z to quasi-stable x̂, ŷ, ẑ. By the reverse of the arguments above, x̂, ŷ, ẑ will be quasi-stable but at least one of these polynomials will not be stable. These polynomials will be associated to some quasi-admissible δ̂ > δ. This gives the proof of Theorem <ref>. The proof above describes the following algorithm for perturbing quasi-stable x, y, z satisfying (<ref>) to obtain stable x̂, ŷ, ẑ satisfying (<ref>). 
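The formal statement of the algorithm follows after a brief numerical illustration. The only analytic ingredient it needs is the map ϵ ↦ δ_ϵ derived in the proof; a quick check of its behavior (the sketch is ours), instantiated with the degree-4 value δ = √(10+2√(5))/4 used in the worked example below:

import math

def delta_eps(delta, eps):
    # delta_eps = (1 - (1+eps)^2 t^2) / (1 + (1+eps)^2 t^2),  t^2 = (1-delta)/(1+delta)
    u = (1 + eps)**2 * (1 - delta) / (1 + delta)
    return (1 - u) / (1 + u)

delta = math.sqrt(10 + 2 * math.sqrt(5)) / 4    # 0.9510565...
for eps in (0.1, 0.01, 0.001):
    print(eps, delta_eps(delta, eps))           # climbs back toward delta as eps -> 0

With ϵ = 0.01 this gives δ_ϵ ≈ 0.950097, matching the admissible value reached at the end of the worked example below; given a target δ̂ < δ, the required ϵ can be recovered by inverting the formula.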
Input: Real numbers δ, ϵ > 0 and real polynomials x, y, z ∈ H̄ satisfying (<ref>). Output: δ̂ and real polynomials x̂, ŷ, ẑ ∈ H satisfying (<ref>). * Let R(s) = (s^2-1)y(s)/z(s). For ϵ > 0, compute R_ϵ(s) = R(((2+ϵ)s+ϵ)/(ϵs+(2+ϵ))). * Reduce R_ϵ(s) to lowest terms. Suppose that in lowest terms R_ϵ(s) = p(s)/q(s). * Factor p(s) as (s^2-1)ŷ(s) and factor q(s)-p(s) as (s^2-2δ̂s+1)x̂(s). Let ẑ(s) = q(s). To further illustrate the method of algebraic specification and this algorithm for perturbing quasi-stable polynomials into stable ones, we give the following detailed example. Say we are interested in x of degree 4. We may then give the following algebraic specification of x, y, z discussed in Section <ref>. In the shorthand of (<ref>), this is the configuration [1],[],[]: x(s) = (s^2+2δs+1)(s^2+A), y(s) = k, z(s) = s^6. As in Section <ref>, we solve (s^2-2δs+1)x(s) + (s^2-1)y(s) = z(s). This implies that δ, A, k satisfy 16δ^4-20δ^2+5 = 0, A = 4δ^2-2, k = 4δ^2-2. Taking the largest root of 16δ^4-20δ^2+5 gives δ = √(10+2√(5))/4 and A = k = (√(5)+1)/2. Given numerically to six decimal places, δ = 0.951057. Computing R(s) using exact arithmetic, we get R(s) = (s^2-1)y(s)/z(s) = (s^2-1)(√(5)+1)/(2s^6). We then use the fractional linear transformation s ↦ (1+s)/(1-s) to get: D(s) = R((1+s)/(1-s)) = 2s(√(5)+1)(s-1)^4/(s^6+6s^5+15s^4+20s^3+15s^2+6s+1). One can verify that D(s) can equal 1 on the boundary of the unit circle, so we push these points away from the boundary (with ϵ = 0.01) by defining D_ϵ(s) = D(s/1.01) = 6.40805s(0.99010s-1)^4/(0.942045s^6+…+5.94054s+1). While we gave an approximate decimal form above for brevity, this computation can and should be done with exact arithmetic. We let R_ϵ(s) = D_ϵ((s-1)/(s+1)). Writing R_ϵ(s) as p(s)/q(s) in lowest terms, we get: p(s) = 64080.55401(0.990990s+199.00990)^4(s^2-1), q(s) = 0.62122×10^14 s^6 + … + 0.94204. As proved above, p(s) will equal (s^2-1)ŷ(s). Dividing p(s) by the s^2-1 factor, we get a polynomial ŷ(s) whose only root is at s ≈ -201. Therefore ŷ(s) is stable. The denominator ẑ(s) = q(s) is easily verified to have only roots with negative real part. Finally, the polynomial q(s)-p(s) will equal (s^2-2δ̂s+1)x̂(s). Finding its roots, one can show that q(s)-p(s) only has roots with negative real part, except for the roots at s = 0.950097 ± 0.311954i. These roots are of the form δ̂ ± i√(1-δ̂^2) for δ̂ = 0.950097. Therefore δ̂ = 0.950097 is admissible via the stable polynomials x̂, ŷ, ẑ. While we have decreased δ slightly, we have achieved stability in the process. By decreasing ϵ, we can get arbitrarily close to our original δ. § OPTIMALITY OF ALGEBRAIC SPECIFICATION Not only does our method of algebraic specification find larger δ than have been found before; one can also view previous approaches to the Belgian chocolate problem as approximating algebraic specification. In particular, previously discovered admissible δ can be seen as approximating some quasi-admissible δ' that can be found via algebraic specification. For example, in <cit.>, Chang and Sahinidis found that δ = 0.9739744 is admissible by x(s) = s^10 + 1.97351109136261s^9 + 5.49402092964662s^8 + 8.78344232801755s^7 + 11.67256448604672s^6 + 13.95449016040116s^5 + 11.89912895529042s^4 + 9.19112429409894s^3 + 5.75248874640322s^2 + 2.03055901420484s + 1.03326203778346, y(s) = 0.00066128189295s^5 + 3.611364710425s^4 + 0.03394722108511s^3 + 3.86358782861648s^2 + 0.0178174691792s + 1.03326203778319. The roots of x, y, z were discussed in Section <ref>. As previously noted, x, y, z are close to polynomials with repeated roots on the imaginary axis. 
Examining the roots of x, y, z, one can see that x, y, z are tending towards quasi-stable polynomials x', y', z' that have the same root structure as the algebraic configuration [3,1],[2],[1]. In other words, we will consider the following quasi-stable polynomials: x'(s) = (s^2+2δ's+1)(s^2+A_1)^3(s^2+A_2), y'(s) = k(s^2+B)^2, z'(s) = s^10(s^2+C). Solving for the free parameters and finding the largest real δ' such that A_1, A_2, B, C ≥ 0, we obtain the following values, given to seven decimal places: δ' = 0.9744993, A_1 = 1.3010813, A_2 = 0.4475424, B = 0.5345301, C = 2.5521908, k = 3.4498736. One can easily verify that, taking these values of the parameters, the roots of x, y, z are close to the roots of x', y', z'. These algebraically designed x', y', z' possess the root structure that x, y, z are tending towards. Moreover, the x', y', z' show that δ' is quasi-admissible, and their associated δ' gives an upper bound for the δ found by Chang and Sahinidis. This demonstrates that the stable polynomials found by Chang and Sahinidis are tending towards the quasi-stable ones listed above. Moreover, by Theorem <ref> all δ < 0.9744993 are admissible. In fact, many examples of admissible δ given in previous work approximate quasi-admissible δ found via algebraic specification. This includes the previously mentioned examples in <cit.> and all admissible values of δ given by Chang and Sahinidis in <cit.>. We further conjecture that for all admissible δ, there is a quasi-admissible δ' > δ that can be achieved by algebraically specified x, y, z. More formally, if we fix x, y to be of degree at most n, let δ_n^* denote the supremum of the optimization problem in (<ref>). Note that, as discussed in Section <ref>, δ_n^* is not admissible by x, y of degree at most n. The empirical evidence given in this section and in Sections <ref> and <ref> suggests that this δ_n^* is quasi-admissible and can be obtained through algebraic specification. This leads to the following conjecture. For all n, δ_n^* is quasi-admissible by some x, y, z that are formed via algebraic specification. § CONCLUSION The Belgian chocolate problem has remained resilient to direct global optimization techniques for over a decade. Most prior work attempts to maximize δ subject to the stability constraints by applying iterative methods to complicated non-convex regions. By contrast, we find the largest known value of δ in a more direct fashion. We do this by reducing our problem to combinatorial optimization over a finite set of algebraically constructed limit points. Our key algebraic insight is that quasi-admissible δ are limiting values of the admissible δ. In fact, previous methods actually find admissible δ that approach quasi-admissible δ. We give the method of algebraic specification to design quasi-stable polynomials and directly find these quasi-admissible δ by solving a system of equations. We then show that we can perturb these quasi-stable polynomials to obtain stable polynomials with admissible δ that are arbitrarily close to the quasi-admissible δ. We show that this method recovers the largest admissible δ known to date and gives a much better understanding of the underlying landscape of admissible and quasi-admissible δ. We conjecture that for all n, the supremum of all δ admissible by x, y of degree at most n is a quasi-admissible δ that can be found through our method of algebraic specification. § ACKNOWLEDGMENTS The authors would like to thank Bob Barmish for his valuable feedback, discussions, and advice. 
The first author was partially supported by the National Science Foundation grant DMS-1502553. The second author was partially supported by the Simons Foundation grant MSN179747.
A Conservation Law Method in Optimization Bin Shi December 30, 2023 ========================================= We propose algorithms that find local minima in nonconvex optimization, and that obtain global minima to some degree, by simulating Newton's second law without friction. With the key observation that the velocity is observable and controllable during the motion, the algorithms simulate Newton's second law without friction based on the symplectic Euler scheme. From an intuitive analysis of the analytical solution, we give a theoretical analysis of the high-speed convergence of the proposed algorithm. Finally, we present experiments on strongly convex, non-strongly convex and nonconvex functions in high dimensions. § INTRODUCTION Non-convex optimization is the dominating algorithmic technique behind many state-of-the-art results in machine learning, computer vision, natural language processing and reinforcement learning. Finding a global minimizer of a non-convex optimization problem is NP-hard. Instead, local search methods, which build on techniques from convex optimization, become increasingly important. Formally, the problem of unconstrained optimization is stated in general terms as that of finding the minimum value that a function attains over Euclidean space, i.e., min_{x ∈ ℝ^n} f(x). Numerous methods and algorithms have been proposed to solve the minimization problem, notably gradient methods, Newton's methods, the trust-region method, the ellipsoid method and interior-point methods <cit.>. First-order optimization algorithms are the most popular algorithms to perform optimization and by far the most common way to optimize neural networks, since obtaining second-order information is supremely expensive. The simplest and earliest method for minimizing a convex function f is the gradient method, i.e., x_{k+1} = x_k - h∇f(x_k), with any initial point x_0. There are two significant improvements of the gradient method that speed up the convergence. One is the momentum method, known as the Polyak heavy ball method and first proposed in <cit.>, i.e., x_{k+1} = x_k - h∇f(x_k) + γ_k(x_k - x_{k-1}), with any initial point x_0. Let κ be the condition number, defined here as the ratio of the smallest to the largest eigenvalue of the Hessian at the local minimum. The momentum method speeds up the local convergence rate from 1-2κ to 1-2√(κ). The other is the celebrated Nesterov accelerated gradient method, first proposed in <cit.> with an improved version in <cit.>, i.e., y_{k+1} = x_k - (1/L)∇f(x_k), x_{k+1} = y_{k+1} + γ_k(y_{k+1} - y_k), with any initial point x_0 = y_0, where the parameters are set as γ_k = α_k(1-α_k)/(α_k^2 + α_{k+1}) and α_{k+1}^2 = (1-α_{k+1})α_k^2 + α_{k+1}κ. The scheme devised by Nesterov not only possesses the local convergence property for strongly convex functions, but is also a globally convergent scheme: from 1-2κ to 1-√(κ) for strongly convex functions, and from 𝒪(1/n) to 𝒪(1/n^2) for non-strongly convex functions. Although Nesterov's accelerated gradient method rests on a complex algebraic trick, the three methods above can be considered via their continuous-time limits <cit.> to obtain physical intuition. In other words, the three methods can be regarded as discrete schemes for solving an ODE. The gradient method (<ref>) corresponds to ẋ = -∇f(x), x(0) = x_0, and the momentum method and Nesterov accelerated gradient method correspond to ẍ + γ_tẋ + ∇f(x) = 0, x(0) = x_0, ẋ(0) = 0, the difference between them being the setting of the friction parameter γ_t. 
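To fix ideas, here is a minimal sketch (ours) of the three discrete schemes above on an ill-conditioned quadratic; for simplicity it uses a constant momentum coefficient γ in place of the κ-dependent schedule for γ_k given above.

import numpy as np

grad = lambda x: np.array([1.0, 0.01]) * x      # f(x) = (x_1^2 + 0.01 x_2^2) / 2

def gradient_descent(x, h=1.0, iters=500):
    for _ in range(iters):
        x = x - h * grad(x)
    return x

def heavy_ball(x, h=1.0, gamma=0.8, iters=500):
    x_prev = x
    for _ in range(iters):
        x, x_prev = x - h * grad(x) + gamma * (x - x_prev), x
    return x

def nesterov(x, L=1.0, gamma=0.8, iters=500):
    y_prev = x
    for _ in range(iters):
        y = x - grad(x) / L
        x, y_prev = y + gamma * (y - y_prev), y
    return x

x0 = np.array([1000.0, 1000.0])
for method in (gradient_descent, heavy_ball, nesterov):
    print(method.__name__, method(x0.copy()))   # momentum variants shrink x_2 far faster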
There are two significant intuitive physical meanings in the two ODEs (<ref>) and (<ref>). The ODE (<ref>) is the governing equation for potential flow, corresponding to the phenomenon of water falling from a height along the gradient direction. Its infinitesimal generalization corresponds to heat conduction in nature. Hence, the gradient method (<ref>) can be viewed as a computer implementation simulating this natural phenomenon. The ODE (<ref>) is the governing equation for the motion of a heavy ball with friction. Its infinitesimal generalization corresponds to chord vibration in nature. Hence, the momentum method (<ref>) and the Nesterov accelerated gradient method (<ref>) can be viewed as updated implementations that tune the friction parameter γ_t. Furthermore, we can view the three methods above as ways of dissipating energy, implemented in the computer. The unknown objective function in the black-box model can be viewed as the potential energy. Hence, the initial energy spans from the potential value f(x_0) at an arbitrary position x_0 down to the minimum value f(x^⋆) at the position x^⋆. The total energy is the sum of the kinetic energy and the potential energy. The key observation in this paper is that the kinetic energy, or the velocity, is an observable and controllable variable in the optimization process. In other words, we can compare the velocities at every step in order to look for a local minimum in the computational process, or reset them to zero in order to artificially dissipate energy. Let us first introduce the governing equation of motion in a conservative force field, which we use in this paper for comparison: ẍ = -∇f(x), x(0) = x_0, ẋ(0) = 0. The concept of phase space, developed in the late 19th century, usually consists of all possible values of the position and momentum variables. The governing equation of motion in a conservative force field (<ref>) can be rewritten as ẋ = v, v̇ = -∇f(x), x(0) = x_0, v(0) = 0. In this paper, we implement our discrete strategy by exploiting the observability and controllability of the velocity, or the kinetic energy, and by artificially dissipating energy, in two directions as below: * To look for local minima of a non-convex function, or global minima of a convex function, the kinetic energy, i.e., the norm of the velocity, is compared with that of the previous step; the velocity is reset to zero as soon as the kinetic energy stops increasing. * To look for global minima of a non-convex function, a large initial velocity v(0) = v_0 is applied at an arbitrary initial position x(0) = x_0. A ball is simulated with (<ref>), and the local maxima of the kinetic energy are recorded in order to discern how many local minima exist along the trajectory. The strategy above is then applied to find the minimum of all these local minima. To implement our idea in practice, we utilize a numerical scheme for Hamiltonian systems, the symplectic Euler method. We remark that a more accurate version is the Störmer-Verlet method. §.§ An Analytical Demonstration For Intuition Consider a simple 1-D function with ill-conditioned Hessian, f(x) = (1/200)x^2, with the initial position x_0 = 1000. The solution and the function value along the solution for (<ref>) are given by x(t) = x_0e^{-t/100}, f(x(t)) = (1/200)x_0^2e^{-t/50}. 
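Before continuing the closed-form comparison, here is a minimal sketch (ours) of the strategy in the first bullet above: the symplectic Euler discretization of the phase-space system (<ref>), with the velocity reset to zero once the kinetic energy stops growing, applied to this same 1-D example f(x) = x^2/200.

import numpy as np

def artificially_dissipating_energy(grad, x0, h=0.1, tol=1e-8, max_iter=10**6):
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(max_iter):
        v_new = v - h * grad(x)                 # symplectic Euler: momentum first,
        x_new = x + h * v_new                   # then position with the new momentum
        if np.linalg.norm(v_new) < np.linalg.norm(v):
            v = np.zeros_like(x)                # kinetic energy peaked: dissipate it
            if np.linalg.norm(grad(x)) < tol:
                break                           # flat gradient: a (local) minimum
        else:
            x, v = x_new, v_new
    return x

grad = lambda x: x / 100.0                      # f(x) = x^2 / 200
print(artificially_dissipating_energy(grad, [1000.0]))   # converges close to 0

Each reset lands near the bottom of the current valley, so the iterate contracts rapidly toward x^⋆ = 0, in line with the iteration estimate discussed below.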
The solution and the function value along the solution for (<ref>) with the optimal friction parameter γ_t = 1/5 are x(t) = x_0(1 + t/10)e^{-t/10}, f(x(t)) = (1/200)x_0^2(1 + t/10)^2e^{-t/5}. The solution and the function value along the solution for (<ref>) are x(t) = x_0cos(t/10) and v(t) = -(x_0/10)sin(t/10), f(x(t)) = (1/200)x_0^2cos^2(t/10), stopped at the point where |v| attains its maximum. Combining (<ref>), (<ref>) and (<ref>), with the latter stopped at the point where |v| attains its maximum, the function values approaching f(x^⋆) are shown below. From the analytical solution for a locally convex quadratic function with maximum eigenvalue L and minimum eigenvalue μ, and with the step size set in general to 1/√(L) for the momentum method and the Nesterov accelerated gradient method, a simple estimate for the number of iterations is approximately n ∼ (π/2)√(L/μ). Hence, the number of iterations n is proportional to the reciprocal of the square root of the minimal eigenvalue, 1/√(μ), which is essentially different from the convergence rate of the gradient method and the momentum method. The rest of the paper is organized as follows. Section <ref> summarizes relevant existing work. In Section <ref>, we propose the artificially dissipating energy algorithm, the energy conservation algorithm and the combined algorithm based on the symplectic Euler scheme, and remark on a second-order scheme, the Störmer-Verlet scheme. In Section <ref>, we present a local theoretical analysis of the high-speed convergence. In Section <ref>, we present experimental results for the proposed algorithms on strongly convex, non-strongly convex and nonconvex functions in high dimensions. Section <ref> offers some perspectives on the proposed algorithms and two adventurous ideas based on the evolution of Newton's second law: fluid and quantum. § RELATED WORK The history of the gradient method for convex optimization can be traced back to the time of Euler and Lagrange. However, since it is relatively cheap to calculate only first-order information, this simplest and earliest method is still active in machine learning and nonconvex optimization, as in the recent work <cit.>. The natural speedup algorithms are the momentum method, first proposed in <cit.>, and the Nesterov accelerated gradient method, first proposed in <cit.> with an improved version <cit.>. An acceleration algorithm similar to the Nesterov accelerated gradient method, named FISTA, was designed to solve composite optimization problems <cit.>. A related comprehensive work is <cit.>. The original momentum method, known as the Polyak heavy ball method, was studied from the ODE perspective in <cit.>, which contains extremely rich physical intuition and mathematical theory. An extremely important application to machine learning is backpropagation learning with momentum <cit.>. Based on the ODE viewpoint, much understanding and many applications of the momentum method and the Nesterov accelerated gradient method have been proposed. In <cit.>, a well-designed random initialization with a momentum parameter algorithm is proposed to train both DNNs and RNNs. A seminal deep insight from the ODE viewpoint for understanding the intuition behind the Nesterov scheme is proposed in <cit.>. 
An understanding of the momentum method from the variational perspective is proposed in <cit.>, and an understanding from Lyapunov analysis is proposed in <cit.>. From the stability theory of ODEs, it is shown in <cit.> that the gradient method converges to a local minimum for almost every initial point. Analyzing and designing iterative optimization algorithms built on integral quadratic constraints from robust control theory is proposed in <cit.>. Actually, the "high momentum" phenomenon was first observed in <cit.> for a restarting adaptive accelerating algorithm, and a restarting scheme is also proposed by <cit.>. However, both works above utilize the restarting scheme as an auxiliary tool to accelerate algorithms based on friction. With the concept of phase space in mechanics, we observe that the kinetic energy, or velocity, is a controllable and utilizable parameter with which to find the local minima. Without the friction term, we can still find the local minima using the velocity parameter alone. Based on this view, the proposed algorithm is very easy to put into practice and admits a theoretical analysis. Meanwhile, the idea can be generalized to nonconvex optimization to detect local minima along the trajectory of the particle. Some recent work on the momentum method for optimization algorithms based on the Bregman divergence and variational methods has appeared <cit.>. For non-strongly convex optimization, a surprising and extremely mysterious scheme, the Nesterov accelerated gradient method, was first proposed in <cit.>, and some recent works <cit.> introduce a geometrical perspective on it. The most popular technique for studying the accelerated method comes from the ODE viewpoint as well as the mechanical background. The differential equation is a traditional subfield of mathematics. However, with the development of machine learning, its study can be seen as an extension of classical mathematical fields such as dynamical systems and differential equations among others, but with the important addition of the notion of computational efficiency. Some recent work <cit.>
The popularity of digital currencies, especially cryptocurrencies, has been continuously growing since the appearance of Bitcoin. Bitcoin's security lies in a proof-of-work scheme, which requires high computational resources at the miners. Despite advances in mobile technology, existing cryptocurrencies cannot be maintained by mobile devices due to their low processing capabilities. Mobile devices can only accommodate mobile applications (wallets) that allow users to exchange credits of cryptocurrencies. In this work, we propose LocalCoin, an alternative cryptocurrency that requires minimal computational resources, produces low data traffic and works with off-the-shelf mobile devices. LocalCoin replaces the computational hardness that is at the root of Bitcoin's security with the social hardness of ensuring that all witnesses to a transaction are colluders. LocalCoin features (i) a lightweight proof-of-work scheme and (ii) a distributed blockchain. We analyze LocalCoin against double spending under passive and active attacks and prove that, under the assumption of a sufficient number of users and properly selected tuning parameters, the probability of double spending is close to zero. Extensive simulations on real mobility traces, realistic urban settings, and random geometric graphs show that the probability of success of one transaction converges to 1 and the probability of success of a double spending attempt converges to 0. P2P, Ad-hoc networks, cryptocurrency LocalCoin: An Ad-hoc Payment Scheme for Areas with High Connectivity Dimitris Chatzopoulos, Sujit Gujar, Boi Faltings, and Pan Hui Dimitris Chatzopoulos and Pan Hui are with The Hong Kong University of Science and Technology, Hong Kong. E-mail: {dcab,panhui}@cse.ust.hk Sujit Gujar is with the International Institute of Information Technology (IIIT), Hyderabad, India. E-mail: [email protected] Boi Faltings is with the Ecole Polytechnique Federale de Lausanne (EPFL), Switzerland. E-mail: [email protected] December 30, 2023 ======================================================================================================================================= § INTRODUCTION Bitcoin, proposed by Nakamoto in 2009, is the most popular cryptocurrency <cit.>. Numerous cryptocurrencies have been proposed thereafter and have attracted the attention of both the financial and technological industries as well as academia <cit.> <cit.> <cit.>. All the digital currencies that were proposed before Bitcoin follow the client/server model, with transactions possible only between the currency provider and the users (PayPal, VISA, MasterCard, etc.). Bitcoin, in contrast, works in a decentralised manner. Decentralised cryptocurrencies have to deal with three main challenges: (i) proof of ownership: users should be able to prove they have the amount of money they claim to have; (ii) double spending avoidance: a defense mechanism against double spending (users should not be able to spend the same money more than once); 
and (iii) incentives for its stakeholders. Common characteristics of all the existing cryptocurrencies are that (i) they are Internet-based, (ii) they use computationally expensive techniques to deal with double spending attacks and (iii) they require large amounts of data storage. To become part of the Bitcoin peer network, anyone can contribute their resources and work as a miner. Bitcoin, as well as other less popular cryptocurrencies, requires its miners to employ devices with high computational capabilities and to be interconnected via the Internet. These requirements play a vital role in the quality and the guarantees of the protocols, as well as in the miners' revenue. All the transactions are stored in a public ledger named the blockchain, in sets of blocks that are created by the miners <cit.>. Bitcoin requires miners to solve cryptographic puzzles, which can only be solved by brute-force SHA-256 hashing, in order to generate new blocks for the blockchain <cit.> <cit.>. Each block has a size of 1 MB, and two consecutive blocks are created with a 10-minute time difference, on average. Miners earn bitcoins whenever they successfully mine a new block and put it in the blockchain. The transactions that are included in a mined block are selected by the miner who successfully mined it. The probability of a miner mining a block is proportional to the portion of the computational resources of the Bitcoin network he controls. The probability of a transaction being included in a mined block is proportional to the transaction fees the miner will earn if the block is mined. This gives lower priority to small transactions. Cryptocurrencies are inferior to conventional currencies in that users cannot exchange money without an Internet connection. Mobile wallets are mobile applications that allow anyone who owns credits of a cryptocurrency to create transactions, in a similar way to the mobile applications that are offered by banks. The role of mobile devices in such scenarios is limited to the submission of the transaction to the authority that maintains the currency, which can be either a decentralised network (Bitcoin) or a mobile phone-based money transfer service (M-Pesa <cit.>). Mobile devices are practically unable to partake as peers in any decentralised cryptocurrency, because of (i) their lower processing capabilities compared to conventional hardware specialised for mining <cit.> and (ii) their unstable connectivity to the Internet compared to ordinary wire-line access protocols. For example, consider a university campus, which might be spread across an area of some km^2 with several thousands of users with smartphones. These smartphones can be used to deploy a Bitcoin-like currency. However, these devices cannot compete with Bitcoin miners in the block creation process, and this gives their owners no incentive to employ them. Despite that, with the widespread usage of smartphones, which are equipped with technologies such as WiFi Direct, NFC and so on, such users can be easily interconnected. The problem that we address in this paper is whether we can develop a cryptocurrency, called LocalCoin, that requires neither an Internet connection nor devices with high computational capabilities and is based on the connectivity between users that opportunistically exchange messages. We imagine LocalCoin complementing global-scale cryptocurrencies by handling small transactions without the huge mining expense of Bitcoin-like cryptocurrencies. 
LocalCoin wallets can be charged from Bitcoin wallets through ATM-like nodes in the LocalCoin network. §.§ Our Contributions We propose LocalCoin, a scheme that replaces the computational hardness that is at the root of Bitcoin's security with the social hardness of ensuring that all witnesses to a transaction are colluders (users assisting the malicious user to double spend). Where computational hardness provides a weakest-link security guarantee (it suffices to break the scheme once), social hardness provides a strongest-link guarantee: if just one witness to the transaction is not cooperating, the scheme cannot be broken. This makes it possible to apply the same idea in mobile environments without sufficient computation power or Internet connectivity, while taking advantage of its distributed nature <cit.>. (1) We deal with the proof of ownership issue by proposing a distributed blockchain and requiring users to store at least the blocks containing their transactions. (2) Regarding double spending attacks, we consider the location of each user who verifies the creation of a new block. We show that if the network of the users of LocalCoin is dense enough, the probability of a double spending attempt being successful is upper bounded by the inverse of the square of the number of users (Theorem <ref>). We also prove that the probability of a successful double spending attempt by a malicious user who can hire colluders to assist him in the attack is also very low (Theorem <ref>). (3) We propose an incentive scheme based on transaction and block fees that are adjusted to ad-hoc networks in order to encourage message exchange. In addition to the theoretical analysis, we validate our claims regarding the spread of transactions, the transaction rates, and double spending by extensive experiments on (i) static graphs, using tools from random geometric graph theory, (ii) city-scale simulations with mobile users, with the help of the ONE simulator <cit.>, as well as (iii) real data from the Infocom'05, Infocom'06 and Humanet datasets <cit.>. §.§ Applicability of LocalCoin We envision LocalCoin as a location-based cryptocurrency that enables small payments[The technology needed to support the development of LocalCoin is already mature, since mobile devices have enough resources and WiFi Direct is supported by every Android version since October 2013.]. Although the provided guarantees against double spending are probabilistic and leave a small chance for a double spending attack to be successful, the cost of manipulating the protocol by having a set of colluders in the proper locations outweighs the gains when the transactions are small. As we present in Section <ref>, an attacker needs to control many users, who need to be incentivised to attack the protocol and lose their potential earnings from the incentives provided by LocalCoin, in order to split the network in two parts and perform a double spending attack. For the chances of a double spending attack to be negligible, the number of participants has to be high enough to guarantee high connectivity. Given that the transaction verification speed in Bitcoin and other cryptocurrencies depends on the transaction fees that will be collected by the miners <cit.>, small and local transactions are too expensive and slow to be handled by Bitcoin. 
LocalCoin can work in parallel with Bitcoin and send to the Bitcoin network transactions that contain many senders and receivers and are, practically, merged transactions. Technically, a LocalCoin block can be a transaction in the Bitcoin network. Apart from conventional money transactions, LocalCoin can also be applied to mobile computing/networking applications such as computation offloading or downloading/streaming services. Device-to-device (D2D) ecosystems have attracted research interest, and various serverless architectures and frameworks have been proposed. However, most of them either do not consider incentives for the mobile users that contribute their resources or imply the existence of a centralised server that keeps track of the reliability and the helpfulness of each user. LocalCoin can fill this gap and complement any distributed credit-based incentive scheme for D2D ecosystems. § RELATED WORK After Nakamoto's original paper <cit.>, many research groups worked on various perspectives of the Bitcoin protocol. Tschorsch and Scheuermann, in their tutorial, present the existing contributions and results triggered by the proposal of Bitcoin <cit.>. Garay et al. discussed applications, such as the Byzantine agreement, that can be built on top of the Bitcoin core network <cit.>. Darkcoin, Zerocoin and CoinShuffle, motivated by the fact that a few transaction deanonymization attacks have been reported, focus on the security and privacy aspects of Bitcoin and propose extensions to fully anonymize transactions <cit.> <cit.> <cit.>. Also, CoinJoin employs a multi-signature scheme to enhance transaction privacy <cit.>. CommitCoin shows a commitment scheme that harnesses the existing computational power of the Bitcoin network <cit.>. Miller et al. present a formal model of anonymous and synchronous processes that communicate using one-way public broadcasts and prove that the Bitcoin protocol achieves consensus in this model in almost any case <cit.>. Also, Bruce J.D. proposed a cryptocurrency that employs a mini-blockchain with a finite number of blocks in order to reduce the storage requirements and improve efficiency <cit.>. The authors of <cit.> analyse the security of using Bitcoin for fast payments, where the time between the exchange of currency and goods is short. Furthermore, <cit.> investigates the restrictions on the transaction processing rate in Bitcoin as a function of both the bandwidth available to users and the network delay, both of which lower the efficiency of Bitcoin's transaction processing. The security analysis done by Bitcoin's creator assumes that block propagation delays are negligible compared to the time between the creation of two consecutive blocks. This assumption fails when the protocol is required to process transactions at high rates. Eyal et al. proposed Bitcoin-NG, the `next generation' of Bitcoin, whose design is based on scalability <cit.>. In more detail, the latency is limited only by the propagation delay of the network, and the bandwidth only by the processing capacity of the miners. Moreover, <cit.> argues that the Bitcoin protocol is not incentive-compatible and, after presenting a game-theoretic analysis, also presents an attack in which colluding miners obtain a revenue larger than their fair share. Also, <cit.> introduces a new defence against the 51% attack via (i) presenting a block header, (ii) introducing some extra bytes, and (iii) utilising the time-stamp more effectively in the hash generation. 
According to <cit.>, Bitcoin only provides eventual consistency. They propose PeerCensus, a new system, built on the Bitcoin blockchain, which enables strong consistency and acts as a certification authority, manages peer identities in a peer-to-peer network, and ultimately enhances Bitcoin and similar systems with strong consistency. § PROPOSED APPROACH As discussed earlier, cryptocurrencies have to address three main challenges. We first explain how Bitcoin addresses these challenges and then how LocalCoin encounters them. §.§ Bitcoin Proof of ownership: Bitcoin's main achievement is its ability to reach a consensus about a valid transaction history in a totally decentralised fashion. Bitcoin deals with the proof of ownership problem by using the concept of a blockchain based on a Merkle tree data structure. The blockchain consists of a sequence of blocks connected in a hash chain, where every block imprints a set of transactions that have been collected from the network. Every miner is aware of the creation of a new block and consequently is able to validate the proof of ownership of claimed bitcoins. Users can employ their bitcoins by using a set of verified transactions. In order for one transaction to be counted as verified, it has to belong to a block which is at least six blocks away from the currently mined block in the blockchain. Double spending avoidance: Bitcoin overcomes double spending by using a proof-of-work mechanism that imposes a delay on the verification of the transaction. In order to overcome this mechanism, one has to solve a hard problem that takes approximately 10 minutes for a brute-force algorithm to solve. There are three main ways to attempt double spending in the Bitcoin protocol: (i) a race attack, (ii) a Finney attack and (iii) a 51% attack. Waiting for some new blocks to be created on top of the current one can easily prevent the first two attacks: one block in the case of a race attack and six in the case of a Finney attack. A 51% attack, however, can collapse the whole Bitcoin network, but it is extremely costly. Also, Eyal and Sirer proved that proof-of-work blockchains are vulnerable to selfish mining by attackers that control more than 1/4 of the network's mining power <cit.>. Incentives: Each miner gets a reward of 25 bitcoins for mining a block <cit.>. However, this reward halves every 4 years. Another concern is that, as more and more users join as miners, the probability of mining a successful block decreases. To partially overcome this issue, miners create mining pools and share the earnings whenever one of them solves a cryptographic puzzle <cit.>. §.§ LocalCoin In this work, we propose a new Bitcoin-like cryptocurrency protocol, namely LocalCoin, for mobile ad-hoc networks in urban areas with high device density. Proof of ownership: LocalCoin uses a lightweight storage architecture by extending the concept of the blockchain in a distributed fashion, where each user can store as many blocks as she wants. The proposed distributed blockchain has a redundancy factor between the users. LocalCoin, similarly to Bitcoin, stores transactions in blocks. All the transactions in the same block are collectively verified. The size of each block is denoted by BS. 
In order for one block to be created, a minimum number of users, denoted by 𝑚𝑉𝑢, is needed to verify each transaction (i.e., at least BS·𝑚𝑉𝑢 users are informed about the transactions of one block). The relationship of these variables with the total number of users affects the time needed to verify one block and prove the ownership of all the users that own these transactions. Double spending avoidance: LocalCoin nullifies Bitcoin's computation overhead by incorporating a novel protocol designed for the ad-hoc environment. Bitcoin's proof of work is based on the fact that cheating is improbable because a malicious user has to solve hard problems at a faster rate than all the remaining users together. In LocalCoin, cheating is made very difficult because a malicious user has to misinform the majority of a set of trusted users. Every user in the LocalCoin protocol selects the users she trusts. LocalCoin avoids double spending in two ways. (i) The receiver of one transaction will accept the transaction if and only if she receives the transaction signed by at least a minimum number of trusted users of her trusted network, denoted by 𝑚𝑇𝑟. This constraint imposes a useful delay that spreads the transaction message to more users and increases the probability that one trusted user detects the same input in another transaction. It is worth mentioning that any initiated transaction is signed by the sender, and we assume that it is impossible for a malicious user to fake a transaction by pretending to be another user. (ii) During the block creation process, every participant checks for double spending attempts. To avoid fake block creation attempts by a set of collaborating malicious users, LocalCoin enforces the average distance between the users that verify the creation of a new block to be more than 𝑎𝑉𝑑. This last constraint forces the block creation messages to be scattered to as many users as possible. Incentives: We extend the transaction fee schema in order to motivate mobile users to participate. We propose transaction fees to motivate users to forward messages and block fees to motivate them to store as many blocks of the distributed blockchain as possible. Transaction fees are important because mobile users compete for them and therefore broadcast any received transaction. Every transaction includes an amount of localcoins that is collected by the mobile user who first informs the receiver about the transaction. The probability of a mobile user earning the transaction fees does not depend on the technical characteristics of her mobile device, since D2D communication protocols perform similarly on different devices. Block fees are important because users store the created blocks in order to be able to verify the creation of new ones. Whenever a block is created, the mobile users that verified each of its transactions, because they were aware of it, share the localcoins that were included in these transactions as block fees. LocalCoin is a lightweight protocol because (i) any user only has to forward messages, (ii) a small subset of the total users checks the validity of a transaction message and (iii) users have to store only the blocks that include their own transactions. The feasibility of LocalCoin is proved using concepts from random geometric graph (RGG) theory, and its performance is depicted with the help of static graphs (Section <ref>), real traces from mobile users (Section <ref>) and simulations with mobile users at city scale (Section <ref>). 
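The two block-creation conditions just described (at least 𝑚𝑉𝑢 verifications per transaction, and verifier spread above 𝑎𝑉𝑑) amount to a concrete acceptance predicate. A sketch follows (ours; the data layout and names are illustrative, not part of the protocol specification):

from itertools import combinations
from math import dist

def block_is_valid(verifications, m_vu, a_vd):
    # verifications: tx_id -> list of (user_id, (x, y)) verification signatures
    for tx_id, sigs in verifications.items():
        if len(sigs) < m_vu:
            return False              # not enough verifiers for this transaction
        points = [loc for _, loc in sigs]
        pairs = list(combinations(points, 2))
        avg = sum(dist(p, q) for p, q in pairs) / len(pairs)
        if avg <= a_vd:
            return False              # verifiers too clustered: possible collusion
    return True

A block creation message is broadcast only when this predicate holds for all BS transactions of the candidate block.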
In order to explain LocalCoin, we assume that it is deployed as a service in an area[The terms "service" and "protocol" both appear in this paper; the former is an implementation of the latter.]. Examples: Figures <ref> and <ref> depict the broadcasting of a transaction and a block creation. Alice broadcasts t_Alice→Bob to her neighbors, who forward it because they hope to get the transaction fees. Their neighbors also forward the transaction for the same reason, and Eric is then the first who informs Bob. If we assume for this example that 𝑚𝑇𝑟 = 2, then after the reception of t_Alice→Bob with the signatures of Chris and David, Bob will broadcast his ack message and announce Eric as the receiver of the transaction fees. This pair of messages will be stored by at least Alice, Bob and Eric, and potentially more users that participated in the forwarding, and will be used in the block creation process. After collecting BS transaction pairs, David broadcasts a block creation message; Alice, Chris and Eric verify the block and Bob creates it, since the average distance between them is more than 𝑎𝑉𝑑 and 𝑚𝑉𝑢 = 5. § THE SETTINGS We assume a set of mobile users 𝒰 who are registered to the LocalCoin service. Every user is self-interested and can be malicious. Each user i ∈ 𝒰 can utilise the service if she is inside the geographical area, d_i ∈ 𝒟, and can change d_i only by moving to another location and not by manipulating it. We discuss this assumption further in Section <ref>. Two users can communicate only if their distance is within a threshold, so if a malicious user tries to manipulate his location, he will be detected. Any user i ∈ 𝒰 is able to exchange localcoins with another user j ∈ 𝒰 by creating one transaction t_i→j. User i needs to broadcast the transaction message that determines its characteristics. Any user j has a set of trusted users 𝑇𝑁_j, and this selection is based on social interaction between users as well as on other device-to-device interactions <cit.>. A realistic requirement of our protocol can be to force users to select their trusted peers by pairing via NFC. The selection of 𝑇𝑁_j depends on j, and its members are responsible for guaranteeing that any received transaction with j as a destination is examined before being broadcasted. The forwarding procedure is explained in detail in Section <ref>. User i owns some localcoins, and in order to prove this ownership she remembers all the transactions in which she was the receiver. The transactions are stored in blocks, and each block contains more than one transaction. Each block is based on a previous block, forming a blockchain. We call this chain a distributed blockchain because each block is duplicated to more than one, but not to every, user. The mobile ad-hoc network nature of our service allows any other user k in the area to detect the transaction message, collect the information and contribute to this transaction. Any collected third-party transaction can be used in the future by user k to earn money in terms of transaction fees and block fees. We assume that any user k is self-interested and participates in the system in order to earn localcoins and use them later. Users that are not willing to sacrifice part of their resources in order to participate and earn localcoins are not considered, since they only co-exist with the ones that participate. 
The more active mobile users there are, the better for the protocol; for that reason we design two types of incentives and also allow mobile users to stop participating, if they want to save their resources, and rejoin later. We also consider malicious users that want to attack the protocol and double spend their localcoins. The robustness of LocalCoin against the considered attacks is presented in Section <ref>. Each transaction t_i→j is described by a set of inputs and outputs, as presented in Table <ref>. h(t_*→i)(·) is the hash of a block that contains a transaction from anyone to user i. The outputs are: the transferred amount to user j, o_j; the transaction fees 𝑡𝑟𝑓_i→j; the block fees 𝑏𝑓_i→j; any possible change o_i; and the amount of money user i owns, b_i. Bitcoin employs the SHA-256 hashing algorithm; by adopting it in LocalCoin, the size of each transaction will be 32·#input_transactions + 160 bytes. A transaction with fewer than 10 transactions as input requires less than a kilobyte of storage. Transactions are verified in blocks via a mechanism that is presented in Section <ref>. Observe that a transaction t_i→j additionally (i) returns change, if any, to i, (ii) transfers transaction fees to the appropriate users, and (iii) pays block fees if needed. From now on, whenever we use a past transaction, in which user i transfers some money, as an input to a new one, we assume that o_i can be of any of the four previous types. Each user i keeps a transaction database 𝒯_i, which contains a subset of the distributed blockchain and a set of pending/unverified transactions. This database contains all the blocks user i needs to verify her own localcoins, ℬ_i ⊆ 𝒯_i, as well as other blocks in which she was present. At any time, there are two types of transactions in the network: the verified ones, which can be used as an input to a new transaction and are stored in the blockchain, and the unverified ones. Unverified transactions are verified in batches by a distributed consensus protocol and added to the distributed blockchain. § PROTOCOL We present the basic functionalities of the LocalCoin protocol, which are categorised into three main categories: transaction messages (Section <ref>), block creation messages (Section <ref>), and block management messages (Section <ref>). §.§ Transaction Messages send(i,j,t_i→j): User i broadcasts a transaction, as described in Table <ref>, in order to transfer a number of localcoins to user j. The send command broadcasts t_i→j to all the nearby users (neighbors). Any user l operates based on the functionality of the receive(t_i→j) procedure, as described in Algorithm <ref>. Whenever she receives a new transaction, she checks the sender and receiver, and if she is not familiar with either of them she forwards the message, hoping to collect the transaction fees. If she belongs to the trusted users of the receiver of the transaction, she examines the input transactions and signs the message if she is able to validate all of them. If she receives the message from a trusted user, she updates her transaction database according to the signed message. If she is the receiver of the message, she processes it as explained in the procedure process(t_i→j,k). If she has received the same transaction before, she ignores the message unless it is now signed by another trusted user of j. After receiving the transaction, j has to wait for at least 𝑚𝑇𝑟 of her 𝑇𝑁_j trusted users to sign and forward the transaction to her. 
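The receive logic just described (Algorithm <ref>) can be summarized in the following sketch; the Python rendering, field names and helper methods are ours, not part of the paper's pseudocode.

def receive(self, tx, forwarder):
    # Handle an incoming transaction message t_{i -> j} at user l (sketch)
    if tx.receiver == self.me:
        self.process(tx, forwarder)              # count trusted signatures, maybe ack
    elif self.me in tx.receiver_trusted_set:     # l is in TN_j: vet inputs before signing
        if all(self.can_validate(t) for t in tx.inputs):
            tx.signatures.add(self.me)
            self.broadcast(tx)
    elif forwarder in self.trusted:              # message arrived via a trusted user
        self.database.update(tx)
        self.broadcast(tx)
    elif tx.id in self.seen:                     # seen before: only new signatures matter
        if tx.signatures - self.seen[tx.id]:
            self.seen[tx.id] |= tx.signatures
            self.broadcast(tx)
    else:                                        # strangers: forward, hoping for the fee
        self.seen[tx.id] = set(tx.signatures)
        self.broadcast(tx)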
The first user who forwards this message to j, regardless of being in her trusted users, will receive the amount 𝑡𝑟𝑓_i → j if the transaction is accepted by j and verified by the network. By accepting the transaction only if a subset of the trusted network signs and forwards the message to the receiver, the protocol addresses Sybil attacks. User i keeps broadcasting the transaction to any users she meets until she receives the ack message from user j.

ack(i,j,t_i → j): If j receives the message from 𝑚𝑇𝑟 users of her trusted network 𝑇𝑁_j, then she broadcasts an acknowledgement message. This message also determines the address of the user that will receive the transaction fees. The ack message is forwarded in a similar way to the send command. There is no need for a third round, because the only action user j can take is to assign 𝑡𝑟𝑓_i → j to someone else. Everyone who receives the acknowledgement updates her knowledge about which accounts are participating in the transaction.

§.§ Block Creation Messages

build(BLK(t_i → j,t_i^'→ j^',…)): Whenever a user l collects BS transactions (both send and ack) that are not yet verified, she tries to build a new block. For that she needs to agree with 𝑚𝑉𝑢-1 other users about the validity of the BS transactions in order to reach a consensus <cit.>. Her signed message also contains her location, d_l, and the current value of the average distance vector d. The distance vector has BS entries, and each entry holds the average distance between the users who have verified the corresponding transaction.

verify(BLK^'(t_i → j,t_i^'→ j^',…)): The first 𝑚𝑉𝑢 users who verify all the transactions in the block creation message and have an average distance between each other bigger than 𝑎𝑉𝑑 will share the block fees. Every user k who receives a verify message checks her database for unverified transactions, and if she has any of those included in the message, she signs them and forwards the message. Before forwarding the message, user k updates the distance entries that she has signed. If she detects a double spending attempt, she deletes her entry if it has a later time-stamp, or she signs her entry and adds it into the message if it has an earlier time-stamp. Upon double spending detection, user k sets the entry for the corresponding transaction to 0 and attaches and signs the detected conflicting pair with a newer time-stamp. Whenever a user receives a verify message in which the claimed location of a signer is not in her coverage radius, (i) she verifies any transaction she can verify, (ii) she notes that the claimed location is not in her coverage radius and marks the location entry as false, and then (iii) she broadcasts the message.

create(BLK^”(t_i → j,t_i^”→ j^”,…)): Users who receive a message with transactions that are verified 𝑚𝑉𝑢 times with an average distance bigger than 𝑎𝑉𝑑, apart from forwarding the message, also broadcast a create message that defines the users who will share the block fees. Before broadcasting the create message, they examine whether any of the 𝑚𝑉𝑢 entries has been marked as false by another user; such entries are not considered in the block creation.

§.§ Block Management Messages

delete(i,t_* → i): We propose a garbage collection functionality that deletes every transaction that is no longer useful. The delete command is triggered after the create command in order to delete all the input transactions of the freshly verified ones, since they cannot be used any more. After deleting all the transactions of one block, the whole block is deleted.
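The acceptance rule for block creation can be summarised in a few lines. The sketch below, with illustrative parameter values of our own choosing, checks the two conditions a block must satisfy before a create message is justified: 𝑚𝑉𝑢 verifiers on every transaction and an average pairwise distance between the verifiers above 𝑎𝑉𝑑.

```python
from itertools import combinations
from math import hypot

M_VU = 5      # users needed to verify a block (mVu); example value
A_VD = 0.3    # minimum average distance between verifiers (aVd), normalised

def average_distance(positions):
    """Average Euclidean distance over all pairs of verifier locations."""
    pairs = list(combinations(positions, 2))
    return sum(hypot(a[0] - b[0], a[1] - b[1]) for a, b in pairs) / len(pairs)

def block_creation_allowed(verifier_positions, all_transactions_signed):
    """True when every transaction in the block carries mVu valid signatures
    and the verifiers are, on average, more than aVd apart."""
    return (all_transactions_signed
            and len(verifier_positions) >= M_VU
            and average_distance(verifier_positions) > A_VD)

verifiers = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (0.5, 0.5)]
print(block_creation_allowed(verifiers, True))      # True: spread-out verifiers
print(block_creation_allowed(verifiers[:3], True))  # False: fewer than mVu users
```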
The motivation behind this process is to keep the size of the distributed blockchain as small as possible, because mobile devices cannot dedicate a significant amount of storage to it[Bitcoin's blockchain is increasing at a rate of more than 200 MB per day <cit.>.].

sync(t): Any user can call the sync function by giving only the time-stamp of her last update. By doing so, any nearby trusted user will send the newly verified transactions as well as the hashes of the ones that have been deleted.

§.§ The Blockchain Evolution

If we have Λ transaction pairs per time unit, then we will have, in the long term, Λ/BS blocks per time unit. If one transaction t_i → j uses on average L_i → j transactions as an input, then one block deletes ∑_k = 1^BS L_i → j^k links to past blocks. For a block to become orphan, all of its transactions need to be unpointed. Each transaction is pointed to by 4 links, so a block will be deleted approximately when the 4·BS transactions that point to it are deleted. However, we cannot predict after how many block creations this will happen. Whenever a new block is created, the garbage collection process updates the past blocks on which the inputted transactions were placed. For any transaction used in the creation of the new block, we delete the pointers to the parents of that transaction. Figure <ref> is a pictorial view in which one transaction of the nine in the rightmost block is used as an input to a new transaction; all the links to the parent blocks of this transaction will then be deleted. If a block has no child pointers pointing at it, it is deleted, and the block after it then points to the one before it.

§.§ Parameters of LocalCoin

The performance of LocalCoin depends on multiple parameters. User availability and position determine the connectivity between the users and the ability to verify new transactions, but these parameters cannot be regulated by LocalCoin. On the other hand, the number of trusted users needed for one transaction to be accepted (𝑚𝑇𝑟), the number of transactions in each block (BS), the number of users needed to verify the creation of one block (𝑚𝑉𝑢) and the average distance between the users who verify the block (𝑎𝑉𝑑) affect the performance of LocalCoin and characterise the trade-off between the time needed to verify a new transaction, the security against double spending and the increase in the stored data. Since LocalCoin can be categorised as a location-based service, all four parameters can be adjusted based on the required transaction speed, the maximum risk of double spending and the rate of creation of new blocks.

§ ANALYSIS

In this section we analyse and validate the performance of LocalCoin. Section <ref> introduces the connectivity conditions under which the protocol is applicable, and Section <ref> presents the circumstances under which a malicious user is able to successfully double spend some localcoins. Given that users' mobility increases the capacity of multihop wireless networks <cit.>, we examine the worst-case performance of LocalCoin by considering non-moving users.

§.§ Reachability

Each user i is located at d_i ∈𝒟, and any other user j located in the coverage area of i, D_i (i.e., j ∈ D_i), is able to receive any message i broadcasts. Practically, the coverage area of each user has a radius comparable to the coverage radius of WiFi Direct.
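The garbage-collection bookkeeping amounts to reference counting at the block level. The sketch below illustrates the idea under simplifying assumptions of our own (a doubly linked chain and a single counter of incoming links per block); it is not the full delete procedure of the protocol.

```python
class Block:
    """Toy block: `incoming` counts the links from not-yet-spent transactions
    that use this block as a proof of ownership."""
    def __init__(self, bid, prev=None):
        self.bid, self.prev, self.next = bid, prev, None
        self.incoming = 0
        if prev is not None:
            prev.next = self

def spend_inputs(parent_blocks):
    """Triggered after a create message: every input transaction's link to its
    parent block is removed; a block left with no incoming links becomes
    orphan and is spliced out of the chain."""
    for blk in parent_blocks:
        blk.incoming -= 1
        if blk.incoming == 0:
            if blk.next is not None:
                blk.next.prev = blk.prev   # successor now points to predecessor
            if blk.prev is not None:
                blk.prev.next = blk.next
            print(f"block {blk.bid} deleted")

b0 = Block(0); b1 = Block(1, b0); b2 = Block(2, b1)
b1.incoming = 1            # a single unspent transaction still points at b1
spend_inputs([b1])         # spending it orphans and deletes b1
print(b2.prev.bid)         # 0: the chain now links b2 directly to b0
```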
We assume that every user has the same normalised coverage radius and we denote it as r_cov = Wifi_direct_coverage/Area_of_the_supported_service. Given the location of each user and the normalised coverage radius, we produce a 2-dimensional random geometric graph (RGG) G_𝒟(𝒰,r_cov). The number of connected components of G_𝒟(𝒰,r_cov) depends on |𝒰| and r_cov and has a subcritical and a supercritical phase. In the subcritical phase the number of connected components is large, while in the supercritical phase it converges to one. A well-known result for d-dimensional random geometric graphs is the following <cit.>:

For |𝒰| r_cov^d ≥ 2log |𝒰|, the graph G_𝒟^d(|𝒰|,r_cov) is (1) connected with probability at least 1-1/|𝒰|^2, (2) regular, and (3) such that the degree of every user, with high probability, is (π^d/2/Γ(1+d/2)) |𝒰| r_cov^d (1+o(1)), where Γ(·) is the Gamma function; given that we consider a 2-dimensional graph (d=2), Γ(2) = 1. We can rewrite the lemma as: if |𝒰|/log |𝒰| ≥ 2/r_cov^2, then G(|𝒰|,r_cov) is regular with degree d_𝒟 = π |𝒰| r_cov^2(1+o(1)).

In a β-expander d_𝒟-regular graph, out(S) ≥ β |S| holds for every set S ⊂𝒰 with |S| ≤ |𝒰|/2, where out(S) = | {{u,v} | {u,v} ∈ G_𝒟(𝒰,r_cov), u ∈ S, v ∉ S}|.

The authors of <cit.> state that if the nodes of the random geometric graph are produced by a Poisson point process in two dimensions, its density should lie in the range [0.696, 3.372]; simulation results converge to 1.44. If, for example, there are 1000 subscribed users and the coverage radius of WiFi Direct is 200 meters, the users will be able to form a connected graph with probability 0.999999 if the area of the supported service is less than π√((2/3) · 10^11) m^2 ≈ 0.8 km^2.

Suppose user i wants to give some localcoins to user j. User i has d_𝒟 neighbours, and by the properties of expander graphs, there are (1 + d_𝒟)(1 + β)^l users at l hops from i. We continue expanding from i until the reachable set of users V_i has more than |𝒰|/2 users. User j may not be among them. However, if we expand from user j in the same way, we eventually obtain a set V_j of more than |𝒰|/2 users reachable from j. The sets V_i and V_j both contain more than |𝒰|/2 users, so they must overlap, and the overlap contains users on a path from i to j. In this way, we have shown that:

For any pair of users i and j in the same connected component, there is a path of length at most 2(l + 1) from i to j, where l = log_(1+β)(|𝒰|/2d_𝒟). The larger the value of β, the shorter the path between any two users.

In the LocalCoin protocol, j has to be connected with at least 𝑚𝑇𝑟 users of 𝑇𝑁_j in order to accept the transaction. Theorem <ref> ensures that if user i wants to transfer some localcoins to user j, the only requirement is that user j be connected with at least 𝑚𝑇𝑟 users of her trusted network. The probability of the transaction being successful depends on two main factors. The most important factor is that both i and j must belong to the same component c; this happens with probability p(i ∈ c)· p(j ∈ c) = |% c|· |% c| = |% c|^2, where |% c| is the fraction of the users that belong to component c. The second factor is having at least 𝑚𝑇𝑟 of j's trusted users 𝑇𝑁_j in the component. This probability equals:

∑_l=𝑚𝑇𝑟^|𝑇𝑁_j| C(|𝑇𝑁_j|, l) (p·|% c|)^l (1-p· |% c|)^(|𝑇𝑁_j| - l),

where C(n, l) denotes the binomial coefficient and p is the probability that a user who belongs to the trusted network of j is able to sign i's message.
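The two factors can be combined numerically. The sketch below evaluates the binomial expression above; the parameter values are illustrative choices of ours, not numbers from the evaluation.

```python
from math import comb

def success_probability(tn_size, m_tr, p_sign, comp_fraction):
    """P[at least mTr of the |TN_j| trusted users are in component c and can
    sign]: the binomial tail sum given above."""
    q = p_sign * comp_fraction
    return sum(comb(tn_size, l) * q**l * (1 - q)**(tn_size - l)
               for l in range(m_tr, tn_size + 1))

# Illustrative numbers: 10 trusted users, mTr = 2, 90% of users in the
# major component, and an 80% chance that a trusted user can sign.
p_components = 0.9 ** 2                      # p(i in c) * p(j in c) = |%c|^2
p_trusted = success_probability(10, 2, 0.8, 0.9)
print(p_components * p_trusted)              # overall transaction success
```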
It is worth mentioning that in the case of moving users, the probability of a successful transaction increases, because mobile users can forward the received transactions whenever they encounter new neighbours.

§.§ Robustness Against Attacks

Malicious users may try to attack LocalCoin in various ways, and in case of a successful attack they may be able to steal localcoins from other users or simply harm the protocol. There are two general categories of attacks on LocalCoin: the ones based on the proof-of-ownership and the ones based on double spending. In this section we focus on malicious users who want to earn localcoins via double spending, since this is the most popular attack on cryptocurrencies. Attacks where mobile users input fake transactions and try to send them to other users will not be successful, since the trusted users of the recipient will not sign them; such attacks will, however, consume the resources of the mobile users that forwarded them.

Let us assume that a malicious user m wants to double spend a localcoin. We consider two types of attacks:

Passive: The attacker initiates two transactions to two different receivers and broadcasts them to two different connected components of the network. If there is only a single connected component, the attempt will be detected; hence there must exist at least two disjoint components for m to be successful in double spending. In Section <ref> we show that the probability of such an attack decreases quadratically in |𝒰|.

Active: Another possible attack is one where m is able, with the help of some colluders (ℳ), to control an area ℛ and artificially split 𝒟 into more than one disconnected part. We discuss such attacks in Sections <ref> and <ref>.

In both types of attacks we assume that the trusted users who are asked to verify transactions are aware of every block.

§.§.§ A Passive double spending attack

In order for double spending to be successful, m has to employ enough colluders to cheat at least 2𝑚𝑉𝑢 other users. Each recipient of the fake transaction will wait until 𝑚𝑉𝑢 of her trusted nodes forward her the fake message. Depending on BS, 𝑚𝑉𝑢 and 𝑎𝑉𝑑, the probability of double spending changes. However, the broadcast nature of the wireless medium does not allow m to address only a selected subset of users. Given that 𝑚𝑇𝑟 users from both receivers are aware of m's ability to initiate this transaction, we examine how the remaining parameters affect the difficulty of double spending:

BS: The lower the number of transactions in one block, the faster each block can be created, which allows m to try to double spend the same input and create two new blocks. If the connectivity graph between the users is partitioned, double spending is possible.

𝑚𝑉𝑢: The higher the number of users needed to verify one transaction, the more colluders m needs. For the attack to succeed, Lemma 1 must not hold and the connectivity graph has to be in the subcritical phase.

𝑎𝑉𝑑: The higher the value of 𝑎𝑉𝑑, the more difficult it is for m to double spend. Each user has on average d_𝒟 neighbours, and any two users can communicate if the distance between them is less than r_cov; hence, if λ = 𝑎𝑉𝑑/r_cov, then λ· d_𝒟 users will receive the request for fake block creation. We can paraphrase Lemma 1 and state that: if |𝒰| (𝑎𝑉𝑑/λ)^2 ≥ 2log |𝒰| and λ d_𝒟 > |𝒰|/2, double spending is possible with probability at most 1/|𝒰|^2.

§.§.§ Virtual-cut attack: An active double spending attack in static graphs

Let us assume that the users' positions, {d_i}, are distributed uniformly and are static.
As proved in the previous subsection, it is difficult to double spend a localcoin in the induced random graph, as with high probability it consists of one major component. However, a malicious user may have detailed knowledge of the graph topology, and he may artificially create a virtual cut by controlling users that transmit messages selectively to one part of the graph only. If a malicious user is able to double spend a localcoin by such a trick, we say he is successful in a virtual-cut attack. To complete a transaction in both components, each must contain enough users to verify the transaction, with an average pairwise distance of at least 𝑎𝑉𝑑. This provides a lower bound on the number of users that must be controlled to create a suitable cut.

Suppose a malicious user manages to induce an artificial cut as shown in Figure <ref>. He partitions the users into two components, A_1 with 𝒰(A_1) users and A_2 with 𝒰(A_2) users, by controlling the users in region B. Let A = A_1∖ B and let the number of users in A be |A| = α|𝒰|/2. Also, let the average distance between any pair of users within A be γ𝑎𝑉𝑑, that is,

∑_i,j ∈ A |d_i - d_j| / (|A|(|A|-1)/2) = γ𝑎𝑉𝑑.

Let the average distance of a user in region B to a user in regions A or B be ζ𝑎𝑉𝑑. The malicious user has to ensure that the average distance over the pairs of users who verify the transaction is at least 𝑎𝑉𝑑. Let the malicious user select |ℳ| users from region B for block creation together with the users of A_1. He needs to ensure:

(αζ |ℳ||𝒰|/2 + ζ|ℳ|^2/4 + γα^2|𝒰|^2/4)𝑎𝑉𝑑 / (α |ℳ||𝒰|/2 + |ℳ|^2/4 + α^2|𝒰|^2/4) > 𝑎𝑉𝑑 ⇔ 2α |ℳ||𝒰|(ζ-1) + |ℳ|^2(ζ-1) + α^2|𝒰|^2(γ-1) > 0

If the malicious user chooses to (i) increase ζ or (ii) decrease α and γ, the region B will enlarge and thus he will need to add more users into ℳ. Intuitively, to decrease |ℳ| he needs a higher ζ or smaller values of α, which in turn require controlling a bigger percentage of 𝒰 and thus increase |ℳ| again.

Example: For α=1, γ=1/2 and ζ=1.5, m has to control at least |ℳ| ≥ (√(2)-1) |𝒰|, or over 41% of the users.

Note that the above double spending attack is only one possibility; an attacker can also isolate a region ℛ by controlling all the users in a specific region. If he manages to control all the users in the shaded region, as shown in Figure <ref>, he can successfully create two virtually disconnected components in the connected graph. In order to block any message flow between the regions, r' > r+2r_cov has to hold. That is, he needs to cover an area of size at least ℛ = π(r'^2 - (r'-2r_cov)^2) = 4π r_cov(r'-r_cov).

The attacker can further optimise the attack by locating one of the two virtual components in a corner, so as to reduce the area he needs to control (Fig. <ref>). In a region A, the average distance between any two users placed with a uniform distribution is d̃r' (under extensive simulations, d̃ = 0.45). In the LocalCoin protocol, a transaction will be completed in A if the average distance between any two users is at least 𝑎𝑉𝑑, that is, d̃r' > 𝑎𝑉𝑑. The attacker can reduce the area that he needs to control by pushing A further into a corner, but he cannot reduce r' below 𝑎𝑉𝑑/d̃. Thus, to create two successful transactions, one in A and one in the remaining area 𝒟∖ A, before he gets detected, he needs to control all the users in an area of size: ℛ = (1/4)π((r')^2-(r'-2r_cov)^2) = π r_cov(r'-r_cov) > π r_cov (𝑎𝑉𝑑/d̃-r_cov).

We assume that the users are uniformly distributed on a unit square. The average distance between any two users on any disc of radius r inside this unit square is d̅r (under extensive simulations, d̅ = 0.903).
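Before moving on, the two bounds derived in this subsection can be checked numerically. The sketch below solves the quadratic condition for the smallest |ℳ| and evaluates the corner-region area used in the example that follows; the numbers reproduce the (√2 − 1)|𝒰| and 0.1085 values.

```python
from math import pi, sqrt

def min_colluders(alpha, gamma, zeta, n_users):
    """Smallest |M| satisfying
    2*alpha*M*U*(zeta-1) + M^2*(zeta-1) + alpha^2*U^2*(gamma-1) > 0,
    obtained as the positive root of the quadratic in M."""
    a = zeta - 1
    b = 2 * alpha * n_users * (zeta - 1)
    c = alpha**2 * n_users**2 * (gamma - 1)
    return (-b + sqrt(b * b - 4 * a * c)) / (2 * a)

U = 1000
print(min_colluders(1.0, 0.5, 1.5, U) / U)   # ~0.414 = sqrt(2) - 1

# Corner placement: fraction of D the attacker must control.
r_cov, aVd, d_tilde = 0.05, 1/3, 0.45
print(pi * r_cov * (aVd / d_tilde - r_cov))  # ~0.1085
```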
In the LocalCoin protocol, we desire the information about each transaction to reach at least 50% of the users at the time of accepting a transaction. Thus, π r^2 > 0.5, i.e., r > √(0.5/π) ≈ 0.4, which together with 𝑎𝑉𝑑 = d̅r translates to 𝑎𝑉𝑑 > 0.36. A higher value of 𝑎𝑉𝑑 will slow down the transaction verification rate but will increase security.

Example: For 𝑎𝑉𝑑=1/3 and r_cov=0.05, the malicious user needs to control a region which is 0.1085 of 𝒟. As all the users are uniformly distributed on 𝒟, this amounts to controlling 10.85% of 𝒰.

To be able to double spend a localcoin by a virtual cut, an attacker needs to control at least a fraction ℛ > π r_cov (𝑎𝑉𝑑/d̃-r_cov) of the users, under the assumption that all the users are uniformly distributed and static.

§.§.§ Virtual-cut attack: An active double spending attack in dynamic networks

In reality, even if the attacker colludes with |ℳ| > (|ℛ|/|𝒟|)|𝒰| other users and places them appropriately to create two virtual components in the network, the remaining 𝒰∖ℳ users may be moving dynamically. So at the time when he plants a double spending attack, for the attack to be successful, none of these |𝒰|-|ℳ| users should be located in ℛ. Thus the probability that such a double spending attack, even by controlling a large number of users, is successful is (|ℛ|/|𝒟|)^(|𝒰|-|ℳ|).

Example: For |ℛ|/|𝒟| ≈ 10% and |𝒰|-|ℳ| ≥ 100, the probability of a successful attack is at most 10^-6.

The assumptions made in the analysis of LocalCoin are discussed in Section <ref>.

§ PERFORMANCE EVALUATION OF LOCALCOIN

We conduct a set of simulations on static and dynamic graphs. The static random geometric graphs are produced with MATLAB (Section <ref>). We implement an event-driven simulator in JAVA for dynamic graphs using real mobility traces (Section <ref>), and in order to scale up the number of mobile users, we use the ONE simulator <cit.> (Section <ref>). After proving, in Section <ref>, that double spending is improbable in fully connected mobile ad-hoc networks, we investigate scenarios where the users are not fully connected in order to demonstrate the robustness of LocalCoin.

§.§ Evaluation with RGGs

In the static analysis, we focus on the characteristics of the produced RGGs and their effect on LocalCoin. The simulations on static graphs are important because in the case where users are not moving, a malicious user m has the highest chance to successfully double spend some localcoins. We distribute the users 𝒰 uniformly in a [0,1]×[0,1] area and study the effect of |𝒰| and r_cov on the number of connected components and the fraction of users in the largest connected component. Figure <ref> shows how |𝒰| affects the number of connected components and the size of the major connected component for three different values of r_cov.

[Figure: Non-uniform distribution of users.]

Figure <ref> shows how r_cov affects the aforementioned quantities for two different user placements. Given that the uniform distribution of 𝒰 is not a realistic case, we split the examined area into a 10×10 grid of 100 equally sized cells, where the number of users in each cell is determined by a Poisson distribution with a cell-specific parameter. Figure <ref> depicts the used grid; the darkness of each cell depicts its popularity. The average number of total users is still 1000. In this non-uniform case, the major component is formed for smaller values of r_cov.
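The static experiment is easy to reproduce. A minimal sketch using networkx is shown below; the parameter values are illustrative, and the exact counts depend on the random seed.

```python
import networkx as nx

def rgg_stats(n_users, r_cov, seed=0):
    """Random geometric graph on the unit square: return the number of
    connected components and the fraction of users in the largest one."""
    g = nx.random_geometric_graph(n_users, r_cov, seed=seed)
    comps = list(nx.connected_components(g))
    return len(comps), max(len(c) for c in comps) / n_users

for r in (0.03, 0.05, 0.08):
    n_comp, major = rgg_stats(1000, r)
    print(f"r_cov={r}: {n_comp} components, major holds {major:.0%} of users")
```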
In the case of static graphs, double spending is possible when the number of connected components is more than one, 𝑚𝑉𝑢 is smaller than the number of users in the components selected for the double spend, and 𝑎𝑉𝑑 is small enough to be satisfiable by the users in each component. This requires at least two large components of roughly equal size. From our simulations, for |𝒰|=1000 and r_cov=0.05, the major component contains more than 90% of the users. As only one big component is formed, the information about each transaction spreads through the network very easily. Figure <ref> presents the distribution of user distances in the examined area; it is useful for understanding how the constraint on the average distance between the users who verify the creation of a new block affects the spread of the block creation process over the whole area. In order to produce the figure, we randomly placed 1000 and 2000 users, randomly selected one of them, and measured the distance of every other user to the selected one. Figure <ref> shows that even if the imposed threshold for the creation of a block is less than 30% of the maximum possible value, more than 50% of the users will be informed about the creation process. In order to examine LocalCoin in more realistic cases, we consider mobile users in the next two subsections.

§.§ Evaluation with Mobility Traces

We implement an event-driven simulator in Java in order to evaluate the performance of LocalCoin. We use three datasets, Infocom'05 and Infocom'06 from the Haggle project <cit.> and Humanet <cit.>, which contain user mobility traces in different environments. The duration of the simulation is one day: we select the first day of the first two datasets, while Humanet is one day long. We consider all the mobile users, which are 41, 78 and 56, respectively. We introduce the datasets using the concepts of transaction rate and transaction spread: we define the transaction rate as the fraction of completed transactions, and the transaction spread as the average fraction of users that have stored the transaction. Figures <ref>, <ref> and <ref> show the average time needed for one transaction to reach its destination and the transaction rate for different numbers of transactions per user. The receiver and the occurrence time of each transaction are drawn uniformly over the users and over the day. Figure <ref> illustrates the transaction spread in the case where each user initiates one transaction at the beginning of the day. Note that all datasets are sparse, with a small number of users; hence the spreading is slow, resulting in small values of transaction rate and transaction spread. This set of figures is needed to explain and understand the results produced by the simulation of a malicious user on these three traces.

Next, we examine the chances a malicious user (m) has to deliver multiple transactions with the same input (fake transactions) to more than one user. m tries to double spend by making at least two of the receivers of his fake transactions accept them. However, double spending will not succeed before the creation of two blocks that contain these fake transactions, which is not possible if 𝑎𝑉𝑑 is large enough. To simulate a double spending attack, m creates 2, 3, 5 or 10 fake transactions. Figures <ref>, <ref> and <ref> show the average transaction spread of the fake transactions for a variable number of colluders (ℳ).
Multiple copies of the same transaction decrease the average spread of the fake transaction, because the normal users (𝒰∖ℳ) receive at least two fake transactions with higher probability. Furthermore, most of these duplicates are stored by the colluders and not by the normal users. Figure <ref> shows the spread of the duplicates in 𝒰 and in 𝒰∖ℳ for the case of 5 fake transactions and |ℳ| = 0.5 |𝒰|. On average, less than 2% of the normal users receive the fake transactions. Only the first copy spreads like a normal transaction, while the spread of the others decreases dramatically, since normal users are already familiar with the first ones. Figure <ref> shows the probability that at least one of the receivers of the fake transactions accepts one. For analysis purposes, we make the attack easier for m by using 𝑚𝑇𝑟=0. Even in this setting, the chance of m successfully making a receiver accept a fake transaction is below 1%. Note that a successful delivery does not mean that the transaction is accepted in the blockchain; it remains pending.

§.§ Evaluation with the ONE simulator

We implement LocalCoin on the ONE simulator to scrutinise its behaviour on a larger scale and to examine how the users' speed and coverage radius affect the chances of a malicious user to double spend. We consider a 4 km^2 area in the center of a metropolitan city and |𝒰| = 1000 mobile users. As a mobility pattern we use shortest-path map-based movement. Figures <ref> and <ref> demonstrate the spread of a normal transaction. In Figure <ref>, the coverage radius of every broadcast is 100 meters, while in Figure <ref> it is 50 meters[WiFi Direct supports a coverage radius of 200 meters, but we expect it to drop radically in crowded urban areas.]. Figure <ref> shows that even if a malicious user has |ℳ| = 100 colluders to forward his fake transaction, he does not have enough time to create a second fake transaction, because in less than a minute almost all the users in the area will be informed of his transaction. The walking speed of the users has little influence in this case. However, if the coverage radius is 50 meters (Figure <ref>), the walking speed matters: in the case of slow movement (0.1-0.5 km/h) it takes around 3 minutes for one broadcasted message to reach more than half of the users, while in the case of normal walking (0.5-1.5 km/h) it takes less than 1 minute.

In order to examine more thoroughly the case where cheating is still possible, we consider normally walking users (0.5-1.5 km/h) and a coverage radius of 50 meters. A malicious user m creates two fake transactions and randomly selects their receivers. We examine three settings that differ in the time between the creation of the two fake transactions. Figure <ref> depicts how the fraction of colluders |ℳ|/|𝒰| affects m's chances to deliver the two fake transactions. In the first setting, m is not able to deliver two fake transactions, regardless of |ℳ|, because the time difference in the creation of the two transactions is only 10 seconds and m is in contact with the same users. In the second setting, the time difference is 1 minute, and m is able to successfully deliver two fake transactions, but this happens only when he selects two receivers who have not yet met or have not met the same other users. In the third setting, m initiates the two transactions with a 2-minute difference: m delays the creation of the second fake transaction as much as possible in order to move as far as possible from the users that hold the first transaction.
Then, he is able to find a remote user that does not have the first transaction ∼60% of the time, when half of the users in the network are assisting him. But if he waits a bit longer (e.g., 30 more seconds), the first transaction will have spread through the whole network, as shown in Figure <ref> (after 150 seconds, more than 99% of the users have received the first transaction). It is worth noting that the examined scenarios present the cases where a mobile user can be cheated by a malicious one in terms of accepting a transaction. However, this does not mean that the malicious user managed to double spend some localcoins; to succeed in that, two users have to initiate block creations and successfully verify both of these blocks. The need for high user density comes from this requirement: the block creation procedure must both spread over a big part of the network and detect the double spending attempts.

In summary, all three evaluation subsections focus on characterising the high-connectivity regime in which we argue that LocalCoin is applicable. This applicability depends on whether a normal transaction is properly spread to the whole network while a fake one is neither spread nor verified. We examined both the case where the users are not moving and the case where they are moving, either based on available traces or by simulating their movement with the ONE simulator. In more detail, when users are static, we show that for a reasonable scenario, i.e., |𝒰|=1000 and r_cov=0.05, the area consists of one major component, implying the feasibility of LocalCoin and the impossibility of double spending attacks. When the users are walking, we show that with an average movement speed of 1 km/h they can complete their transactions in seconds, while malicious users, even if they manage to find the proper time to initiate a fake transaction, will be detected in the block creation phase a few minutes later.

§ DISCUSSION

We analysed LocalCoin by considering static graphs that are formed as random geometric graphs, whose nodes are the users' locations and in which the coverage radius of the used wireless technology, compared with the deployment area, dictates whether two nodes are close enough to be connected. Static networks provide a lower bound on the performance of the protocol. In the case of mobile users, LocalCoin performs better, because the transactions are more easily flooded to users and any malicious user has to consider not only the locations of the normal users but also their mobility, which is a very difficult task. An analysis of mobile users with different mobility patterns is part of our future work. It is notable that a conservative selection of the LocalCoin parameters (i.e., high values for 𝑚𝑇𝑟, BS, 𝑚𝑉𝑢 and 𝑎𝑉𝑑) can guarantee that double spending is not possible; however, the performance of the protocol, in terms of time, will be hindered.

§.§ Discussion about the assumptions of the analysis

The analysis presented in Section <ref> is based on three major assumptions: (A1) The mobile users are uniformly distributed in the service area. (A2) The mobile users cannot manipulate their location. (A3) The colluders of a malicious user are preselected, and a malicious user is not able to bribe a normal user during the double spending attack. Below we discuss the reasons that drove us to these three assumptions:

(A1) The motivation behind distributing the mobile users uniformly in the examined area is that it makes the functionality of LocalCoin more challenging.
Although it would be easier for a malicious user to identify critical locations to occupy if the users were not uniformly distributed, it would also be easier, during the implementation of LocalCoin and the selection of its tuning parameters, to make such locations insufficient for creating a virtual cut that could lead to a successful double spending attack. Figure <ref> shows that a major connected component can be formed more easily in the case where the users are not uniformly distributed.

(A2) Although the users' actual location is a core component of LocalCoin and a malicious user may consider asking a set of colluders to lie about their location and help him double spend, this is against his interest, because the non-colluding neighbours of the colluders will immediately detect the attack and discard the block. Normal users know their location and the average coverage radius, so LocalCoin can handle such attacks by forcing each user to check the locations that her neighbours report during the block creation process. A mechanism parallel to block verification can detect location manipulation attempts and intercept the double spending attacks. Given that the actual locations are only needed for the calculation of the average distance between the verifiers of a block, the block creation and verification messages can be enlarged to contain the ids of the neighbours each verifier has when signing the block. Then the rule that dictates when a block can be created (i.e., when the average distance between the verifiers is higher than 𝑎𝑉𝑑) can be complemented by a second one that requires the number of distinct users that are neighbours of the verifiers to be higher than a threshold. Although the incentives for each user to participate in such a mechanism are not presented and analysed in detail, a simple defence mechanism that detects and punishes all the participants every time a location manipulation is detected can be enough to create the necessary bias against malicious users.

(A3) Our assumption is based on the fact that we envision LocalCoin to work on off-the-shelf mobile devices whose owners use them for their own needs while LocalCoin runs in the background. Mobile users are not aware of the message exchange, and the only way for a malicious user to double spend is to implement another version of LocalCoin and install it on his colluders' devices.

§.§ Getting assistance from a fixed network

It is worth mentioning that LocalCoin can benefit from fixed networks at small scale (e.g., a university campus) to increase the speed of message forwarding and decrease the importance of user density. The access points of a fixed network can be treated as normal users who do not have storage capabilities (i.e., 𝒯_AP = ∅ for every access point) and do not compete for the transaction fees, but broadcast all incoming messages to the other nodes of the fixed network, which broadcast the messages to the associated mobile devices. Moreover, the access points can be used in the block creation process as a guarantee for the average distance between the mobile users. In more detail, each user who verifies the creation of a new block can attach a recently signed message that she received from the closest AP in order to provide a robust estimation of her location. However, these extra functionalities provided by a fixed network can only improve the performance of LocalCoin on the practical level. D2D architectures, with 5G as their main representative, are becoming more and more popular.
WiFi Direct technology is maturing and has been adopted by many consumer products, while the design of LTE Direct is moving in the same direction. Motivated by advances in this direction, and considering that any existing infrastructure, like an institutional network on a university campus, can only improve the coverage of LocalCoin, we argue that protocols like LocalCoin are applicable and implementable.

§.§ Power cost of LocalCoin

LocalCoin, in contrast to Bitcoin, employs a defence mechanism against double spending that does not require powerful CPUs but requires a number of messages to be exchanged between the mobile users. Based on measurements of the power consumption of the WiFi interface, we estimate that a transaction with a size of 1 kilobyte consumes 0.00048/0.0004 joules when transmitted/received <cit.>. Although there exist studies showing that the power needs of WiFi are higher than those of the CPU <cit.>, LocalCoin does not transfer messages 100% of the time. A detailed analysis of the power consumption of LocalCoin is part of our future work.

§ FUTURE WORK

In this work we analysed in detail the probability of a successful double spending attack. Our next step is to analyse the proposed distributed blockchain and provide bounds on the required number of verifiers each user should have, the average number of blocks each user should store in order for the protocol to be functional, and the relationship between the number of stored blocks and the earnings from fees. Our future extensions of LocalCoin will also focus on building defence mechanisms against other types of attacks, where mobile users form coalitions and behave abnormally, without their incentives being aligned with the incentives of the normal mobile users.

§ CONCLUSION

In conclusion, LocalCoin is a decentralised cryptocurrency built on principles similar to Bitcoin's, but it avoids any need for Internet access and heavy computation. We used a distributed blockchain structure to store the produced transactions in the form of blocks, and we proposed a set of four parameters to expose the trade-offs in our proposal that arise from its mobile ad-hoc nature. These four parameters can be adjusted to provide the required security guarantees, in terms of double spending, and at the same time they determine the maximum transaction rate. LocalCoin is applicable in dense areas of mobile users, and if the users are fully connected, double spending is impossible.

Dimitris Chatzopoulos received his Diploma and his MSc in Computer Engineering and Communications from the Department of Electrical and Computer Engineering of the University of Thessaly, Volos, Greece. He is currently a PhD student at the Department of Computer Science and Engineering of The Hong Kong University of Science and Technology and a member of the HKUST-DT System and Media Lab. During the summer of 2014, he was a visiting PhD student at Ecole polytechnique federale de Lausanne (EPFL). His main research interests are in the areas of mobile computing, device-to-device ecosystems and cryptocurrencies.

Sujit Gujar is an Assistant Professor at the Machine Learning Laboratory at IIITH. Prior to this, he was a Sr. Research Associate at the Indian Institute of Science. He worked as a post-doctoral researcher at Ecole polytechnique federale de Lausanne (EPFL).
He also worked as a research scientist with Xerox Research Centre India, where he contributed to developing a technology that enables enterprises to use crowdsourcing as a complementary workforce. His research interests are Game Theory, Mechanism Design, Machine Learning, and Cryptography applied to modern web and AI applications such as auctions, Internet advertising, crowdsourcing, and multi-agent systems. His doctoral thesis was awarded the alumni medal for the best doctoral thesis in the Department of Computer Science and Automation at the Indian Institute of Science. He was a recipient of the Infosys fellowship for his doctoral research. He has co-authored 5 journal publications, 1 book chapter and 22 conference/workshop papers, and he holds 11 patents.

Boi Faltings is a full professor of computer science at the Ecole Polytechnique Federale de Lausanne (EPFL), where he heads the Artificial Intelligence Laboratory, and has held visiting positions at NEC Research Institute, Stanford University and the Hong Kong University of Science and Technology. He has co-founded 6 companies in e-commerce and computer security and acted as advisor to several other companies. Prof. Faltings has published over 300 refereed papers and graduated over 30 Ph.D. students, several of whom have won national and international awards. He is a fellow of the European Coordinating Committee for Artificial Intelligence and a fellow of the Association for the Advancement of Artificial Intelligence (AAAI). He holds a Diploma from ETH Zurich and a Ph.D. from the University of Illinois at Urbana-Champaign.

Pan Hui received his Ph.D. degree from the Computer Laboratory, University of Cambridge, and earned his MPhil and BEng both from the Department of Electrical and Electronic Engineering, University of Hong Kong. He is currently a faculty member of the Department of Computer Science and Engineering at the Hong Kong University of Science and Technology, where he directs the HKUST-DT System and Media Lab. He also serves as a Distinguished Scientist of Telekom Innovation Laboratories (T-labs) Germany and an adjunct Professor of social computing and networking at Aalto University, Finland. Before returning to Hong Kong, he spent several years at T-labs and Intel Research Cambridge. He has published more than 150 research papers and has some granted and pending European patents. He has founded and chaired several IEEE/ACM conferences/workshops, and has been serving on the organising and technical program committees of numerous international conferences and workshops including ACM SIGCOMM, IEEE Infocom, ICNP, SECON, MASS, Globecom, WCNC, ITC, ICWSM and WWW. He is an associate editor for IEEE Transactions on Mobile Computing and IEEE Transactions on Cloud Computing.
Electronic and spin dynamics in the insulating iron pnictide NaFe_0.5Cu_0.5As
 Zheng Liu
 Institute for Advanced Study, Tsinghua University, Beijing 100084, China; Department of Materials Science and Engineering, University of Utah, Salt Lake City, UT 84112, USA; Institute for Quantum Science and Engineering, and Department of Physics, Southern University of Science and Technology, Shenzhen 518055, China; Collaborative Innovation Center of Quantum Matter, Beijing 100084, China
 December 30, 2023
=============================================================================

NaFe_0.5Cu_0.5As represents a rare exception in the metallic iron pnictide family, in which a small insulating gap is opened. Based on a first-principles study, we provide a comprehensive theoretical characterization of this insulating compound. The Fe^3+ spin degree of freedom is quantified as a quasi-1D S=5/2 Heisenberg model. The itinerant As hole state is downfolded to a p_xy-orbital hopping model on a square lattice. A unique orbital-dependent Hund's coupling between the spin and the hole is revealed. Several important material properties are analyzed, including (a) factors affecting the small p-d charge-transfer gap; (b) the role of the extra interchain Fe; and (c) the quasi-1D spin excitation in the Fe chains. The experimental manifestations of these properties are discussed.

§ INTRODUCTION

While the physics of high-temperature cuprate superconductors is generally attributed to doping a Mott insulator, <cit.> the origin of iron-based superconductivity appears barely related. <cit.> Interestingly, it was recently found that upon Cu substitution the iron pnictide superconductor NaFe_1-xCu_xAs exhibits Mott-insulating-like behavior, <cit.> which provides a rare example bridging these two intriguing classes of superconductors. Indeed, scanning tunneling spectroscopy revealed striking similarities between the local electronic structure of NaFe_1-xCu_xAs and lightly doped cuprates. <cit.> More recently, the x=0.5 limit, i.e. NaFe_0.5Cu_0.5As, was reached, in which Cu atoms were found to form well-ordered nonmagnetic 1D chains while the Fe atoms form 1D antiferromagnetic (AFM) chains. <cit.> Such a stoichiometric insulating sample largely excludes an insulating phase originating from Anderson localization. Angle-resolved photoemission spectroscopy (ARPES) revealed a narrow band gap of ∼16 meV, which was further examined by density functional theory (DFT) calculations plus the onsite U correction (DFT+U). <cit.> Considering that the gap size is comparable to that of a narrow-gap semiconductor, charge excitations are expected to remain active at ambient temperature. In addition, the magnetically ordered quasi-1D Fe chains should support unique spin excitations, which might provide clues to understanding the interplay between AFM order and superconductivity in Fe-based superconductors. <cit.> This article aims to provide a systematic description of the low-energy physics in NaFe_0.5Cu_0.5As within the DFT+U formalism. The paper is organized as follows.
Section II describes the methodology. Section III reproduces the DFT+U results based on the experimentally determined chain-like structure and further clarifies the charge-transfer nature of the energy gap and the spin state of each element. Section IV studies how the electronic structure changes when this chain structure is perturbed; the result indicates a close connection between the insulating phase and the formation of quasi-1D AFM chains, and it also explains the robustness of this insulating phase when the iron concentration increases. Section V quantifies the effective spin model and discusses its manifestation in experiment. Section VI analyzes the itinerant As holes and their coupling to the local spins, and Section VII concludes this article.

§ CALCULATION METHOD

The experimentally determined structure of NaFe_0.5Cu_0.5As contains alternately aligned AFM Fe and nonmagnetic Cu chains along the [100] direction [Fig. <ref>], as revealed by high-resolution TEM measurements and neutron scattering <cit.>. Starting from this lattice and magnetic structure, DFT+U calculations are performed using the Vienna Ab initio Simulation Package (VASP) <cit.>. The +U correction follows the simplified (rotationally invariant) approach introduced by Dudarev <cit.>:

E_DFT+U = E_DFT + (U_eff/2)∑_σ,m[n^σ_m,m - (n̂^σn̂^σ)_m,m],

where m is the magnetic quantum number of the five Fe 3d-orbitals (for the present case), and n̂ is the onsite occupancy matrix. This +U correction can be understood as adding a penalty functional to the DFT total energy expression that forces each d-orbital to be either fully occupied or fully empty, i.e., n̂^σ = n̂^σn̂^σ. We set U_eff=2.8 eV, as used in the previous study to obtain the correct insulating gap size <cit.>.

With respect to the DFT functional, electron exchange and correlation are treated using the Perdew-Burke-Ernzerhof functional <cit.> with the projector augmented wave method. <cit.> Plane-wave basis sets with a kinetic energy cutoff of 300 eV are used to expand the valence electron wave functions. A Monkhorst-Pack <cit.> k-point grid of 8×8×4 is adopted to represent the first Brillouin zone. The electronic and spin ground state is determined self-consistently until the energy threshold of 10^-5 eV is reached.

§ THE INSULATING GROUND STATE

In Figs. <ref> and <ref> we plot the ground-state atomic and magnetic structure (structural data from Ref. <cit.>) as well as the DFT+U band structure of NaFe_0.5Cu_0.5As, and mark each band with its chemical composition. The result is very different from the metallic iron pnictides, as reflected by a small gap at the Fermi level [Fig. <ref>]. More importantly, the Fe bands split into two sets: one right above the Fermi level (upper Hubbard bands, UH), and the other deep inside the occupied states (lower Hubbard bands, LH). The occupied band edge consists of nearly pure As-orbitals free from strong correlation. This explains the surprisingly excellent agreement between the DFT+U and ARPES results around the occupied band edge. <cit.> The sharp difference in the chemical composition between the occupied and unoccupied band edges also explains the strongly asymmetric dI/dV spectral line shape observed by STM when the bias reverses. <cit.> We should point out that a previous DFT+U calculation attributed a large fraction of Cu-orbital contribution to the occupied band edge, <cit.> whereas our result indicates that the majority of the Cu bands stay deep below the Fermi level.
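As an illustration of the penalty functional above, the correction energy can be evaluated directly from a pair of on-site occupancy matrices. The following sketch is a toy example of our own, not part of the VASP workflow; it shows that the correction vanishes for idempotent occupancies, i.e., for orbitals that are fully occupied or fully empty.

```python
import numpy as np

def dft_plus_u_penalty(n_up, n_dn, u_eff=2.8):
    """E_U = (U_eff/2) * sum_sigma Tr[ n^sigma - n^sigma n^sigma ],
    with n^sigma the 5x5 occupancy matrix of the Fe 3d shell."""
    return sum(0.5 * u_eff * np.trace(n - n @ n) for n in (n_up, n_dn))

# High-spin d^5 (Fe^3+): majority channel full, minority empty -> no penalty.
print(dft_plus_u_penalty(np.eye(5), np.zeros((5, 5))))        # 0.0

# Fractional occupations are penalised, pushing orbitals towards 0 or 1:
print(dft_plus_u_penalty(0.5 * np.eye(5), np.zeros((5, 5))))  # 1.75 eV
```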
Both the Fe- and Cu-dominated bands are narrow, distinct from the dispersive As bands, which reflects the localized nature of the d-electrons. The valence states of Fe and Cu can then be determined by counting the number of occupied bands of each element. This analysis indicates a +1 valence state for Cu (3d^10) and a +3 valence state for Fe (3d^5), which is consistent with the experimental observation that only Fe atoms exhibit a local magnetic moment. <cit.> The half-filled Fe d-bands all have the same spin polarization. Therefore, the Fe^3+ ion is in the high-spin state, effectively forming an S=5/2 moment. In comparison, in NaFeAs iron is in the +2 valence state (d^6). Therefore, Cu substitution of Fe can effectively be considered as hole doping. For NaFeAs, the total spectral weight in the neutron scattering measurement suggests an effective S=1/2 local spin. <cit.> The same measurement indicates a much larger local moment in NaFe_0.5Cu_0.5As, but still less than S=5/2. <cit.> There are many reasons why the experimentally determined value may differ from the expected one, and the underlying reason is worthy of further investigation.

A schematic energy diagram is drawn in Fig. <ref>. Several key energy scales can be readily extracted from Fig. <ref>, which are summarized in Table <ref>. The energy gap around the Fermi level (E_g) is of a p-d charge-transfer origin, just like in cuprates <cit.>. The green region indicates the itinerant As p-bands, which extend from around -6 eV up to the Fermi level, despite intertwining with the Fe LH bands and the Cu bands in between. The splitting between the UH and LH bands is determined by the intra-orbital Hubbard repulsion of the Fe 3d-orbitals (U). We note that the effective Coulomb repulsion U_eff=2.8 eV as set for the DFT+U calculation is defined as <cit.>:

U_eff = ⟨⟨ mm' | V_ee | mm'⟩ - ⟨ mm' | V_ee | m'm⟩⟩_m≠ m' = (U+4U')/5 - J_H,

which is an average of the intra-orbital Coulomb repulsion (U for m=m') and the inter-orbital repulsion (U' for m≠ m') minus the Hund's coupling J_H. It is possible to further determine U' and J_H by considering that U, U' and J_H are not independent. We assume that the screened Coulomb potential (V_ee) is still spherically symmetric, in which case the relation U'+2J_H=U is known to hold. <cit.> Then, in combination with Eq. (<ref>), the values of U' and J_H can be calculated (Table <ref>).

§ FACTORS AFFECTING THE CHARGE-TRANSFER GAP

The small charge-transfer gap arises from a delicate separation between the Fe UH band and the As p-band. Fig. <ref> shows that the gap is closed by enforcing a ferromagnetic spin configuration. We have also artificially rearranged the Fe/Cu atoms into a checkerboard pattern [Fig. <ref>] or randomly [Fig. <ref>]. In all these cases, the charge-transfer gap no longer exists. These results indicate the importance of the quasi-1D AFM chain structure to the observed insulating ground state. A recent DFT+dynamical mean-field theory calculation also found that the correct insulating ground state could not be reproduced without the quasi-1D AFM magnetic order. <cit.> Nevertheless, the splitting of the UH and LH Fe bands, which signifies the Mott localization of the d-electrons, is largely independent of the magnetic or atomic structure.

Another question is why this insulating phase appears well before the x=0.5 stoichiometric limit is reached. A recent work applying the real-space Green's function method emphasized the role of disorder. <cit.>
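For completeness, the algebra that yields U' and J_H can be written out: inserting U' = U - 2J_H into Eq. (<ref>) gives U_eff = U - (13/5)J_H, so J_H = 5(U - U_eff)/13. A short sketch is given below; the value of U, which in the paper is read off from the UH-LH splitting, is a placeholder here, not the value from Table <ref>.

```python
def hund_parameters(u, u_eff=2.8):
    """Solve U_eff = (U + 4*U')/5 - J_H together with U' = U - 2*J_H."""
    j_h = 5.0 * (u - u_eff) / 13.0
    u_prime = u - 2.0 * j_h
    return u_prime, j_h

u = 4.0   # eV; hypothetical intra-orbital U for illustration
u_prime, j_h = hund_parameters(u)
print(f"U' = {u_prime:.2f} eV, J_H = {j_h:.2f} eV")  # U' = 3.08 eV, J_H = 0.46 eV
```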
Here, we would like to point out that the interchain Fe is in a different valence state. Within the DFT+U formalism, we have constructed a 2×2×1 supercell and replaced one of the Cu atoms with Fe. The DFT+U band structure indicates that the charge-transfer gap indeed remains open. Fig. <ref> shows the orbital-resolved bands obtained by projecting the Bloch wave functions onto the in-chain Fe (Fe1) and the inter-chain Fe (Fe2) ions. We observe that Fe1 [Fig. <ref>, left] is half-filled as in NaFe_0.5Cu_0.5As [Fig. <ref>], whereas two additional occupied bands dominated by Fe2 can be found below the Fermi level [Fig. <ref>, right]. Based on this observation, the robustness of the gap against the extra Fe atoms can be explained as follows. The key point is that the in-chain Fe (3d^5) structure is not perturbed. The inter-chain Fe atoms are roughly in a 3d^7 state; each nominally loses one electron, the same as the replaced Cu^1+ ion, and thus they do not introduce extra charge carriers. We propose that the existence of two types of Fe ions can be verified by X-ray absorption spectroscopy measurements.

§ SPIN EXCHANGE AND EXCITATION SPECTRUM

After clarifying the Mott localization associated with the Fe^3+ (3d^5) electrons, the spin excitation in NaFe_0.5Cu_0.5As can be reasonably described by an S=5/2 spin model. We assume a Heisenberg-type model:

H_d = J_1∑_i S_i · S_i+a_1/2 + J_2∑_i S_i · S_i+a_2 + J_3∑_i (S_i · S_i+a_2/2+a_3/2 + S_i · S_i-a_2/2+a_3/2),

where a_1,2,3 are the three lattice vectors, as shown in Fig. <ref>. The in-chain exchange is expected to be the dominant spin-spin interaction. The inter-chain coupling is weak, yet important for forming the 3D magnetic order at finite temperature. For each dimension, we include the nearest-neighbor term only, as shown in Fig. <ref>.

The three exchange parameters J_1,2,3 are quantified by calculating the DFT+U total energy increase after applying a perturbation to the ground-state spin configuration. The perturbation should be as small as possible to avoid an insulator-to-metal transition, yet large enough to be numerically stable. To extract the in-chain exchange, we therefore rotate a single spin in each chain by a small angle θ [Fig. <ref>]. By adding a penalty term to the standard Kohn-Sham potential, our noncollinear spin-polarized DFT calculations are able to obtain the total energy of these excited magnetic configurations. <cit.> We then assume a classical mapping between the DFT+U total energy and Eq. (<ref>): Δ E_1(θ) = -zJ_1 S^2(cos θ-1), where z is the number of perturbed bonds within the unit cell. Finally, J_1 is determined by a linear fit between Δ E_1 and cos θ [Fig. <ref>, left panel]. This method has been successfully implemented to study the spin excitations of other iron-based superconductors. <cit.> To extract the inter-chain exchange, we apply a global rotation to the spins in a single chain or a single Fe layer, and a similar linear fit can be performed [Figs. <ref> and <ref>]. Due to the small magnitude of the inter-chain exchange, the rotation angle in these two cases should be much larger. Nevertheless, the perturbed states stay in the proximity of the magnetic ground state, and the Mott physics does not change. In all cases, the numerical data are found to be well reproduced by the linear fit, which in turn justifies the Heisenberg-type exchange employed in Eq. (<ref>). We summarize the values of J_1,2,3 in Table <ref>.
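The fitting step can be condensed into a few lines. The sketch below generates synthetic energy differences from the classical mapping Δ E_1(θ) = -zJ_1S^2(cos θ-1) and recovers J_1 by a linear fit in cos θ; the angles and the input J_1 are made-up numbers for illustration, not the DFT+U data.

```python
import numpy as np

def fit_exchange(theta_deg, delta_e, z, s=2.5):
    """Fit Delta E = -z*J*S^2*(cos(theta) - 1): the slope of Delta E versus
    (cos(theta) - 1) equals -z*J*S^2, from which J follows."""
    x = np.cos(np.radians(theta_deg)) - 1.0
    slope = np.polyfit(x, delta_e, 1)[0]
    return -slope / (z * s**2)

theta = np.array([5.0, 10.0, 15.0, 20.0])    # rotation angles of one spin
j_in = 0.030                                  # eV, assumed in-chain exchange
dE = -2 * j_in * 2.5**2 * (np.cos(np.radians(theta)) - 1)   # z = 2 bonds
print(f"fitted J_1 = {fit_exchange(theta, dE, z=2):.4f} eV")  # 0.0300
```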
AFM exchange corresponds to a positive J, and FM exchange corresponds to a negative J. The magnon spectrum ω(k) can then be calculated by the standard spin-wave expansion <cit.>:

ω(k) = √(A(k)^2 - B(k)^2),
A(k) = 2SJ_1 - 2SJ_2[1-cos(k_2a_2)] + 4SJ_3,
B(k) = 2SJ_1cos(k_1a_1/2) + 4SJ_3cos(k_2a_2/2)cos(k_3a_3/2).

Fig. <ref> shows the dispersion along the in-plane high-symmetry directions in momentum space. Along the chain, a typical AFM spin-wave dispersion is observed. The band top is reached at (π/a_1, 0, 0) with the energy Δ_1 = S(2J_1+4J_3) ∼ 2SJ_1. The magnon energy does not return to zero at (±2π/a_1,0,0) due to the out-of-plane exchange J_3; the energy gap there is Δ_3 = 4S√(2J_1J_3). The weak interchain exchange mixing with J_1 also leads to a noticeable dispersion perpendicular to the chain. Around (0, ±π/a_2, 0), the magnon energy is Δ_2 ≈ 4S√(J_1(|J_2|+J_3)). According to the calculated values of J_1,2,3, Δ_2/Δ_1 = 2√((|J_2|+J_3)/J_1) ∼ 1/4. Above Δ_2, the constant-energy contour as measured in an inelastic neutron scattering experiment should display typical 1D features, in contrast to the low-energy anisotropic 2D topology. This energy scale conversely provides a way to determine the weak interchain exchange experimentally.

§ ITINERANT HOLES AND THEIR COUPLING TO LOCAL SPINS

The previous section focused on the localized Fe d-electrons. We now turn to the itinerant As p-electrons lying right below the Fermi level. Due to the small charge-transfer gap, charge fluctuation between the ground-state d^5p^6 configuration and the excited d^6p^5 configuration is possible at ambient temperature. The thermally activated mobile holes associated with the itinerant p-bands are considered to dominate the charge transport.

The hole valley centered at the M-point arises from the in-plane As p-orbitals, as shown in Fig. <ref> by the orbital- and spin-resolved band structure. The principal axes of the in-plane p-orbitals are chosen along the As-As bonding directions, which are rotated by 45 degrees with respect to the a_1-a_2 axes. The interesting point is that under such a projection the p_x and p_y electrons are nearly decoupled around the band edge. Additionally, they carry opposite spins [Fig. <ref>]. In other words, the hole carriers feature a unique orbital-dependent spin polarization, as illustrated by the schematic plot in Fig. <ref>.

To reveal the physical origin, we specify the As p_xy orbitals as the starting point to construct the corresponding maximally localized Wannier functions out of the valence-band Bloch wave functions. We employ the "disentanglement" procedure introduced in Ref. <cit.> to separate out the Fe d-, Cu d- and As p_z-dominated bands. The resulting energy bands spanned by the As p_xy-like Wannier functions are plotted in Fig. <ref>, and they nicely reproduce the overall dispersion of the valence bands. Note that this optimal subspace consists of (2 orbitals/As) × (2 As/layer) × 4 layers = 16 bands in total. The localized d-bands [cf. Fig. <ref>] are automatically projected out. A minimal model can be written by neglecting the hopping terms between the p_x and p_y orbitals and the coupling between different As layers:

H_p_x = μ∑_i c_x,i^† c_x,i + t_σ∑_i c_x,i^† c_x,i+(a_1+a_2)/2 + t_π∑_i c_x,i^† c_x,i+(a_1-a_2)/2 + H.c.,

where μ is the p_xy-orbital chemical potential, which rigidly shifts the band energy and determines the position of the hole-band top relative to the Fermi level. H_p_y can be obtained by simply interchanging t_σ and t_π. Fig. <ref> (bottom panel) shows the valence-band dispersion from the minimal model with t_σ = -0.9 eV and t_π = 0.3 eV.
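The minimal model is simple enough to evaluate by hand. The sketch below computes the p_x band energy ε(k) = μ + 2t_σcos[(k_1a_1+k_2a_2)/2] + 2t_πcos[(k_1a_1-k_2a_2)/2] implied by Eq. (<ref>), using the quoted hoppings; μ is set to zero for illustration. Since t_σ < 0, the band maximum, i.e., the hole valley, indeed falls at the M point.

```python
import numpy as np

mu, t_sigma, t_pi = 0.0, -0.9, 0.3   # eV; mu = 0 chosen for illustration

def eps_px(k1a1, k2a2):
    """p_x band of the minimal model; the p_y band follows by
    interchanging t_sigma and t_pi."""
    return (mu + 2 * t_sigma * np.cos((k1a1 + k2a2) / 2)
               + 2 * t_pi    * np.cos((k1a1 - k2a2) / 2))

print(eps_px(0.0, 0.0))        # Gamma: mu + 2t_sigma + 2t_pi = -1.2 eV
print(eps_px(np.pi, np.pi))    # M: mu - 2t_sigma + 2t_pi = +2.4 eV (band top)
print(eps_px(np.pi, -np.pi))   # band bottom: mu + 2t_sigma - 2t_pi = -2.4 eV
```

The resulting 4.8 eV bandwidth of this single band is roughly consistent with the several-eV extent of the As p-bands discussed in Section III.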
Notwithstanding its simplicity, the hole valley at the M point and the total p-band width (c.f. W_p in Table <ref>) are correctly described. From Eq. (<ref>), the difference between the p_x and p_y electrons becomes clear. Due to the orbital anisotropy, p_x and p_y electrons form strong σ bonds along the [11̅0] and [110] directions, respectively. Note that these two perpendicular directions cut different Fe atoms in the AFM spin chain [Fig. <ref>]. A crosscheck reveals that the spin direction of the p_xy holes is parallel to that of the intersecting Fe. Thus, the orbital-dependent spin polarization of the holes can be explained by the directional Hund's coupling to the different sublattices of the AFM chain, which can be described by:
H_pd = -J_pd∑_γ,⟨ i,j ⟩_γ s_γ i · S_j,
where γ=x,y, s_γ i=∑_αβ c_γ iα^+ σ⃗_αβ c_γ iβ and ⟨ ij ⟩_γ denotes the nearest-neighbor sites along the γ-direction. The Hund's coupling J_pd arises from the overlap between the As p_xy orbitals and the Fe d-orbitals. We roughly estimate J_pd ∼ 0.5 eV by referring to the energy splitting between the spin majority/minority valleys [see the horizontal lines marked in Fig. <ref>]. It is reasonable to speculate that a spin-polarized charge current exists along the [11̅0] or [110] direction. The subtlety here is that the top and bottom As layers of an As-Fe-As sandwich have exactly the opposite orbital polarization, as dictated by the inversion symmetry. To obtain a net spin current, one needs to break this symmetry, e.g. by applying a perpendicular electric field. Breaking the degeneracy between the p_x and p_y orbitals, e.g. by applying a uniaxial strain along the [110] direction, should enhance the degree of spin polarization. Furthermore, due to the spin-hole coupling, magnetoresistance may also be observed.
§ DISCUSSION AND CONCLUSION
By combining all the results above, the complete low-energy model of NaFe_0.5Cu_0.5As can be written as:
H = H_d + H_p + H_pd.
A fundamental difference between NaFe_0.5Cu_0.5As and the cuprates is the Hund's coupling between the hole and the local spin. Recall that when a hole is doped into high-Tc cuprate superconductors, it goes predominantly into a 2p orbital of an oxygen site. Together with the hole on a Cu site, it forms a singlet state commonly named after Zhang and Rice,<cit.> which has been considered as a starting point to discuss the microscopic origin of the normal-state and superconducting properties of cuprates. Here, in NaFe_0.5Cu_0.5As, due to the ferromagnetic coupling, the holes on the p-orbital of As do not bind with the Fe spin into singlets. This difference may be due to the nonplanar Fe-As bonding geometry and the large spatial extension of the As 4p-orbitals. In some sense, NaFe_0.5Cu_0.5As appears more like a narrow-gap magnetic semiconductor. The thermally activated mobile holes associated with the itinerant As p-bands carry the charge current, and their spins can be polarized by the underlying magnetic ions. It will be interesting to see if this parent compound NaFe_0.5Cu_0.5As can be hole doped. By reducing the hole excitation energy to zero, the system becomes a typical “Hund's metal”, which has been extensively studied in the context of iron-based superconductors. <cit.> A two-fluid model similar to Eq. (<ref>) is considered to spawn an intricate interplay of nematicity, spin-density wave and superconductivity.<cit.> Our discussion so far does not take into account the charge fluctuation of Fe.
We assume that when the temperature is not high, the small concentration of thermally excited electrons does not destroy the AFM order of the Fe chains. A rigorous study is, however, beyond the capability of the DFT+U formalism. This problem is equivalent to electron-doping the half-filled quasi-1D Fe chains (the As p-bands become irrelevant). We refer to a related density matrix renormalization group calculation, which reveals exotic magnetic order within the orbital-selective Mott regime. <cit.> This scenario in Cu-substituted iron-based superconductors is discussed in Ref. <cit.>.
§ ACKNOWLEDGEMENTS
We thank Yu Song, Pengcheng Dai, Yayu Wang, Hong Yao and Zhengyu Weng for helpful discussions. Z.L., S.Z. and Y.H. acknowledge the Tsinghua University Initiative Scientific Research Program. S.Z. is supported by the National Postdoctoral Program for Innovative Talents (BX201600091) and funding from the China Postdoctoral Science Foundation (2017M610859). J. M. and F. L. acknowledge the support from US-DOE (Grant No. DE-FG02-04ER46148). The primary calculations were performed on the Tianhe-II supercomputer provided by the National Supercomputer Center in Guangzhou, China. PARATERA is also sincerely acknowledged for continuous technical support in high-performance computation.
Novel Sensor Scheduling Scheme for Intruder Tracking in Energy Efficient Sensor Networks
Raghuram Bharadwaj D.^1, Prabuchandran K.J.^1, and Shalabh Bhatnagar^1 ^1 Authors are with the Department of Computer Science and Automation, Indian Institute of Science, Bangalore, India December 30, 2023
=================================================================================================================================================================================================================
We consider the problem of tracking an intruder using a network of wireless sensors. For tracking the intruder at each instant, the optimal number and the right configuration of sensors has to be powered. As powering the sensors consumes energy, there is a trade-off between accurately tracking the position of the intruder at each instant and the energy consumption of the sensors. This problem has been formulated in the framework of a Partially Observable Markov Decision Process (POMDP). Even for the state-of-the-art algorithm in the literature, the curse of dimensionality renders the problem intractable. In this paper, we formulate the Intrusion Detection (ID) problem with a suitable state-action space in the framework of POMDPs and develop a Reinforcement Learning (RL) algorithm utilizing the Upper Confidence Tree Search (UCT) method to solve the ID problem. Through simulations, we show that our algorithm performs and scales well with increasing state and action spaces.
§ INTRODUCTION
The problem of detecting an intruder (the Intrusion Detection (ID) problem) using a network of sensors arises in various applications like tracking the movement of wild animals in the forest, house/shop surveillance for safety and security, and so on. In this problem, the objective of the ID system is to track one or more intruders moving in the field of a wireless sensor network (WSN). Typically, WSNs operate on a limited power supply. This imposes a limitation on the number of sensors (energy budget) that can be switched ON over a time period when tracking the intruders and thus constrains the maximum achievable tracking accuracy. Hence, the focus of this paper is to propose a novel ID algorithm that respects such resource constraints. We consider a variant of this problem in which an intruder moves in a special network configuration, namely a sensor grid in which battery-operated sensors are placed, one in each block of the grid. In every time period, the intruder moves from one block to another according to specified governing dynamics. The ID system decides to keep the sensors in some of these blocks ON in order to track the intruder. Whenever the intruder moves into a block where the sensor is ON, the position of the intruder gets recorded. If instead the intruder moves into a block where the sensor is OFF, then the intruder's position will not be recorded for that time period. The challenge here is to decide on the optimal number and the right configuration of sensors to be powered ON in order to balance the conflicting objectives of minimizing the energy consumption of the network and maximizing the intruder tracking accuracy. A significant body of research exists that proposes solutions to the ID problem. In <cit.>, a comprehensive survey of adaptive sensing methods and POMDP solution methodologies is provided. In <cit.>, a sensor scheduling algorithm has been proposed for detecting an intruder with a known trajectory. In <cit.>, mobile sensors along with static sensors are deployed.
They proposed an algorithm in which these mobile sensors move and detect the position of the intruder. In <cit.>, a method based on the extended Kalman filter has been applied to a directional sensor network to select a minimal set of sensor nodes. In <cit.>, a POMDP for scheduling sensors has been formulated, and Monte Carlo sampling along with particle filters for belief state estimation has been proposed. This algorithm has also been applied in the context of autonomous UAV tracking <cit.>. In <cit.>, the ID problem has been formulated as a partially observable Markov decision process (POMDP) and two heuristic solutions have been proposed. Note that solving the POMDP for the optimal solution is computationally intractable, and one often resorts to heuristic POMDP solutions. In the first method, they have proposed Q_MDP with an assumption of observation-after-control, i.e., the position of the intruder at any time period becomes known to the system from the immediately following and all future time periods. Under this assumption, they show that the problem can be decomposed into separate sub-problems where individual decisions can be arrived at for each sensor. Each sub-problem can be solved using policy iteration <cit.>. As the policy obtained is myopic in nature, <cit.> developed a point-based approximation method based on the idea of Perseus <cit.>. The key idea behind this method is to find the optimal action for a reachable set of simulated beliefs (see Section <ref>), under the assumption that the intruder moves along a special path, i.e., the transition probability matrix has a special sparse structure so that only a finite number of path configurations are actually feasible. In many practical scenarios, the curse of dimensionality (exponential growth of the state and action space of the POMDP) renders the problem even more challenging. Point-based techniques attempt to alleviate the curse of dimensionality in POMDPs by performing value function back-ups at specific belief points rather than at all the belief points. The efficiency of these techniques depends on the selected belief points. In <cit.>, an effective way of selecting the belief points has been discussed. This technique is then combined with value iteration to arrive at a Point-Based Value Iteration (PBVI) algorithm. However, this method is not scalable as it uses full-width computation accounting for all possible actions, observations and next states in the tree search <cit.>. In <cit.>, an algorithm VDCBPI that combines the Value Directed Compression (VDC) and Bounded Policy Iteration (BPI) techniques to tackle the state explosion problem has been proposed. Using VDC, the belief space is compressed into a smaller subspace containing only the information necessary to evaluate a policy. BPI is then employed to perform the policy improvement only on the reachable beliefs to determine an optimal policy. In <cit.>, a hybrid value iteration algorithm for POMDPs that combines the advantages of both the point-based techniques and the tree search methods has been proposed. In the first step of this hybrid algorithm, an offline computation is performed to obtain upper bounds on the optimal value function. In the subsequent step, this computation is used in the online tree search for obtaining the optimal action. In <cit.>, a two-timescale Q-Learning algorithm with function approximation has been developed to mitigate the curse of dimensionality problem.
In their algorithm, the policy gradient update is carried out on the faster timescale and the Q-value update on the slower timescale. The challenge with function approximation, however, primarily lies in choosing the right features for approximating the Q-values. While all the above-mentioned algorithms mitigate the problem of state explosion, the problem of action explosion needs to be handled to obtain scalable solutions. In addition to the above-mentioned POMDP model, many other frameworks have been considered in the literature. In <cit.>, a region prediction sensor activation algorithm (PRSA) has been proposed, where a subset of active nodes is selected based on the position and velocity of the intruder. Among the selected subset of nodes, the lowest number of essential nodes will be switched ON. In <cit.>, a model in which sensor positions are arranged in the form of a polygon has been considered, and an A-star algorithm was applied for selecting the optimal nodes in the polygon. Our contribution in this paper is to apply general RL algorithms to solve the POMDP formulated in <cit.> in a way that tackles both the state space and action space explosion. Our algorithms do not make any assumptions on the structure of the network or the movement of the intruder, and also do not use additional information at the controller. The following are our contributions:
* We solve the problem of state space explosion arising in the problem of Intrusion Detection (ID) by using the MCTS algorithm. Unlike the prior methods considered for the ID problem to mitigate the state space explosion, like Monte-Carlo sampling methods and the point-based value iteration methods <cit.>, the MCTS algorithm searches the belief space in a sequential best-first order.
* We solve the problem of action space explosion in the ID problem by suitably reformulating the action space of the POMDP in <cit.>. This reduces the exponential number of actions to be considered in the ID problem to a few finite actions. Furthermore, this makes it amenable to the application of the MCTS algorithm.
* Our reformulation of the action space is inspired by a simple ID_TG algorithm that greedily selects the top sensors (which implicitly ignores the current belief value). The key idea in our ID_γ_MCTS algorithm is to choose the top sensors dynamically based on the current belief value.
* Our ID_γ_MCTS algorithm is a scalable solution to the ID problem without any extra assumption. This renders our solution very practical.
§ POMDP FRAMEWORK FOR THE ID PROBLEM
Let us consider a sensor grid where sensors are placed, one on each block of the grid. The intruder moves from one block to another in the grid in each time period. The sensors that are kept ON in the grid send their observations of whether the intruder was seen on their block to the central controller. Based on this information, the central controller finds the optimal action to be taken, i.e., it decides how many and which sensors need to be switched ON in the next time period, and broadcasts this action to all the sensors in the network. This sequential decision-making problem can be posed in the framework of a Partially Observable Markov Decision Process (POMDP). An MDP is defined via the tuple <S,A,P,K>.
Here, S is the set of states that represent the different block positions of the intruder, A is the set of actions corresponding to the number and the configuration of sensors that need to be switched ON at a given time period, P is the transition probability matrix governing the state evolution, where P(s_k = i, s_k+1 = j, u_k = a) represents the probability dynamics of the intruder, i.e., the probability of moving from the current block position, the state s_k = i at time k, to the next state s_k+1 = j at k+1 under action a, and K(s_k = i, s_k+1 = j, u_k = a) corresponds to the single-stage cost incurred by taking action a when the intruder position at time k was i and the position at k+1 is j. The objective here is to obtain an optimal stationary policy π: S → A. This policy gives the optimal action to be chosen given the current block position of the intruder, so that the long-term cost objective (constructed from the single-stage cost function) is minimized. In order to determine the optimal stationary policy, the ID system should know the state or position of the intruder at all time periods. However, in our setting, whenever the intruder moves to a block in which the sensor is not turned ON, the intruder cannot be tracked. In this case, the position of the intruder at the next time instant will remain unknown. Thus, it is not possible to solve this problem directly in the framework of an MDP. We have to recast this problem as a partially observable MDP (POMDP) to capture the unknown state information. A POMDP <cit.> is formally defined as a tuple <S,A,Z,P,O,K>. Here, the tuple <S,A,P,K> remains the same as in the MDP setup and Z corresponds to the set of observations. As we do not directly observe the state, an observation corresponding to the state and the chosen action is obtained. This is modeled using the observation function O, i.e., O: S × A → π(Z), where π(·) is the probability distribution over the space of observations. In simple terms, we obtain partial information (an observation) about the state in a POMDP, as opposed to obtaining complete information about the state in an MDP. As described earlier, in the ID problem the state information can become unknown based on our decisions, and thus our problem lies in the realm of POMDPs. We now formally present the POMDP formulation for the ID problem <cit.>.
* Let n denote the total number of sensors in the network.
* Let the actual position of the intruder at time k be denoted as s_k.
* The stochastic matrix P of dimension (n+1) × (n+1) models the actual transition probability of the intruder movement between various blocks. Note that the transition between states modeling the intruder movement for this POMDP does not depend on our decision or action, i.e., on which sensors have been turned ON.
* Action Space: At time k, let u_k denote the vector that indicates the decision on which sensors to switch ON for the next time period, u_k = (u_k,l)_l=1,…,n ∈ {0,1}^n, where 0 denotes the action to keep the sensor OFF and 1 denotes the action to keep the sensor ON. Here, u_k,l denotes the decision for the lth sensor in the kth time period. Note that the number of actions is exponential in the number of sensors, thus leaving us with a large number of actions from which to choose.
* Observations: There can be three possibilities. If we track the position of the intruder, then the observation is the state of the system s_k itself.
In some cases, we may not be able to track the position of the intruder. We let ε represent this situation. The final possibility is that the intruder may have moved out of the network. Let 𝒯 indicate this situation. We assume that if the intruder moves out of the network (i.e., observation = 𝒯), this information gets immediately known to the controller. So,
o_k+1 = s_k+1 if s_k+1 ≠ 𝒯 and u_k,s_k+1 = 1,
o_k+1 = ε if s_k+1 ≠ 𝒯 and u_k,s_k+1 = 0,
o_k+1 = 𝒯 if s_k+1 = 𝒯.
* The history or information content available at time k is: I_k = {o_0,u_0,o_1,u_1,…,o_k,u_k}.
* The optimal action is obtained from the optimal policy according to: u_k = μ_k(I_k), where μ_k denotes the optimal action selection function given the `information vector' at time k.
* State Space: As the number of stages or time slots increases, the size of the history increases, and thus it becomes difficult to compute the optimal policy using finite memory. So we need to develop a different representation for the state space. We solve this problem with the help of belief vectors. The belief vector p_k is an (n+1) × 1 vector, in which p_k(l) indicates the probability of the intruder being at position l at time k. This evolves as follows:
p_k+1 = e_𝒯 1_{s_k+1 = 𝒯} + e_s_k+1 1_{u_k+1,s_k+1 = 1} + [p_k P]_{j : u_k+1,j = 1} 1_{u_k+1,s_k+1 = 0},
where e_i is the vector with 1 at position i and 0 elsewhere. The notation [V]_S represents the probability vector obtained by setting all components V_i with i ∈ S to 0 and then normalizing this vector so that the sum of its components is 1. This accounts for the fact that when we do not track the position of the intruder, we are sure that the intruder did not move to any of the positions whose sensors are switched ON.
* We define the single-stage cost model for the ID problem using two cost components. The system incurs a unit cost if we do not track the position of the intruder at a given time period, and 0 if we do. Let us define
T(s_k,u_k,s_k+1) = 1_{u_k,s_k+1 = 0},
where 1(·) is the indicator function. This cost T(s_k,u_k,s_k+1) captures whether the intruder is tracked or not. The other cost component C(s_k,u_k,s_k+1) constrains the number of sensors that can be kept awake, i.e.,
C(s_k,u_k,s_k+1) = ∑_l=1^n 1_{u_k,l = 1}.
* The long-run objective is to minimize the average expected tracking error while satisfying the limit on the number of sensors to be ON (the energy budget constraint):
min_u_k, k ≥ 0 lim_m →∞ 𝔼[∑_k = 0^m-1 T(s_k,u_k,s_k+1)]/m
subject to lim_m →∞ 𝔼[∑_k=0^m-1 C(s_k,u_k,s_k+1)]/m ≤ b,
where 𝔼[·] denotes the expectation over state transitions and b specifies the energy budget (the upper bound on the number of sensors that can be kept awake) in a time period.
* Relaxed Cost Function: We modify the single-stage cost function to include the constraint as follows:
g(s_k,u_k,s_k+1) = T(s_k,u_k,s_k+1) + λ × C(s_k,u_k,s_k+1),
where λ ∈ [0,1] is a suitable trade-off parameter between the tracking error and the budget constraint. If we set λ to 0, it is similar to having an unlimited budget, in which case all the sensors will be kept ON all the time. On the other hand, λ = 1 implies a strict budget constraint, in which case all the sensors will be switched OFF. In this way, by optimally tuning λ, the algorithm can support different levels of tracking error and average energy budget.
Now we have modeled the ID problem using the POMDP framework. The uncertainty in the state can be removed by treating the belief vector as our new state.
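A minimal sketch of this belief update, assuming the belief p and the transition matrix P are NumPy arrays over the n+1 states (with the last index playing the role of the exit state 𝒯) and the observation encoded as a position index, "miss" (ε) or "exit" (𝒯):

```python
import numpy as np

def belief_update(p, P, u, obs):
    """One step of the belief recursion for the ID POMDP (sketch)."""
    b = np.zeros_like(p)
    if obs == "exit":                        # intruder left the network: mass on T
        b[-1] = 1.0
    elif obs != "miss":                      # tracked: obs is the detected position
        b[obs] = 1.0
    else:                                    # not tracked: propagate, then zero out
        b = p @ P                            # the positions whose sensors were ON
        b[np.asarray(u, dtype=bool)] = 0.0
        b /= b.sum()
    return b

# Tiny usage example with 3 positions plus the exit state (hypothetical numbers):
P = np.array([[0.0, 0.5, 0.4, 0.1],
              [0.5, 0.0, 0.4, 0.1],
              [0.4, 0.4, 0.0, 0.2],
              [0.0, 0.0, 0.0, 1.0]])
p = np.array([1.0, 0.0, 0.0, 0.0])
print(belief_update(p, P, u=[0, 1, 0, 0], obs="miss"))   # -> [0.  0.  0.8 0.2]
```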
We now have all the ingredients for an MDP, and one would hope to utilize the standard dynamic programming methods like policy iteration and value iteration <cit.> for solving the MDP. However, these methods operate on a finite state space, whereas the set of possible belief vectors, which forms the new state space for this problem, is uncountably infinite. Therefore, finding an optimal policy for this problem by solving the MDP is intractable <cit.>, leading us to focus on developing a suitable algorithm that can handle large state and action spaces. This would ensure good tracking performance while meeting the energy budget constraints.
§ OUR ID ALGORITHMS
In this section, we describe our RL algorithms.
§.§ Greedy Algorithm (ID_TG)
This is a simple greedy algorithm in which the sensors at the most probable positions the intruder might move to are turned ON for tracking during each time period. The number of sensors that will be kept ON is decided based on a chosen parameter γ, where γ ∈ (0,1). The first step of this algorithm involves obtaining the approximate belief vector for the next time period. Note that the belief vector gives the probability of the intruder reaching the various possible positions in the next time period. In order to compute the belief vector, we use (<ref>). Then we select those positions in the belief vector whose probabilities sum up to γ, starting from the highest probability value (a short sketch of this selection step is given below). We could view γ as a minimum confidence index on tracking the position of the intruder. For example, consider the approximate belief vector for the next time period to be [0.3, 0.3, 0.2, 0.2]. Let γ = 0.6. Then the sensors at positions 1 and 2 will be switched ON for the next time period, and with a confidence of 0.6 we know that the intruder will be present in either position 1 or 2. We can see that this algorithm is very easy to implement and at the same time avoids the problem of state and action space explosion. However, the chosen γ is constant at every instant and independent of the number of non-zero values in the belief vector. Consider a scenario in which the belief vector is `dense'. By `dense', we mean that there are many non-zero probability positions in the belief vector whose values are very close to each other. Suppose we did not track the position of the intruder in that time slot. Then, the belief vector grows denser (see (<ref>)). In this case, we would like to use a higher γ so that we could have more sensors ON and track the position of the intruder. Otherwise, the belief vector would grow denser, resulting in poor tracking accuracy. On the other hand, if the belief vector is sparse, then even a smaller γ would give good tracking. Also, we do not know the best choice of γ for a given budget and the current belief. In summary, having a constant γ value at every time period is not the right choice for optimal performance. However, the idea of using γ as a handle for deciding the actions, instead of directly searching over the number and configurations of sensors, will play a crucial role in solving the action space explosion problem in our proposed algorithm ID_γ_MCTS.
§.§ Monte Carlo Tree Search (MCTS)
In this section, for completeness, we first describe the idea of MCTS for determining the optimal decisions for an MDP and then adapt this algorithm to the POMDP setting. For more details about the algorithm, the reader is referred to <cit.>. MCTS has gathered a lot of attention in recent times due to its success in playing strategic games like Go <cit.>.
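Before moving on, here is the promised minimal sketch of the top-γ selection step used by ID_TG (and reused later by ID_γ_MCTS), assuming the belief is a NumPy array over positions:

```python
import numpy as np

def top_gamma_sensors(belief, gamma):
    """Indices of the most probable positions whose cumulative mass first reaches gamma."""
    order = np.argsort(belief)[::-1]        # positions sorted by decreasing probability
    csum = np.cumsum(belief[order])
    m = np.searchsorted(csum, gamma) + 1    # smallest prefix with cumulative mass >= gamma
    return order[:m]

# The example from the text: belief [0.3, 0.3, 0.2, 0.2] and gamma = 0.6
print(top_gamma_sensors(np.array([0.3, 0.3, 0.2, 0.2]), 0.6))  # -> the two 0.3-positions
```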
The main idea behind this algorithm is to run multiple simulations (with the help of a simulator placed at the controller) from the current state to determine the best action in each iteration. We begin with a single node, where the node represents the current state. Then, we select an action for the given state using the Upper Confidence Bound for Trees (UCT) rule. The general UCT rule is given in <cit.> as follows:
UCT_action = argmax_j [ X̂_j + C √(log N / N_j) ],
where X̂_j denotes the estimate of the average/discounted long-run reward (the negative of the long-term cost) one obtains by selecting action j starting from the current state, N denotes the total number of runs so far and N_j corresponds to the total number of runs selecting action j. In the beginning, all actions have 0 count, i.e., N_j = 0. Therefore, we can see that an unexplored action will have a higher probability of being selected on each simulation run. After all the actions have been explored, the action that has led to the highest reward collected, i.e., the highest X̂_j, gets selected. In this manner, the rule balances both exploration and exploitation (a minimal sketch of this selection rule is given below). When an action is selected, we obtain the single-stage reward/cost and a next state. Then, a new node and a new edge are added to the tree, where the edge represents the action we have chosen and the new node represents the next state obtained. We continue this construction, adding new nodes and edges to the tree, up to the desired depth. In this manner, we obtain a Monte-Carlo sample trajectory and a sample for the estimate of the long-run reward/cost. We could obtain multiple trajectories by trying out different actions (including the tried action) in the current state to get better estimates of the long-run reward. This procedure is then repeated from the root node until the timeout <cit.>. At the end of the timeout, we pick the action that has the maximum long-run reward. This method helps in breaking the curse of dimensionality of the state space by sampling the state transitions instead of considering all possible state transitions to estimate the long-run reward. It has been shown in <cit.> that if the exploration is done in an optimal manner, the algorithm converges to the optimal policy.
§.§.§ ID_MCTS Algorithm
In this algorithm, we run MCTS <cit.> in our setting. That is, for the belief vector at time k, we run the MCTS algorithm and obtain the action to be executed at the next time instant using the UCT action selection (<ref>). At each iteration of the algorithm, the position of the intruder needs to be known for the simulator to generate the next state. We can estimate it from the belief vector in two ways. The first method involves sampling a position from the belief vector according to its distribution at each iteration. The second approach selects the position with maximum probability at every iteration. The second approach is very natural, and we employ it in our algorithm. From the experiments, we observe that the algorithm increases the number of sensors that are kept ON in the subsequent intervals if it does not detect the position of the intruder for a large number of contiguous time periods. Similarly, as the intruder gets tracked continuously, the number of sensors that are kept ON in the subsequent time periods reduces. As discussed earlier, this algorithm solves the problem of state explosion.
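As promised, a minimal sketch of the UCT selection rule, assuming each action j keeps a visit count N_j and a running reward estimate X̂_j:

```python
import math

def uct_select(stats, C=1.4):
    """stats[j] = (N_j, X_j): visit count and long-run reward estimate for action j."""
    N = sum(n for n, _ in stats)                  # total number of runs so far
    best_j, best_score = None, -float("inf")
    for j, (n, x) in enumerate(stats):
        if n == 0:
            return j                              # unexplored actions are tried first
        score = x + C * math.sqrt(math.log(N) / n)
        if score > best_score:
            best_j, best_score = j, score
    return best_j

print(uct_select([(10, 0.5), (2, 0.7), (5, 0.6)]))  # -> 1 (high reward, few visits)
```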
However, the ID_MCTS algorithm requires all of the exponentially many actions (the different configurations of sensors over the n block positions) to be tried out a sufficient number of times before the timeout. There can be 2^n actions, where n is the total number of sensors. As the size of the network increases, the action space grows exponentially and it takes a large amount of time to execute all the actions, due to which some actions might not be tried. Thus, the action we finally obtain at the end of the timeout will be `sub-optimal'. We overcome the main difficulty due to the large action space by appropriately changing the action space of the POMDP (all possible sensor configurations) to the discretized parameter γ ∈ [0,1] that was kept fixed in the ID_TG algorithm, and develop the final algorithm ID_γ_MCTS, which solves the problems of both state and action explosion.
§.§ ID_γ_MCTS Algorithm
In the ID_γ_MCTS algorithm, we let the action space be a predefined set of γ values in the range 0 to 1 (with a uniform gap of 0.05). Note that the number of possible actions for this algorithm is constant, unlike the exponential number of actions in the earlier algorithms in the literature. We run MCTS to obtain the optimal γ value for the present belief vector. As we have discretized the action space, all the actions are chosen a sufficient number of times. The idea behind this construction is that it suffices to know, in terms of γ, how many sensors need to be kept awake during each time period, instead of explicitly knowing the exact configuration of the sensors. We could then use this value of γ to select the most probable positions in the belief vector, as described in Algorithm <ref>. These sensor positions will be kept ON for the next time period. This process is repeated until the intruder moves out of the network. The complete ID_γ_MCTS algorithm is described in Algorithm <ref>. As described above, Algorithm 3 combines the ideas of both ID_TG and ID_MCTS and fully solves the problems of state and action explosion. Moreover, the γ value at each time period is selected dynamically rather than kept constant, which was the significant drawback of the ID_TG algorithm. From the experiments, we observe that in a few cases the actual path of the intruder and the path that we are estimating diverge. This happens whenever the belief vector gets too dense and the selected γ is not very high. Then we lose the position of the intruder, and in this case the belief vector grows denser in the subsequent time periods. This results in the intruder not getting tracked for many contiguous time intervals, which affects the tracking accuracy. To overcome this problem, we introduce a restart mechanism. This involves switching ON all the sensor positions that have non-zero probability whenever the belief vector has more than a threshold number of non-zero values. We observe that this divergence problem occurs rarely, and thus the restart cost can be practically ignored.
§ EXPERIMENTS AND RESULTS
We considered three different configurations of the sensor network. In our first setting, we considered a 1-dimensional sensor network with 41 sensors, with a transition probability matrix similar to the one considered in <cit.>. At the start of the experiment, the intruder is placed at the center of the network, with the movement constraint that it can move at most 3 positions to the left or to the right of its current position. The restart threshold for this setting is set to 14.
In the second setting, we ran our algorithms on a two-dimensional sensor grid of dimension 8×8. The feasible movements of the intruder are left, right, down, up and along all the diagonals. In our final experiment, we ran our algorithms on a 2-D grid of dimension 16×16. In both the second and third settings, we generated the transition probability matrix randomly, and the restart threshold is set to 20. We averaged the results of our simulations over 30 time periods. The MCTS algorithm was run for 500 iterations in every time period. We computed the long-term cost in the experiments as the discounted sum of single-stage costs for learning the decisions. Note that by choosing the discount factor close to 1, the actions learned in the long-run discounted cost setting will be similar to the actions learned in the long-run average cost setting. We have set the discount factor α = 0.9. For the ID_γ_MCTS algorithm, we chose the γ parameter corresponding to the actions by discretizing the interval [0,1] in steps of 0.05. Thus, there are in total 20 discretized actions in the action space. The Q_MDP method is implemented by solving the policy iteration in <cit.> and making the assumption of observation-after-control, i.e., p_k+1 = e_b_k+1 at the controller. In the plots, the X-axis corresponds to the average number of sensors awake, computed as the ratio of the total number of sensors switched ON during the run of the algorithm to the total number of time periods; the Y-axis corresponds to the average tracking error, obtained as the ratio of the number of time periods in which the intruder is not tracked to the total number of time periods. In our experiments, we run our algorithms for different values of λ and choose the λ value that meets the given budget constraint (the average number of sensors awake) and also has the lowest tracking error. We plot the average number of sensors awake and its corresponding tracking error. The λ values for which we obtained results for our ID_γ_MCTS algorithm in all three settings are shown in Tables <ref>, <ref> and <ref>. Note that for different algorithms, different values of λ could lead to the same average number of sensors. As the size of the sensor network is large in the second and third settings, our ID_MCTS algorithm does not scale up, owing to the action space explosion. Thus, we do not show results for the ID_MCTS algorithm in the second and third plots, and mainly compare our ID_γ_MCTS algorithm against the Q_MDP method of <cit.>. All the results for the ID_γ_MCTS algorithm are averaged across 10 simulation runs of the experiment. In Figure 2, we observe that ID_γ_MCTS outperforms ID_MCTS. This is because the cardinality of the action space in ID_MCTS is 127, whereas in ID_γ_MCTS the cardinality is 20. However, we can see that Q_MDP performs better than both our proposed algorithms. This is mainly because of the observation-after-control assumption (which we do not impose in our setting) and partly due to the small size of the sensor network. As noted earlier, this assumption imposes a severe constraint, and we do not make it. Thus, it is noteworthy that our algorithm performs well even without this assumption. In Figure 3, we observe that the performance of ID_γ_MCTS in comparison with Q_MDP has improved on the 2-D sensor network. In fact, it performs better than Q_MDP until the average number of sensors is 3. In Figure 4, we see that the performance of the ID_γ_MCTS algorithm has further improved.
To conclude, ID_γ_MCTS performs better than the other algorithms when the sensor network becomes very large, which is typical of most practical applications. As discussed earlier, since ID_TG uses a constant γ, its performance is poor (see Figs. 2 and 4). In summary, we can conclude that ID_γ_MCTS performs and scales well without making any strong assumptions.
§ CONCLUSION
In this work, we proposed three algorithms for solving the problem of intruder detection under energy budget constraints. Our first algorithm, ID_TG, is greedy, simple and easy to implement. However, due to its static nature, it does not yield good performance. Our second algorithm (ID_MCTS) is a suitable adaptation of the MCTS algorithm to the ID problem. This solves the state space explosion problem; however, the action space is still large and remains to be handled. Our final algorithm (ID_γ_MCTS) combines ideas from our two earlier algorithms and fully solves both the state and action space explosion problems; this algorithm is the first of its kind for the ID problem. In our MCTS-based algorithms, the fact that we only need to account for the belief vectors encountered during the run, which are finite in number, holds the key to mitigating the state space explosion. Further, as we suitably modify the action space using a predefined set of γ values, the action space explosion is also effectively handled. From simulations, we observed that the ID_γ_MCTS algorithm is our best algorithm when the network size becomes large, and it performs well without making any assumption on the intruder movement or the intruder position information. In the future, we would like to extend this problem to the case where multiple intruders are moving in the sensor network. In this scenario, we could consider the possibility of more than one intruder moving in the network with possibly different transition probabilities, and their movements could be correlated as well. Our objective would be to track the positions of as many intruders as possible while satisfying the energy budget constraints.
§ ACKNOWLEDGMENT
This work is supported by the Joint DRDO-IISc Programme to Advance the Frontiers of Communications, Control, Signal Processing and Computation.
[email protected] Laboratoire Univers et Théories, Observatoire de Paris – PSL Research University – CNRS – Université Paris Diderot – Sorbonne Paris Cité, 5 place Jules Janssen, 92195 Meudon CEDEX, FranceIn order to keep pace with the increasing data quality of astronomical surveys the observed source redshift has to be modeled beyond the well-known Doppler contribution. In this letter I want to examine the gauge issue that is often glossed over when one assigns a perturbed redshift to simulated data generated with a Newtonian N-body code. A careful analysis reveals the presence of a correction term that has so far been neglected. It is roughly proportional to the observed length scale divided by the Hubble scale and therefore suppressed inside the horizon. However, on gigaparsec scales it can be comparable to the gravitational redshift and hence amounts to an important relativistic effect. Perturbed redshifts from N-body simulations Julian Adamek December 30, 2023 ===========================================§ INTRODUCTIONWhile the standard analysis of redshift space perturbations <cit.> only contains the leading Doppler term, recent and future galaxy surveys have or will have sufficient statistical power to detect subleading terms such as gravitational redshift, weak lensing, transverse Doppler shift, time delays etc. These include general relativistic effects that are interesting targets for testing gravity at cosmological scales. For instance, a first detection of gravitational redshift in galaxy clusters has been claimed in <cit.>. Perturbation theory can be used to understand these effects analytically (e.g. <cit.>), and many results have been derived in longitudinal gauge. However, their application to N-body simulations requires due care with respect to how the data are mapped to the relativistic spacetime.One possibility is to run relativistic simulations directly in longitudinal gauge <cit.>, but the use of Newtonian N-body codes is by far the more common practice. Coming from the Newtonian world it is not immediately clear how their results should be interpreted in a relativistic context (e.g. <cit.>). In <cit.> it was finally realized that one can specify a gauge, the so-called “N-body gauge,”[In fact, <cit.> had already discovered “half of” the necessary coordinate transformation, leading to what they called “Newtonian matter gauge.” The spatial part of the transformation is however different from the N-body gauge, leading to a non-vanishing volume perturbation.] in which the relativistic equations are formally identical to the Newtonian ones, therefore providing a framework for a fully relativistic interpretation of Newtonian simulations. The same authors went on to analyze initial conditions in this framework, showing that the usual recipes are consistent with general relativity as well <cit.>.The issue has so far mostly been discussed in relation to the matter power spectrum, and the implications for the perturbed redshift and some other observables have therefore not yet been fully appreciated. In the recent literature the redshift formula of longitudinal gauge is often directly applied to N-body simulation data, thereby disregarding the fact that these are not provided in the appropriate coordinate system. For instance, <cit.> do not account for this aspect, but I will show that the correction is fortunately very small on the scales they are interested in. 
While the gauge issue is noted in <cit.> where a correction term is applied to the density perturbation (see their equation 17), the redshift of individual sources is still computed without such a correction (see their equations 9 and 10). In the following I wish to clarify this issue by studying the coordinate transformations involved, and finally by deriving the correct formulas for the perturbed redshift in N-body gauge. I show that a small correction term due to the spatial coordinate transformation appears and should in principle be included in the analysis.
§ WEAK-FIELD METRIC
Astrophysical objects with high compactness exist on scales ≲ 0.01 parsec (the largest known supermassive black holes), while on extremely large scales ≳ 100 megaparsec the Universe can be described entirely in terms of linear equations. In between these two extremes there lies a vast range of scales where the distribution of matter can be very inhomogeneous but the gravitational fields are weak, and the geometry is therefore only weakly perturbed. This empirical fact is confirmed every time we point a telescope at the sky and observe that most rays seem to propagate along almost straight paths. It is therefore possible to describe the dynamics on those scales in terms of the nonlinear evolution of matter on a geometry with linearly perturbed metric. Of course, as for any geometry, there also exist many coordinate systems in which the metric takes a wildly nonlinear form. A particularly well-known coordinate system in which the smallness of the geometric perturbation is carried into effect is the one of the longitudinal gauge. For the purpose of a more general discussion I adopt the notation introduced in <cit.> and write the generic line element for a metric with linear scalar perturbations on top of a Friedmann-Lemaître model as
ds^2 = a^2(τ) [-(1 + 2 A) dτ^2 + (1 + 2 H_L) δ_ij dx^i dx^j - 2 ∇_i B dx^i dτ - 2 (∇_i ∇_j - 1/3 δ_ij Δ) H_T dx^i dx^j],
where a(τ) is the scale factor, τ and x^i are conformal time and comoving coordinates on the spacelike hypersurface, respectively, and A, B, H_L, H_T are scalar functions describing perturbations. Vector and tensor perturbations shall not be discussed here. A setting is said to be “weak-field” if there exists a coordinate system for which all the above perturbation variables are small, i.e. |A|, |H_L|, √(∇_i B ∇^i B), √(∇_i ∇_j H_T ∇^i ∇^j H_T - (Δ H_T)^2/3) ≪ 1. One can then find other such coordinate systems by making a small change of coordinates, generated by two scalar fields T and L, so that τ → τ + T and x^i → x^i + ∇^i L. This shows that the four scalar perturbations only contain two physical modes. The longitudinal gauge is obtained by choosing T and L such that B = H_T = 0, leaving only the lapse perturbation A = Ψ and the volume perturbation H_L = Φ, where the two potentials Ψ and Φ denote precisely the two physical modes when written as first-order gauge-invariant expressions. The typical amplitude of these perturbations is ∼ 10^-5 in our Universe, except for the vicinity of black holes or neutron stars where the weak-field description breaks down. In longitudinal gauge the Hamiltonian constraint reads
-ΔΦ + 3 ℋ Φ' - 3 ℋ^2 Ψ = 4 π G a^2 δρ,
where ℋ is the conformal Hubble rate, a prime denotes partial derivative with respect to τ, and the equation is linearized in Φ and Ψ but not in δρ.
In fact, the matter perturbation δρ can be very large, but it is computed and evolved on a linearly perturbed geometry. This equation is however not the one used in a Newtonian N-body code. First, such a code uses counting densities that are not corrected for perturbations of the volume element – Newtonian theory assumes Euclidean geometry. Second, the Poisson equation lacks some of the terms featured on the left-hand side. From now on I restrict the discussion to the case where gravitational fields are sourced exclusively by nonrelativistic matter. In this case anisotropic stress can be neglected, implying Φ = -Ψ.
§ PERTURBED REDSHIFT IN LONGITUDINAL GAUGE
The effect of geometry (and perturbations thereof) on observables can be understood by studying the geodesics of photons that reach the observation event. Let me denote the tangent vector of such a geodesic as k^μ. The condition g_μν k^μ k^ν = 0 implies that k^i/k^0 = n^i (1 + 2 Ψ), where n^i is the unit vector (δ_ij n^i n^j = 1) pointing in the direction the photon is traveling. The geodesic equation can then be expressed as an evolution equation for the energy,
dln k^0/dτ = -2 ℋ - 2 n^i ∇_i Ψ,
and an equation describing the deflection of the ray,
dn^i/dτ = -2 (δ^ij - n^i n^j) ∇_j Ψ.
A reference clock is specified through a unit timelike vector u^μ = a^-1 (1-Ψ, v^i) where v^i = dx^i/dτ is the peculiar (coordinate) velocity of the clock's rest frame. In such a frame, the measured photon energy is -g_μν u^μ k^ν. A first integral of eq. (<ref>) yields the following expression for the observed redshift between source (src) and observer (obs):
1+z = g_μν u^μ k^ν|_src / g_μν u^μ k^ν|_obs = a_obs/a_src (1 + n_i v^i_obs - n_i v^i_src + Ψ_obs - Ψ_src - 2∫_src^obs Ψ' dχ).
Here dχ is a conformal distance element along the photon path. The redshift perturbations are easily identified as the Doppler shift due to peculiar motion, the gravitational redshift due to time dilation, and the Rees-Sciama effect (also known as integrated Sachs-Wolfe effect in the context of linear theory). The boundary terms in this expression have to be evaluated at the coordinate time at which the photon geodesic actually intersects the world lines of source and observer. The time of flight of the photon is affected by the Shapiro delay, which for a coordinate distance χ between source and observer is given by the following relation:
χ = ∫_src^obs dχ = ∫_src^obs (1 + 2 Ψ) dτ = τ_obs - τ_src + 2 ∫_src^obs Ψ dχ.
§ FROM LONGITUDINAL TO N-BODY GAUGE
Given a linearly perturbed geometry specified in longitudinal gauge I now make a small change of coordinates to set H_L = 0. Considering the Lie derivative of the metric tensor the required transformation is generated by T, L that satisfy
ℋ T + 1/3 Δ L = Ψ.
I furthermore choose L' = 0 such that velocities are not transformed. In this new coordinate system one finds B = T and H_T = -L. It is important to verify that these new perturbations do not violate the weak-field conditions[For instance, had I made a change of coordinates that sets A = B = 0 instead, like in a synchronous gauge, I would have found L' = T = -a^-1∫^a (Ψ/ℋ)dã and that H_L receives a contribution ∼ ΔΨ/ℋ^2 ∼ δρ/ρ. The volume perturbation therefore does not remain small everywhere in such a coordinate system, rendering the weak-field treatment inconsistent.]. As explained below, L shall be chosen such that Δ L is of order Ψ, and hence the same is true for the perturbations generated by H_T.
One can then easily convince oneself that ∇_i T is of the order of a peculiar velocity — in fact, it coincides with the Zel'dovich approximation thereof — and therefore √(∇_i B ∇^i B) ∼ v ≲ 10^-3 at low redshift. The shift perturbation in the new coordinate system is therefore substantially larger than Ψ, but still comfortably within the weak-field regime. The lapse perturbation becomes
A = Ψ + ℋ T + T',
and as explained shortly I arrange that it vanishes at leading order. The Hamiltonian constraint becomes
Δ(ℋ B - 1/3 Δ H_T) + 3 ℋ^2 A = 4 π G a^2 δρ,
and as a consequence of H_L = 0 the density perturbation δρ can be obtained simply by counting the mass elements per coordinate volume, in accordance with the procedure relevant for Newtonian codes. For nonrelativistic particles the geodesic equation reduces to
dv^i/dτ + ℋ v^i = ∇^i(ℋ B + B' - A).
Since I assume that matter is nonrelativistic and hence the pressure perturbation can be neglected, the spatial trace of Einstein's equations yields
ℋ A' - (ℋ^2 - 2a''/a) A = 0 ⇒ A ∝ 1/(aℋ^2).
This shows that an appropriate choice of boundary conditions will set A = 0. Furthermore, with such a choice eqs. (<ref>), (<ref>) are formally identical to the ones of Newtonian gravity if eqs. (<ref>), (<ref>) are used,
ΔΨ = 4 π G a^2 δρ,
dv^i/dτ + ℋ v^i = -∇^i Ψ.
Keeping in mind that the above equations remain valid even if δρ/ρ becomes large I now want to set the boundary conditions at early times when matter perturbations are still linear. According to eq. (<ref>) the condition A = 0 is satisfied when
T = -1/a ∫^a Ψ/ℋ dã.
The linear solution of Ψ is constant in matter domination, and with eq. (<ref>) the corresponding choice of L is given by Δ L = 5 Ψ_in. Here I introduce Ψ_in to denote the linear initial condition for Ψ in matter domination. With this choice one can see that ∇^i B = v^i in the linear regime, which (together with H_L = 0) is the original gauge condition used in <cit.> for the N-body gauge. So even though my gauge condition H_T' = -L' = 0 is different, the resulting coordinate system is the same whenever only nonrelativistic matter is present. The advantage of my condition is that it does not explicitly refer to a matter perturbation and can hence be easily extended into the nonlinear regime of matter as long as gravitational fields remain weak. I shall now discuss the repercussions of this change of coordinates from longitudinal to N-body gauge. The null condition is solved in N-body gauge by k^i/k^0 = n^i + ∇^i B + n^j (∇_j ∇^i - δ^i_j Δ/3) H_T, and the photon geodesic equation can be written as
dln k^0/dτ = -2ℋ - n^i n^j ∇_i ∇_j B,
and
dn^i/dτ = -(δ^ij - n^i n^j) ∇_j (n^k ∇_k B - 1/3 Δ H_T).
Considering how the coordinate transformation acts on u^μ one sees that u^μ = a^-1 (1, v^i) in the new coordinates. Thus, a first integral of eq. (<ref>) gives the following new expression for the observed redshift:
z+1 = a_obs/a_src (1 + n_i v^i_obs - n_i v^i_src + Ψ_obs - Ψ_src + ℋB|_obs - ℋB|_src - 2∫_src^obs Ψ' dχ).
In order to recover all the terms of eq. (<ref>) I used (ℋ B)' = Ψ' and ℋB + B' = -Ψ, but evidently a new boundary term ℋB appears. Noting that B = T this boundary term can be understood as the result of the change of coordinates acting on the background term a_obs/a_src.
In other words, the term has to appear because the equal-time hypersurfaces in N-body gauge do not coincide with the ones of longitudinal gauge. The coordinate time of a Newtonian N-body simulation is the one of N-body gauge, and hence this boundary term needs to be taken into account. Let me now inspect the time of flight for the photon in N-body gauge,
χ = ∫_src^obs (1 + n^i ∇_i B + n^i n^j ∇_i ∇_j H_T - 1/3 Δ H_T) dτ = τ_obs - τ_src + B_obs - B_src + n^i ∇_i H_T|_obs - n^i ∇_i H_T|_src + 2 ∫_src^obs Ψ dχ,
where I again use the gauge conditions to recover the terms known from longitudinal gauge. Compared to eq. (<ref>) there are two new boundary terms. These are expected from the gauge transformation, since the coordinate time transforms as τ → τ + T = τ + B, and the coordinate distance transforms as χ → χ + n^i ∇_i L|_obs - n^i ∇_i L|_src = χ - n^i ∇_i H_T|_obs + n^i ∇_i H_T|_src. For a photon trajectory with fixed endpoints, the coordinate time of emission and observation therefore transforms such that the boundary terms due to the shift perturbation in eqs. (<ref>) and (<ref>) cancel exactly. The other boundary term in eq. (<ref>) gives precisely the change in the coordinate distance due to the spatial transformation between longitudinal and N-body gauge. Therefore, the perturbed redshift for the trajectory remains invariant.
§ DISCUSSION
The preceding analysis clarifies that in order to use the longitudinal gauge for computing the perturbed redshifts with N-body simulation data one should transform the coordinates appropriately. In particular, the coordinate distance between observer and sources changes according to a spatial transformation that is independent of time. This has already been pointed out in <cit.>, and I explicitly show how to recover this result in the relativistic framework provided by the N-body gauge. Alternatively, the computation of the perturbed redshift can also be carried out directly in N-body gauge. In this case the coordinates of sources are directly taken from the simulation, and the effect appears as a modification of the Shapiro delay. In order to estimate the amplitude of the correction, let me compute the typical change δχ = n^i ∇_i L|_obs - n^i ∇_i L|_src of the coordinate distance. Using the relation Δ L = 5 Ψ_in, the variance of δχ is given by
⟨δχ^2⟩ = 50 ∫_0^∞ dk/k^3 [1/3 - j_1(kχ)/(kχ) + j_2(kχ)] Δ_k^Ψ,
where Δ_k^Ψ is the dimensionless power spectrum of Ψ_in. Unfortunately the integral has an infrared divergence for nearly scale-invariant spectra, but in practice this divergence is regulated by the finite size of a simulation. Imposing a cutoff close to the Hubble scale one finds that δχ/χ ∼ 10^-4, almost independently of the scale χ and the precise value of the cutoff. Considering how a change in the coordinate distance affects the time of flight one sees that the typical correction to the redshift due to this coordinate effect is δz/(1 + z) ∼ ℋ δχ. Therefore the effect becomes of the order of the gravitational redshift for trajectories at or above the gigaparsec scale. At these extreme scales the gauge correction is of the same order as all other relevant terms and should be taken into account. As suggested in <cit.>, the gravitational redshift may be measured statistically by looking for “excess redshift” of the brightest galaxies at the center of clusters when compared to the fainter galaxies in the outskirts.
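Before turning to that cluster measurement, it may help to see the δχ/χ ∼ 10^-4 estimate reproduced numerically; the sketch below evaluates the variance integral above for a scale-invariant spectrum, where the amplitude Δ_k^Ψ ≈ 7.6×10^-10 (from Δ_ζ ≈ 2.1×10^-9 with Ψ = 3ζ/5 in matter domination) and the infrared cutoff near the Hubble scale are assumptions of the illustration.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import spherical_jn

Delta_Psi = 7.6e-10          # assumed scale-invariant amplitude of the Psi_in spectrum
k_min = 1.0 / 4400.0         # assumed IR cutoff near the Hubble scale [1/Mpc]

def integrand(k, chi):
    x = k * chi
    bracket = 1.0/3.0 - spherical_jn(1, x)/x + spherical_jn(2, x)
    return 50.0 * bracket * Delta_Psi / k**3

for chi in (500.0, 1500.0, 3000.0):                       # comoving distances in Mpc
    var, _ = quad(integrand, k_min, 50.0/chi, args=(chi,), limit=500)
    print(f"chi = {chi:6.0f} Mpc   delta_chi/chi = {np.sqrt(var)/chi:.1e}")
# prints values of order 1e-4, almost independent of chi, as stated in the text
```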
For simulating such a measurement, the relevant correction is given by the difference in time of flight between the different sources, and therefore the effect is suppressed by the small ratio between the scale of the cluster and the Hubble scale. One expects that in this case the correction is typically less than 1% of the signal and can therefore safely be neglected. I thank R Durrer, C Rampf and Y Rasera for comments on the manuscript as well as C Fidler and T Tram for many insightful discussions about the N-body gauge. I further enjoyed a valuable correspondence with D Bertacca and C Porciani about <cit.>. Prior to submission, C Fidler et al. also kindly shared their manuscript <cit.> on a closely related topic.
We investigate the distribution of the resonances near spectral thresholds of Laplace operators on regular tree graphs with k-fold branching, k ≥ 1, perturbed by nonself-adjoint exponentially decaying potentials. We establish results on the absence of resonances, which in particular imply the absence of discrete spectrum near some sectors of the essential spectrum of the operators.
§ INTRODUCTION
Great interest has been focused in recent decades on the spectral analysis of Laplace operators on regular trees. This includes local perturbations <cit.>, random settings <cit.> (see also the references therein), and quantum ergodicity regimes <cit.>. For complementary results, we refer the reader to the papers <cit.>, and for the relationships between the Laplace operator on trees and quantum graphs, see <cit.>. However, it seems that resonances have not been systematically studied in the context of (regular) trees. In this paper, we use resonance methods to obtain a better understanding of local spectral properties of perturbed Schrödinger operators on regular tree graphs with k-fold branching, k ≥ 1, as we describe below (cf. Section <ref>). Our techniques are similar to those used in <cit.> (and references therein), where self-adjoint perturbations are considered. Actually, these methods can be extended to nonself-adjoint models; see for instance <cit.>. Here, we focus on some nonself-adjoint perturbations of the Laplace operator on regular tree graphs. In particular, we shall derive, as a by-product, a description of the eigenvalue distribution near the spectral thresholds of the operator. Since a nonself-adjoint framework is involved in this article, it is convenient to clarify the different notions of spectra we use. Let T be a closed linear operator acting on a separable Hilbert space ℋ, and z be an isolated point of σ(T), the spectrum of T. If γ is a small positively oriented contour containing z as the only point of σ(T), we recall that the Riesz projection P_z associated to z is defined by
P_z := 1/2iπ ∮_γ (T - ζ)^-1 dζ.
The algebraic multiplicity of z is then defined by m(z) := rank(P_z), and when it is finite, the point z is called a discrete eigenvalue of the operator T. Note that we have the inequality m(z) ≥ dim ( Ker(T - z) ), which is the geometric multiplicity of z. The equality holds if T = T^∗. So, we define the discrete spectrum of T as
σ_disc(T) := { z ∈ σ(T) : z is a discrete eigenvalue of T }.
We recall that if a closed linear operator has a closed range and both its kernel and cokernel are finite-dimensional, then it is called a Fredholm operator. Hence, we define the essential spectrum of T as
σ_ess(T) := { z ∈ ℂ : T - z is not a Fredholm operator }.
Note that σ_ess(T) is a closed subset of σ(T). The paper is organized as follows. In Section <ref>, we present our model. In Section <ref>, we state our main results, Theorem <ref> and Corollaries <ref>, <ref>. Section <ref> is devoted to preliminary results we need due to Allard and Froese. In Section <ref>, we establish a formula giving a kernel representation of the resolvent associated to the operator we consider, which is crucial for our analysis. In Section <ref>, we define and characterize the resonances near the spectral thresholds, while in Section <ref> we give the proof of our main results.
Section <ref> gathers useful tools on the characteristicvalues concept of finite meromorphic operator-valued functions. § PRESENTATION OF THE MODELWe consider an infinite graph𝔾 = (𝒱,ℰ) with vertices 𝒱 and edges ℰ,and we let ℓ^2(𝒱) be the Hilbert spaceℓ^2(𝒱) := {ϕ : 𝒱→ :‖ϕ‖^2 := ∑_v ∈𝒱|ϕ(v)|^2 < ∞}, with the inner product ⟨ϕ,ψ⟩ := ∑_v ∈𝒱ϕ(v) ψ(v).On ℓ^2(𝒱), we consider the symmetric Schrödinger operatordefined by(ϕ)(v) := - ∑_w : w ∼ v( ϕ(w) - ϕ(v) ),where w ∼ v means that the vertices w and v are connected by an edge. Ifwe define on ℓ^2(𝒱) the symmetric operator L by(L ϕ)(v) := ∑_w : w ∼ vϕ(w),then it is not difficult to see that the operatorcan be written as= -L + d,where d is the multiplication operator by the function (also) notedd : 𝒱→, with d(v) denoting the number of edges connected with the vertex v. Note that when d is bounded, then so is the symmetric operators and L, hence self-adjoint. In a regular rooted tree graph with k-fold branching, k ≥ 1,(see Figure 2.1 for a binary tree graph), we have d = k + 1 - d_0 withd_0(v) = 1ifvcoincides with the root of the tree,0otherwise.This is the same model described in <cit.> and we refer tothis paper for more details.In (<ref>), d can be viewed as a perturbation of the operator L. It is well know(see Lemma <ref>) that the spectrum of the operator L is absolutely continuous,coincides with the essential spectrum and is equal toσ (L) = σ_ (L) = σ_ (L) =[ -2√(k),2√(k)].On ℓ^2(𝒱), we define the perturbed operator_M :=+ M,where M is identified with the multiplication operator by the bounded potential function(also) noted M. In a regular rooted tree graph with k-fold branching, according toabove, the operator _M can be written as_M = -L + k + 1 - d_0 + M.In (<ref>), the degree term d_0 can be included in the potential perturbation sothat _M can be viewed as a perturbation of the operator -L + k + 1. Hence, fromnow on, the operator _M will be written as_M = -L + k + 1 + M withM := - d_0 + M.In the sequel, we set t_±(k) := ± 2√(k) + k + 1,and we shall simply write t_± when no confusion can arise. Then, from (<ref>),it follows that the spectrum of the operator -L + k + 1 satisfiesσ (-L + k + 1) = σ_ (-L + k + 1) = σ_ (-L + k + 1) =[ t_-(k),t_+(k) ],where the t_±(k) play the role of thresholds of this spectrum.Now, let us choose some vertex v_0 = 0 ∈𝒱 as the origin of the graph𝔾 = (𝒱,ℰ). For v ∈𝒱, we define| v | as the length of the shortest path connecting 0 to v. Hence,| v | defines in the graph the distance from 0 to v. For r > 0,let S_r be the sphere of radius r in the graph defined byS_r := { v ∈𝒱 : | v | = r }. In this case, we have𝒱 = _0^∞ S_r,wheremeans a disjoint union, so that we haveℓ^2(𝒱) = ⊕_0^∞ℓ^2(S_r).In this paper, we are interested in the case of regular rooted tree graphs (<ref>)with k-fold branching, k ≥ 1. Moreover, the potential M will be assumed to satisfythe following assumption:Assumption (A): For v ∈𝒱, we have| M(v)|≤ Const.e^-δ| v |,withδ > 0if k = 1,δ≥ 6 ln (k) otherwise.We point out that in Assumption (A) above, there is no restriction on theperturbation potential M concerning its self-adjointness or not. The case k = 1 includes in particular the case of the Laplacian onℓ^2(,) without any boundary condition at 0. As mentioned above, in this article we investigate the resonances (or eigenvalues)distribution for the operator _M near the spectral thresholds t_±(k) given by(<ref>). As this will be observed, the work of Allard and Froese <cit.> willplay an important role in our analysis (cf. 
Section <ref> for more details).More precisely, in order to establish a suitable representation of the resolventassociated to the operator -L + k + 1, k ≥ 1 (cf. Theorem <ref>).Under Assumption (A), the perturbation potential M satisfies thedecay assumption of M. So, if we let Λ_n to denote the orthogonal projectiononto ⊕_r=0^n ℓ^2(S_r), then with the aid of the Schur lemma, it can be shown that ‖M - MΛ_n ‖n →∞⟶ 0.Since ℓ^2(S_r) is a k^r-dimensional space, then (<ref>) implies thatthe operatorM is the limit in norm of a sequence of finite rank operators.Therefore, M is a compact operator and in particular it is relativelycompact with respect to the operator -L + k + 1. Thus, since the operator-L + k + 1 is self-adjoint, then by <cit.> we have adisjoint unionσ(_M) = σ_ ess(_M) σ_ disc(_M). Moreover, Weyl's criterion on the invariance of the essential spectrum implies thatσ_ (_M) = σ_(-L + k + 1) = [ t_-(k),t_+(k) ].However, the (complex) discrete spectrum σ_ disc(_M) generated by the potential M can only accumulate at the points ofσ_ (_M). When M = M^∗,σ_ (_M) is just the set of real eigenvaluesof _M respectively from the right and the left of t_±(k).Exploiting the exponential decay of the potential M, we extend(cf. Section <ref>) meromorphically in Banach weighted spaces the resolvent of theoperator _M near z = t_±(k), in some two sheets Riemann surfacesℳ_t_± respectively. The first main difficulty to overcomeis to establish a good representation of the kernel of the resolvent associated to the operator -L + k + 1, k ≥ 1 (cf. Section <ref>). We thus define the resonances ofthe operator _M near z = t_±(k) as the poles of the abovemeromorphic extensions. Notice that this set of resonances contains the eigenvaluesof the operator _M localized near the spectral thresholds t_±(k).Otherwise, in the two sheets Riemann surfaces ℳ_t_±, the resonances will be parametrized by z_t_±(λ) for λsufficiently small for technical reasons. Furthermore, the point λ = 0 corresponds to the thresholdz = t_±(k) (cf. Section <ref> for more details). Actually, the resonancesverifyingz_t_±(λ) ∈ℳ_t_±, (λ) < 0,live in the non physical plane while those verifying z_t_±(λ) ∈ℳ_t_±, (λ) ≥ 0, coincide with the discrete and the embedded eigenvalues of the operator _Mnear t_± and are localized in the physical plane.We state Theorem <ref> where we establish an absence of resonances of the operator_M near the spectral thresholds t_±(k). In particular, this impliesresults on the absence of discrete spectrum and embedded eigenvalues near t_±(k)(cf. Corollaries <ref> and <ref>). To prove these results, we first reduce the analysisof resonances near the thresholds t_±(k) to that of the noninvertibility of some nonself-adjoint compact operators near λ = 0 (cf. Propositions <ref> and<ref>). This can be seen as a Birman-Schwinger principle in a nonself-adjoint context. Afterwards, the reduction made on the problem is reformulated in terms of characteristic values problems (cf. Propositions <ref> and <ref>). This allows us to applypowerful results (cf. Section <ref>) on the theory of characteristicvalues of finite meromorphic operator-valued functions to conclude. § STATEMENT OF THE MAIN RESULTS Let us first fix some notations. If λ∈, as usual|λ| <<1 means that λ is chosen small enough. 
The setof resonances of the operator _M near the spectral thresholdst_±(k) given by (<ref>) will be respectively denoted byRes_t_±(_M).We also recall that near z = t_±(k), theresonances are defined in some Riemann surfaces ℳ_t_± and coincidewith z_t_±(λ), 0 < |λ| <<1. More precisely,they are parametrized respectively byz_t_±(λ) := t_±(k) ∓λ^2√(k)∈ℳ_t_±.Furthermore, the embedded eigenvalues and the discrete spectrum of the operator_M near t_±(k) are the resonancesz_t_±(λ) ∈ℳ_t_± with (λ) ≥ 0. Now, for 0 < r <<1, let us introduce the punctured neighborhood of λ = 0Ω_r^∗ := {λ∈ : 0 < |λ| < r }.We then can state our first main result that gives an absence of resonances ofthe operator _M near the thresholds t_±(k), in small domainsof the form t_±(k) ∓√(k) Ω_r^∗^2.[Absence of resonances] Assume that the potential M satisfies Assumption (A). Then, for any r > 0 small enough and any punctured neighborhood Ω_r^∗, we have#{ z_t_±(λ) ∈ Res_t_±(_M) : λ∈Ω_r^∗} = 0,the resonances being counted with their multiplicity given by(<ref>) and (<ref>). Notice that Theorem <ref> just says that the operator _M has noresonances in a punctured neighborhood of t_±(k) in the two-sheets Riemann surfaces ℳ_t_± where they are defined. Since near t_±(k) the discrete spectrum of the operator _Mcorresponds to resonance points z_t_±(λ) ∈ℳ_t_±with (λ) ≥ 0, then a first consequence of Theorem <ref> is thefollowing result giving a non cluster phenomena of real or non real eigenvaluesnear t_±(k). Assume that the potential M satisfies Assumption (A). Then, there is no sequence(ν_j)_j of non real or real eigenvalues of the operator _Maccumulating at t_±(k). Now, thanks to the parametrizations (<ref>), the embedded eigenvalues of theoperator _M near t_±(k) respectively from the left and theright are the resonances z_t_±(λ) = t_±(k) ∓λ^2√(k)with λ∈_+ sufficiently small. Therefore, as a second consequence ofTheorem <ref> together with <cit.>, we have the following: Assume that the potential M satisfies Assumption (A). Then, for any r > 0 small enough, the operator _M has no embedded eigenvalues in ( t_-(k),t_-(k) + r^2 ) ∪( t_+(k) - r^2,t_+(k) ). In particular, for M = M^∗, the set of embedded eigenvalues of the operator_M in ( t_-(k),t_+(k) ) is finite.§ ON A DIAGONALIZATION OF THE OPERATOR L FOR A REGULAR TREE GRAPH In this section, we summarize some results and tools we need and which are developed in <cit.>. We shall essentially follow <cit.> and we refer to the cited papers for more details.Define the operator Π on ℓ^2(𝒱) by(Πϕ)(v) := ∑_w : w → vϕ(w),where for two vertices v and w, v → w means that they are connected by an edge with | w | = | v | + 1. Using the inner product defined on the Hilbert space ℓ^2(𝒱) by (<ref>), it can be easily checkedthat the adjoint operator Π^∗ is given by(Π^∗ϕ)(v) := ∑_w : v → wϕ(w).If we let L_S be the spherical Laplacian defined on ℓ^2(𝒱) by(L_S ϕ)(v) := ∑_w : w ∼ v| w | = | v |ϕ(w),then the operator L given by (<ref>) can be written asL = Π + Π^∗ + L_S.In a regular rooted tree graph with k-fold branching, since there areno edges connecting vertices within each sphere, then L_S = 0 so thatL = Π + Π^∗.To diagonalize the operator L given by (<ref>), invariant subspaces M_n,n ≥ 0, for Π are firstly constructed in <cit.>. More precisely, we have thefollowing lemma:<cit.> The Hilbert space ℓ^2(𝒱) can be decomposed as an orthogonal direct sumℓ^2(𝒱) = ⊕_n = 0^∞ M_n =⊕_n = 0^∞⊕_r = n^∞ Q_n,r,where the subspaces M_n areL-invariant. 
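Although the proof proceeds through the decomposition of the Lemma above, the statement that σ(L) fills [ -2√(k), 2√(k) ] is easy to probe numerically on finite truncations of the tree. A minimal sketch (our own illustration, not part of the original analysis; the branching number and depth are arbitrary choices):

import numpy as np

# Illustrative check: eigenvalues of the adjacency operator L on a depth-d
# truncation of the rooted k-ary tree lie inside (-2*sqrt(k), 2*sqrt(k))
# and creep towards the band edges as the depth grows.
def tree_adjacency(k, depth):
    # vertices numbered breadth-first; vertex v has children k*v+1, ..., k*v+k
    n = (k**(depth + 1) - 1) // (k - 1) if k > 1 else depth + 1
    L = np.zeros((n, n))
    for v in range(n):
        for c in range(k * v + 1, k * v + k + 1):
            if c < n:
                L[v, c] = L[c, v] = 1.0
    return L

k = 2
for depth in (4, 6, 8, 10):
    ev = np.linalg.eigvalsh(tree_adjacency(k, depth))
    print(depth, ev.min(), ev.max(), 2 * np.sqrt(k))

The extreme eigenvalues approach ±2√(k) from inside as the depth grows, and the spectrum is symmetric under E ↦ -E since the tree is bipartite, a fact exploited later through the unitary operator Θ.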
By construction in this Lemma <ref>, we have Q_0,0 := ℓ^2(S_0) and ℓ^2(S_r) = ⊕_ℓ = 0^r Q_ℓ,r. A schematic interpretation yields a triangular diagram as in Figure 4.1 below.According to Lemma <ref>, for any n ≥ 0, the subspace M_n is invariant for theoperator L. Thus, L can be decomposed asL = ⊕_n = 0^∞ L_n,the operators L_n, n ≥ 0, being the restriction of L to M_n. So, in order todiagonalize the operator L, it suffices to do it for each operator L_n for n ≥ 0.Consider a vector ϕ∈ M_n, i.e.ϕ = ⊕_j = 0^∞ϕ_n,n+j,with ϕ_n,n+j∈ Q_n,n+j for any j ≥ 0. The idea is to construct an isomorphism between the subspace M_n and the space of Q_n,n-valued sequences,namely the space ℓ^2(^+,Q_n,n). By construction of the Q_n,r, n ≥ 0, r ≥ 0 (see for instance <cit.>), for any j ≥ 0, the operator( 1/√(k)Π)^j defines an isometry between Q_n,n+j and Q_n,n, and (<ref>) can be written asϕ = ⊕_j = 0^∞( 1/√(k)Π)^j χ_n + j,where χ := (χ_n + j)_j ≥ 0 defines a sequence of vectors lying inQ_n,n. Therefore, under the above considerations, the operatorW : M_n ⟶ℓ^2(^+,Q_n,n),W ϕ = χ,defines an isomorphism between the spaces M_n and ℓ^2(^+,Q_n,n). Indeed,we have⟨ϕ,ϕ⟩ = ∑_j = 0^∞⟨ϕ_n + j,ϕ_n + j⟩ = ∑_j = 0^∞⟨χ_n + j,χ_n + j⟩ = ⟨χ,χ⟩_ℓ^2(^+,Q_n,n).Now, let :=/2π be the torus and define the unitary operator U : M_n ≅ℓ^2(^+,Q_n,n) ⟶ L^2_ odd (𝕋,Q_n,n)acting asU ( (χ_n,χ_n + 1, … ) ) := 1/√(π)∑_j = 0^∞χ_n + jsin( (j + 1) θ).Notice that the inner product in L^2_ odd (𝕋,Q_n,n) is defined by⟨ f,g ⟩_L^2_ odd := ∫_𝕋⟨ f(θ),g(θ)⟩dθ.Hence, a direct computation shows that ⟨ U χ,U χ⟩_L^2_ odd =⟨χ,χ⟩_ℓ^2(^+,Q_n,n) = ⟨ϕ,ϕ⟩,where the last equality corresponds to (<ref>). Moreover, we have the following lemma:<cit.> For any n ≥ 0, we haveU L_n U^∗ = 2√(k)cos (θ).In particular, Lemma <ref> shows that for any n ≥ 0, the spectrum of theoperator L_n is equal to [ -2√(k),2√(k)] and is absolutelycontinuous. Hence, this implies (<ref>). § REPRESENTATION OF THE WEIGHTED RESOLVENT A (-L + K + 1 - Z)^-1 B^∗ In this section, we give a suitable representation of the weighted resolventA (-L + k + 1 - z)^-1 B^∗ which turns to be useful in our analysis, where Aand B are bounded operators on ℓ^2(𝒱).For n ≥ 0, let P_n be the orthogonal projection of ℓ^2(𝒱)onto M_n, the subspace defined in Lemma <ref>. Since M_n can be decomposed asM_n = ⊕_j = 0^∞ Q_n,n + j,then if we let (E_m^n,n+j)_0 ≤ m ≤ N_j denote an orthonormal basis of the finite-dimensional space Q_n,n + j for any j fixed, we haveP_n = ∑_j≥0∑_0 ≤ m ≤ N_j⟨·,E_m^n,n+j⟩E_m^n,n+j.Notice that for any j ≥ 0 fixed, we have 1 + N_j = Q_n,n+j < k^n + j =ℓ^2(S_n + j),since Q_n,n+j⊂ℓ^2(S_n + j). Furthermore, according to Section<ref>, for any 0 ≤ m ≤ N_j fixed, there exists a unique vectorχ_m^n,n+j∈ Q_n,n such thatE_m^n,n+j = ( 1/√(k)Π)^j χ_m^n,n + j.Our goal in this section is to prove the following result: Let A and B be two bounded operators on ℓ^2(𝒱). Then, for any zin the resolvent set of the operator -L + k + 1 and any φ∈ℓ^2(𝒱),we haveA (-L + k + 1 - z)^-1 B^∗φ (v) = ∑_v' ∈𝒱 K(v,v') φ (v'),where the kernel K(v,v') is given byK(v,v') :=1/2 √(k)√(2/π)∑_n≥0∑_j≥0ℓ ≥0∑_0 ≤ m ≤ N_j 0 ≤ q ≤ N_ℓ( B E_m^n,n+j) (v')⟨χ_m^n,n + j,χ_q^n,n + ℓ⟩×( - i e^i(j + ℓ + 2)Φ/√(u)√(4 - u) + i e^i| j - ℓ|Φ/√(u)√(4 - u)) ( A E_q^n,n + ℓ) (v) ,throughout the double change of variablesz + 2√(k) - (k + 1)/√(k) = u = 4 sin^2 ( Φ/2), (Φ) > 0. Let φ∈ℓ^2(𝒱) and z ∈ρ (-L + k + 1) the resolvent set ofthe operator -L + k + 1. 
Thanks to (<ref>), we have(-L + k + 1 - z)^-1= ∑_n≥0 P_n (-L_n + k + 1 - z)^-1 P_n= ∑_n≥0 P_n U^∗ U (-L_n + k + 1 - z)^-1 U^∗ U P_n,where U is the unitary operator defined by (<ref>). Thus, for any vector ψ∈ℓ^2(𝒱), we have⟨A (-L + k + 1 - z)^-1 B^∗φ,ψ⟩ = ∑_n≥0⟨ U (-L_n + k + 1 - z)^-1 U^∗ U P_n B^∗φ,U P_n A^∗ψ⟩_L^2_ odd,⟨·,·⟩_L^2_ odd being the inner product defined by (<ref>). Together with Lemma <ref>, this gives⟨ A (-L + k + 1 - z)^-1 B^∗φ,ψ⟩ = ∑_n≥0∫_𝕋( -2√(k)cos (θ) + k + 1 - z )^-1⟨ U P_n B^∗φ (θ),U P_nA^∗ψ (θ) ⟩dθ.From (<ref>), it follows that for any ϕ∈ℓ^2(𝒱) and any bounded operator W on ℓ^2(𝒱), we haveU P_n W ϕ (θ) = ∑_j≥0∑_0 ≤ m ≤ N_j⟨ϕ,W^∗ E_m^n,n+j⟩ U E_m^n,n+j (θ).Then, combining (<ref>) and (<ref>), we obtain⟨A (-L + k + 1 - z)^-1 B^∗φ,ψ⟩ = ∑_n≥0∑_j≥0ℓ ≥0∑_0 ≤ m ≤ N_j 0 ≤ q ≤ N_ℓ⟨φ,B E_m^n,n+j⟩⟨ψ,A E_q^n,n+ℓ⟩×∫_𝕋( -2√(k)cos (θ) + k + 1 - z )^-1⟨ U E_m^n,n+j (θ),U E_q^n,n+ℓ (θ)⟩dθ = ∑_n≥0∑_j≥0ℓ ≥0∑_0 ≤ m ≤ N_j 0 ≤ q ≤ N_ℓ⟨⟨φ,B E_m^n,n+j⟩A E_q^n,n+ℓ,ψ⟩×∫_𝕋( -2√(k)cos (θ) + k + 1 - z )^-1⟨ U E_m^n,n+j (θ),U E_q^n,n+ℓ (θ)⟩dθ.According to the construction of the unitary operator U and (<ref>), forp = m, q and p' = j, ℓ respectively, we haveU E_p^n,n+p' (θ) = 1/√(π)χ_p^n,n+p'sin( (p' + 1) θ).Putting this together with (<ref>), we obtain⟨ A (-L + k + 1 - z)^-1 B^∗φ,ψ⟩ = 1/π∑_n≥0∑_j≥0ℓ ≥0∑_0 ≤ m ≤ N_j 0 ≤ q ≤ N_ℓ⟨⟨φ,B E_m^n,n+j⟩⟨χ_m^n,n+j,χ_q^n,n+ℓ⟩ A E_q^n,n+ℓ,ψ⟩×∫_𝕋( -2√(k)cos (θ) + k + 1 - z )^-1sin( (j + 1) θ) sin( (ℓ + 1) θ) dθ.Now, (<ref>) implies that the action of the operator A (-L + k + 1 - z)^-1 B^∗ on ℓ^2(𝒱) can be described byA (-L + k + 1 - z)^-1 B^∗φ (v)= 1/π∑_n≥0∑_j≥0ℓ ≥0∑_0 ≤ m ≤ N_j 0 ≤ q ≤ N_ℓ⟨φ,B E_m^n,n+j⟩( A E_q^n,n+ℓ) (v)⟨χ_m^n,n+j,χ_q^n,n+ℓ⟩×∫_𝕋( -2√(k)cos (θ) + k + 1 - z )^-1sin( (j + 1) θ) sin( (ℓ + 1) θ) dθ.Since we have⟨φ,W E_m^n,n+j⟩ =∑_v' ∈𝒱φ (v') ( W E_m^n,n+j) (v'),W being as above, then it follows from (<ref>) thatA (-L + k + 1 - z)^-1 B^∗φ (v) = ∑_v' ∈𝒱 K(v,v') φ (v'),withK(v,v') :=1/π∑_n≥0∑_j≥0ℓ ≥0∑_0 ≤ m ≤ N_j 0 ≤ q ≤ N_ℓ( B E_m^n,n+j) (v')( A E_q^n,n+ℓ) (v)⟨χ_m^n,n+j,χ_q^n,n+ℓ⟩×∫_𝕋( -2√(k)cos (θ) + k + 1 - z )^-1sin( (j + 1) θ) sin( (ℓ + 1) θ) dθ.Then, to complete the proof of the theorem, it remains only to show that1/π∫_𝕋 ( -2√(k)cos (θ) + k + 1 - z )^-1sin( (j + 1) θ) sin( (ℓ + 1) θ) dθ = 1/2√(k)√(2/π)( - i e^i(j + ℓ + 2)Φ/√(u)√(4 - u) + i e^i| j - ℓ|Φ/√(u)√(4 - u)),where the relation between z, u and Φ is given by (<ref>). To do this,we have to deal with the discrete Fourier transform ℱ : ℓ^2(,)→ L^2(), defined for any x ∈ℓ^2(,) and f ∈ L^2() by(ℱx)(θ) := (2π)^-1/2∑_n∈e^-inθ x(n), ( ℱ^-1 f )(n) := (2π)^-1/2∫_ e^inθ f(θ) dθ.Let u ∈∖ [0,4] and introduce the following change of variablesu = 4 sin^2 ( Φ/2), (Φ) > 0.Then, it can be proved (cf. e.g. <cit.>) that we have( ℱ^-1( 2 - 2 cos (·) - u )^-1)(n) = ie^i | n |Φ/2 sin(Φ) =ie^i | n |Φ/√(u)√(4 - u),or equivalently1/π∫_( 2 - 2 cos(θ) - u )^-1 e^inθdθ = √(2/π)ie^i | n |Φ/√(u)√(4 - u).Now, (<ref>) with the help of the transformationssin(nθ) = 1/2i( e^inθ - e^-inθ) and( -2√(k)cos(θ) + k + 1 - z )^-1 = 1/√(k)( 2 - 2 cos(θ) - u )^-1,where u = z + 2√(k) - (k + 1)/√(k) give immediately (<ref>). This completes the proof of the theorem.§ RESONANCES NEAR Z = T_±(K) §.§ Definition of the resonances In this subsection, we define the resonances of the operator _Mnear the spectral thresholds t_±(k) given by (<ref>). 
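Before doing so, let us note that the Fourier-transform identity used at the end of the previous proof lends itself to a direct numerical check. A minimal sketch (our own verification; the value of u is an arbitrary point off the band [0,4]), where e^iΦ is obtained as the root z of z + 1/z = 2 - u with |z| < 1, which enforces the determination Im(Φ) > 0:

import numpy as np

# Check (1/2pi) * int e^{i n t} / (2 - 2 cos t - u) dt = i e^{i|n|Phi} / (sqrt(u) sqrt(4-u)),
# with u = 4 sin^2(Phi/2) and Im(Phi) > 0, so sqrt(u) sqrt(4-u) = 2 sin(Phi).
u = 1.3 + 0.7j                       # arbitrary point off [0, 4]
E = 2 - u                            # 2 - 2 cos(t) - u = E - 2 cos(t)
z = (E - np.sqrt(E**2 - 4)) / 2      # candidate for e^{i Phi}
if abs(z) > 1:                       # pick the root with |z| < 1
    z = 1 / z
sinPhi = (z - 1 / z) / 2j            # sin(Phi)
theta = np.linspace(-np.pi, np.pi, 200001)
for n in (0, 1, 3, 7):
    integrand = np.exp(1j * n * theta) / (2 - 2 * np.cos(theta) - u)
    lhs = np.trapz(integrand, theta) / (2 * np.pi)
    rhs = 1j * z ** abs(n) / (2 * sinPhi)
    print(n, abs(lhs - rhs))         # differences at quadrature accuracy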
As preparation,preliminary lemmas will be proved firstly.From now on, the potential perturbation M is assumed to satisfy Assumption (A).Moreover, the following determination of the complex square root∖ (-∞,0] √(·)⟶^+ := { z ∈ : (z) > 0 }will be adopted throughout this paper. For ε > 0 such that0 < ε < δ/4, we let D(0,ε)^∗ be the punctured neighborhood of 0 defined byD(0,ε)^∗ := {λ∈ : 0 < |λ| <ε}.Thanks to the first change of variables in (<ref>), to define and to study theresonances of the operator _M near the spectral thresholds t_±(k),it suffices to define and to study them respectively near u = 0 and u = 4. However,in practice, there is a simple way (see the comments just after Definition <ref>)allowing to reduce the analysis of the resonances near the second threshold t_+(k) tothat of the first one t_-(k). For further use, let e_± be the multiplicationoperators by the functionsv ⟼ e_± (v) := e^±δ/2| v |.We have the following lemma: Let z_t_-(λ) be the parametrization defined by (<ref>). Then, there exists0 < ε_0 ≤δ/8 small enough such that the operator-valued functionλ↦ e_- ( -L + k + 1 - z_t_-(λ) )^-1 e_-,admits an extension from D(0,ε_0)^∗∩^+ to D(0,ε_0)^∗, with values in ( ℓ^2(𝒱)) the set of compact linear operators on ℓ^2(𝒱). Moreover, this extension is holomorphic. By Theorem <ref>, for λ∈ D(0,ε)^∗∩^+, 0 < ε < δ/4 small enough, the operator e_- ( -L + k + 1 - z_t_-(λ) )^-1 e_-admits the kernelK(λ,v,v') = 1/2 √(k)√(2/π)( K_1(λ,v,v')+ K_2(λ,v,v') ),where K_1(λ,v,v') : = ∑_n≥0∑_j≥0ℓ ≥0∑_0 ≤ m ≤ N_j 0 ≤ q ≤ N_ℓe^-δ/2| v' | E_m^n,n+j (v')⟨χ_m^n,n + j,χ_q^n,n + ℓ⟩f_1(j,ℓ,λ) e^-δ/2| v | E_q^n,n + ℓ (v),andK_2(λ,v,v') := ∑_n≥0∑_j≥0ℓ ≥0∑_0 ≤ m ≤ N_j 0 ≤ q ≤ N_ℓ e^-δ/2| v' | E_m^n,n+j (v')⟨χ_m^n,n + j,χ_q^n,n + ℓ⟩f_2(j,ℓ,λ) e^-δ/2| v | E_q^n,n + ℓ (v),with f_1(j,ℓ,λ) := - i e^i(j + ℓ + 2) 2arcsinλ/2/λ√(4 - λ^2) , f_2(j,ℓ,λ) := i e^i| j - ℓ| 2arcsinλ/2/λ√(4 - λ^2). a) We want to prove the convergence of∑_v, v' ∈ 𝒱| K(λ,v,v') |^2 forλ∈ D(0,ε_0)^∗ for some 0 < ε_0 ≤δ/8 small enough. We point out that constants are generic, i.e. can change from an estimate to another. By(<ref>)–(<ref>), we have∑_v, v' ∈ 𝒱| K(λ,v,v') |^2 = 1/2kπ∑_r≥0 r' ≥0∑_v∈S_r v' ∈S_r'| K_1(λ,v,v')+ K_2(λ,v,v') |^2 ≤1/kπ∑_r≥0 r' ≥0∑_v∈S_r v' ∈S_r'( |K_1(λ,v,v') |^2 + | K_2(λ,v,v') |^2 ).Let us first prove that ∑_r≥0 r' ≥0∑_v∈S_r v' ∈S_r'| K_1(λ,v,v') |^2converges accordingly to the above claim. Thanks to (<ref>), the properties(in Section <ref>) of the vectors E_m^n,n+j, χ_m^n,n+j and (<ref>), we have ∑_r≥0 r' ≥0∑_v∈S_r v' ∈S_r'| K_1(λ,v,v') |^2= ∑_r≥0 r' ≥0∑_v∈S_r v' ∈S_r'|∑_n = 0^min(r,r')∑_j : n + j = r'ℓ : n + ℓ = r∑_0 ≤ m ≤ N_j 0 ≤ q ≤ N_ℓe^-δ/2 r'× E_m^n,n+j (v')⟨χ_m^n,n + j,χ_q^n,n + ℓ⟩( - i e^i(j + ℓ + 2) 2arcsinλ/2/λ√(4 - λ^2)) e^-δ/2r E_q^n,n + ℓ (v) |^2 ≤ C ∑_r≥0 r' ≥0∑_v∈S_r v' ∈S_r'( ∑_n = 0^min(r,r')∑_j : n + j = r'ℓ : n + ℓ = r∑_0 ≤ m ≤ N_j 0 ≤ q ≤ N_ℓe^-δ/2 r'e^-(j + ℓ + 2) ( 2arcsinλ/2)/|λ√(4 - λ^2)| e^-δ/2r)^2 ≤ C ∑_r≥0 r' ≥0∑_v∈S_r v' ∈S_r'( ∑_n = 0^min(r,r')∑_j : n + j = r'ℓ : n + ℓ = rk^n + j k^n + ℓ e^-δ/2 r'e^-(j + ℓ + 2) ( 2arcsinλ/2)/|λ√(4 - λ^2)| e^-δ/2r)^2 ≤ C ∑_r≥0 r' ≥0 k^3r k^3r' e^-δ re^-δ r'( ∑_n = 0^min(r,r')∑_j : n + j = r'ℓ : n + ℓ = re^-(j + ℓ + 2) ( 2arcsinλ/2)/|λ√(4 - λ^2)|)^2,for some constant C > 0. 
Since for 0 < |λ|≪ 1 we have2 arcsinλ/2 = λ + o(|λ|), then there exists0 < ε_0 ≤δ/8 small enough such that for each0 < |λ|≤ε_0, we havee^-(j + ℓ + 2) ( 2arcsinλ/2)/|λ√(4 - λ^2)|≤e^-((λ) - δ/8) (j + ℓ + 2)/|λ√(4 - λ^2)|.Then, it follows from (<ref>) that for each λ∈ D(0,ε_0)^∗,we have∑_r≥0 r' ≥0∑_v∈S_r v' ∈S_r'| K_1(λ,v,v') |^2 ≤ C ∑_r≥0 r' ≥0 k^3r k^3r'e^-δ r e^-δ r'( ∑_n = 0^min(r,r')∑_j : n + j = r'ℓ : n + ℓ = re^-((λ) - δ/8) (j + ℓ + 2)/|λ√(4 - λ^2)|)^2.Clearly, if F = F(j,ℓ) is a function of the variables j and ℓ, then ∑_n = 0^min(r,r')∑_j : n + j = r'ℓ : n + ℓ = r F(j,ℓ) = ∑_n = 0^min(r,r') F(r'-n,r-n).This together with (<ref>) imply that∑_r≥0 r' ≥0∑_v∈S_r v' ∈S_r'| K_1(λ,v,v') |^2≤ C ∑_r≥0 r' ≥0 k^3r k^3r'e^-δ r e^-δ r'( ∑_n = 0^min(r,r')e^-((λ)- δ/8) (r + r' + 2) e^2( (λ) - δ/8)n /|λ√(4 - λ^2)|)^2.Since (λ) - δ/8≤ 0, then it follows from (<ref>) that∑_r≥0 r' ≥0∑_v∈S_r v' ∈S_r'| K_1(λ,v,v') |^2 ≤ C ∑_r≥0 r' ≥0 k^3r k^3r'e^-δ r e^-δ r'( ∑_n = 0^min(r,r')e^-((λ)- δ/8) (r + r' + 2)/|λ√(4 - λ^2)|)^2 ≤ C ∑_r≥0 r' ≥0 k^3r k^3r' e^-δ re^-δ r'e^-2((λ) - δ/8) (r + r' + 2)/|λ^2 (4 - λ^2) | (r + 1)(r' + 1)= C ∑_r≥0 r' ≥0 (r + 1) k^3r e^-δ re^-2((λ) - δ/8) (r + 1) e^-2((λ) - δ/8) (r' + 1)/|λ^2 (4 - λ^2) |(r' + 1) k^3r' e^-δ r'≤ C ∑_r≥0 r' ≥0 (r + 1) k^3(r+1) e^-δ (r+1)e^-2((λ) - δ/8) (r + 1) e^-2((λ) - δ/8) (r' + 1)/|λ^2 (4 - λ^2) |× (r' + 1) k^3(r'+1) e^-δ (r'+1) = C ∑_r≥0 r' ≥0 (r + 1) e^-( δ/2- 3 ln(k) ) (r+1)e^-2 ( (λ) + δ/8)(r + 1) e^-2( (λ) + δ/8) (r' + 1)/|λ^2 (4 - λ^2) |× (r' + 1) e^-( δ/2 - 3 ln(k) ) (r'+1).Assumption (A) implies that δ/2 - 3 ln(k) ≥ 0.Thus, the r.h.s. and then the l.h.s. of (<ref>) is convergent for any λ∈ D(0,ε_0)^∗.Similarly let us prove that ∑_r≥0 r' ≥0∑_v∈S_r v' ∈S_r'| K_2(λ,v,v') |^2converges. As in (<ref>), we can show that ∑_r≥0 r' ≥0 ∑_v∈S_r v' ∈S_r'| K_2(λ,v,v') |^2= ∑_r≥0 r' ≥0∑_v∈S_r v' ∈S_r'|∑_n = 0^min(r,r')∑_j : n + j = r'ℓ : n + ℓ = r∑_0 ≤ m ≤ N_j 0 ≤ q ≤ N_ℓe^-δ/2 r' E_m^n,n+j (v')⟨χ_m^n,n + j,χ_q^n,n + ℓ⟩( i e^i | j - ℓ| 2arcsinλ/2/λ√(4 - λ^2)) e^-δ/2r E_q^n,n + ℓ (v) |^2 ≤ C ∑_r≥0 r' ≥0 k^3r k^3r' e^-δ re^-δ r'( ∑_n = 0^min(r,r')∑_j : n + j = r'ℓ: n + ℓ = re^- | j - ℓ|( 2arcsinλ/2)/|λ√(4 - λ^2)|)^2.Thus, similarly to (<ref>), for each λ∈ D(0,ε_0)^∗, we have∑_r≥0 r' ≥0∑_v∈S_r v' ∈S_r'| K_2(λ,v,v') |^2 ≤ C ∑_r≥0 r' ≥0 k^3r k^3r'e^-δ r e^-δ r'( ∑_n = 0^min(r,r')∑_j : n + j = r'ℓ : n + ℓ = re^-((λ) - δ/8) | j - ℓ|/|λ√(4 - λ^2)|)^2.In this case, we use (<ref>) to write ∑_r≥0 r' ≥0∑_v∈S_r v' ∈S_r'| K_2(λ,v,v') |^2 ≤ C ∑_r≥0 r' ≥0 k^3r k^3r'e^-δ r e^-δ r'( ∑_n = 0^min(r,r')e^-((λ)- δ/8) | r - r' |/|λ√(4 - λ^2)|)^2 ≤ C ∑_r≥0 r' ≥0 k^3r k^3r' e^-δ re^-δ r'e^-2((λ) - δ/8) | r - r' |/|λ^2 (4 - λ^2) | (r + 1)(r' + 1)= C ∑_r≥0 r' ≥0 (r + 1) e^-( δ/2- 3 ln(k) )re^- δ/2r e^-2((λ) - δ/8) | r - r' |) e^- δ/2r'/|λ^2 (4 - λ^2) |× (r' + 1) e^-( δ/2 - 3 ln(k) )r'.Assumption (A) implies that δ/2 - 3 ln(k) ≥ 0. Thus, the r.h.s.and then the l.h.s. of (<ref>) is convergent for any λ∈ D(0,ε_0)^∗.Now, since ∑_v, v' ∈ 𝒱| K(λ,v,v') |^2is convergent for λ∈ D(0,ε_0)^∗,then the operator given by (<ref>) belongs in ( ℓ^2(𝒱) ) for λ∈ D(0,ε_0)^∗, the class of Hilbert-Schmidt operatorson ℓ^2(𝒱). 
Consequently, the operator-valued function defined by(<ref>) can be extended from D(0,ε_0)^∗∩^+ toD(0,ε_0)^∗, with values in ( ℓ^2(𝒱) ).It remains to prove that this extension is holomorphic.b) To simply notation, let us denote this extension by D(0,ε_0)^∗∋λ↦ T(λ).Since the kernel of the operator T(λ) is given by K(λ,v,v') defined by (<ref>), then to show the claim, it is sufficient to prove it for the maps D(0,ε_0)^∗∋λ↦ T_s(λ),where for s = 1, 2, T_s(λ) is the operator with kernel given byK_s(λ,v,v') in (<ref>) and (<ref>). We give the proof only for the case s = 1, the case s = 2 being treated in a similar way. So, for λ∈ D(0,ε_0)^∗, let f_1(j,ℓ,λ) be the function defined by (<ref>) and D_1(λ) be the operator whose kernel is∑_n≥0∑_j≥0ℓ ≥0∑_0 ≤ m ≤ N_j 0 ≤ q ≤ N_ℓe^-δ/2| v' | E_m^n,n+j (v')⟨χ_m^n,n + j,χ_q^n,n + ℓ⟩∂_λ f_1(j,ℓ,λ) e^-δ/2| v | E_q^n,n + ℓ (v)= ∑_n≥0∑_j≥0ℓ ≥0∑_0 ≤ m ≤ N_j 0 ≤ q ≤ N_ℓe^-δ/2| v' | E_m^n,n+j (v')⟨χ_m^n,n + j,χ_q^n,n + ℓ⟩×( - i e^i(j + ℓ + 2) 2arcsinλ/2/λ^2 (4 - λ^2)) e^-δ/2| v | E_q^n,n + ℓ (v) ( 2i (j + ℓ + 2) - 4 - 2λ^2/√(4 - λ^2)).As in a) above, we can show that D_1(λ) ∈( ℓ^2(𝒱) ). Therefore, for λ_0 ∈ D(0,ε_0)^∗, the kernel of the Hilbert-Schmidt operator T_1(λ) - T_1(λ_0)/λ - λ_0- D_1(λ_0) is given byK_1(λ,λ_0,v,v') := ∑_n≥0∑_j≥0ℓ ≥0∑_0 ≤ m ≤ N_j 0 ≤ q ≤ N_ℓe^-δ/2| v' | E_m^n,n+j (v')⟨χ_m^n,n + j,χ_q^n,n + ℓ⟩×( f_1(j,ℓ,λ) - f_1(j,ℓ,λ_0)/λ - λ_0 -∂_λ f_1(j,ℓ,λ_0) ) e^-δ/2| v | E_q^n,n + ℓ (v).Thus, to conclude the proof of the lemma, we have just to justify that‖T_1(λ) - T_1(λ_0)/λ - λ_0- D_1(λ_0) ‖_(ℓ^2(𝒱))⟶ 0 as λ→λ_0. Since we have‖T_1(λ) - T_1(λ_0)/λ - λ_0- D_1(λ_0) ‖_(ℓ^2(𝒱))≤∑_v, v' ∈ 𝒱| K_1(λ,λ_0,v,v') |^2,then it suffices to prove that the r.h.s. of (<ref>) tends to zero as λ→λ_0. The Taylor-Lagrange formula applied to thefunction [0,1] ∋ t ↦ g(t) := f_1 (j,ℓ,tλ + (1-t)λ_0 )asserts there exists θ∈ (0,1) such thatf_1(j,ℓ,λ) = f_1(j,ℓ,λ_0) + (λ - λ_0)∂_λ f_1(j,ℓ,λ_0) + (λ - λ_0)^2/2∂_λ^(2) f_1 (j,ℓ, θλ + (1-θ)λ_0 ).Then, it follows from (<ref>) and (<ref>) that K_1(λ,λ_0,v,v')can be represented asK_1(λ,λ_0,v,v') = λ - λ_0/2∑_n≥0∑_j≥0ℓ ≥0∑_0 ≤ m ≤ N_j 0 ≤ q ≤ N_ℓe^-δ/2| v' | E_m^n,n+j (v')⟨χ_m^n,n + j,χ_q^n,n + ℓ⟩×∂_λ^(2) f_1 (j,ℓ, θλ + (1-θ)λ_0 ) e^-δ/2| v | E_q^n,n + ℓ (v).Now, easy but fastidious computations allow to see that there exists a familyof holomorphic functions F_p,q, 0 ≤ p ≤ q, on D(0,ε_0)^∗ such that∂_λ^(q) f_1(j,ℓ,λ) = ie^i(j+ℓ+2) 2arcsinλ/2∑_p=0^q F_p,q(λ) (j + ℓ + 2)^q-p.In particular, for q = 2, we have∂_λ^(2) f_1(j,ℓ,λ) = ie^i(j+ℓ+2) 2arcsinλ/2( F_0,2(λ)(j+ℓ+2)^2 + F_1,2(λ) (j+ℓ+2) + F_2,2(λ) ).Putting this together with (<ref>), we obtain| K_1(λ,λ_0,v,v') |≤|λ - λ_0 |/2| F_0,2( θλ + (1-θ)λ_0 ) |∑_n≥0∑_j≥0ℓ ≥0∑_0 ≤ m ≤ N_j 0 ≤ q ≤ N_ℓ| e^-δ/2| v' | E_m^n,n+j (v')⟨χ_m^n,n + j,χ_q^n,n + ℓ⟩× e^i(j+ℓ+2) 2arcsinθλ + (1-θ)λ_0/2 (j+ℓ+2)^2 e^-δ/2| v | E_q^n,n + ℓ (v) | + |λ - λ_0 |/2| F_1,2( θλ + (1-θ)λ_0 ) ||∑_n≥0∑_j≥0ℓ ≥0∑_0 ≤ m ≤ N_j 0 ≤ q ≤ N_ℓ| e^-δ/2| v' | E_m^n,n+j (v')⟨χ_m^n,n + j,χ_q^n,n + ℓ⟩× e^i(j+ℓ+2) 2arcsinθλ + (1-θ)λ_0/2 (j+ℓ+2) e^-δ/2| v | E_q^n,n + ℓ (v) | + |λ - λ_0 |/2| F_2,2( θλ + (1-θ)λ_0 ) |∑_n≥0∑_j≥0ℓ ≥0∑_0 ≤ m ≤ N_j 0 ≤ q ≤ N_ℓ| e^-δ/2| v' | E_m^n,n+j (v')⟨χ_m^n,n + j,χ_q^n,n + ℓ⟩× e^i(j+ℓ+2) 2arcsinθλ + (1-θ)λ_0/2e^-δ/2| v | E_q^n,n + ℓ (v) | =: ∑_p=0^2 Q_p(λ,λ_0,v,v').Thus, we have∑_v, v' ∈ 𝒱| K_1(λ,λ_0,v,v') |^2 ≤ Const.( ∑_v, v' ∈ 𝒱 Q_1(λ,λ_0,v,v')^2 + ∑_v, v' ∈ 𝒱 Q_2(λ,λ_0,v,v')^2 +∑_v, v' ∈ 𝒱 Q_3(λ,λ_0,v,v')^2 ).For p ∈{ 0,1,2 }, let us show that ∑_v, v' ∈ 𝒱 Q_p(λ,λ_0,v,v')^2 ⟶ 0 as λ→λ_0.Sinceθλ + (1-θ)λ_0 
belongs in D(0,ε_0)^∗ forλ, λ_0 ∈ D(0,ε_0)^∗, then similarly to (<ref>) we havee^-(j + ℓ + 2) ( 2arcsinθλ + (1-θ)λ_0/2) ≤ e^-((θλ + (1-θ)λ_0) - δ/8) (j + ℓ + 2)≤ e^δ/4(j + ℓ + 2).Therefore, by arguing as in a) above, we obtain that there exists a uniformconstant Const. in λ and λ_0 such that for q ∈{ 0,1,2 },∑_v, v' ∈ 𝒱 Q_p(λ,λ_0,v,v')^2 ≤ Const.|λ - λ_0 |^2/4| F_q,2(θλ + (1-θ)λ_0) |^2 λ→λ_0⟶ 0,implying by (<ref>) and (<ref>) that ‖T_1(λ) - T_1(λ_0)/λ - λ_0 - D_1(λ_0) ‖_(ℓ^2(𝒱))⟶ 0 as λ→λ_0. Thus, the operator-valued functionD(0,ε_0)^∗∋λ↦ T_1(λ) is holomorphic with derivative∂_λ T_1(λ) = D_1(λ). Similarly,D(0,ε_0)^∗∋λ↦ T_2(λ) is holomorphic, and then D(0,ε_0)^∗∋λ↦ T(λ). This concludes the proof of the lemma.It follows from the identities( _±M - z )^-1( I ±M (-L + k + 1 - z)^-1) = (-L + k + 1 - z)^-1, thate_- (_±M - z )^-1 e_-= e_- (-L + k + 1 - z)^-1 e_- ( I ± e_+ M(-L + k + 1 - z)^-1e_- )^-1.Assumption (A) on the potential perturbation M implies that e_+ M = ℳ e_-for some bounded operator ℳ on ℓ^2(𝒱). Thus, combining (<ref>) and Lemma <ref>, we obtain that the operator-valued functionsλ⟼± e_+ M( -L + k + 1 - z_t_-(λ) )^-1 e_-are holomorphic in D(0,ε)^∗, with values in ( ℓ^2(𝒱)). Therefore, by the analytic Fredholm extension theorem, the operator-valued functionsλ⟼( I ± e_+ M( -L + k + 1 - z_t_-(λ))^-1 e_- )^-1admit meromorphic extensions from D(0,ε_0)^∗∩^+ toD(0,ε_0)^∗. Defining the Banach spaces ℓ_±δ^2(𝒱) := e^±δ/2| v |ℓ^2(𝒱),we then get the following proposition: The operator-valued functionsλ⟼( _±M - z_t_-(λ) )^-1∈ℒ( ℓ_-δ^2(𝒱),ℓ_δ^2(𝒱) ), admit meromorphic extensions from D(0,ε_0)^∗∩^+ toD(0,ε_0)^∗. These extensions will be denoted by R_±M( z_t_-(λ) ) respectively. As in (<ref>), Assumption (A) on M implies that there exists a bounded operatorℬ on ℓ^2(𝒱) such that √(|M|) = ℬ e_-. Together with Lemma <ref>, this gives the following lemma:Let J be defined by the polar decomposition M = J |M|of the potential perturbation M. Then, the operator-valued functionsλ⟼𝒯_±M( z_t_-(λ) ) := ± J√(|M|)( -L + k + 1 - z_t_-(λ) )^-1√(|M|), admit holomorphic extensions from D(0,ε_0)^∗∩^+ to D(0,ε_0)^∗, with values in ( ℓ^2(𝒱) ). We are now in position to define the resonances of the operator _M nearthe spectral thresholds z = t_±(k). Note that in the next definitions, the quantityInd_γ(·) is defined in the appendix by (<ref>).We define the resonances of the operator _M near t_-(k) as the polesof the meromorphic extension R_M(z), of the resolvent( _M - z )^-1 in ℒ( ℓ_-δ^2(𝒱),ℓ_δ^2(𝒱) ).The multiplicity of a resonance z_t_- := z_t_-(λ) = t_-(k) + λ^2√(k), is defined by ( z_t_-) := Ind_γ ( I +𝒯_M( z_t_-(·) ) ),where γ is a small contour positively oriented containing λ as the onlypoint satisfying that z_t_-(λ) is a resonance of _M.As mentioned previously, to define the resonances of the operator _M neart_+(k), there exists a specific reduction which exploits a simple relation between thetwo thresholds t_±(k). Indeed, define the self-adjoint unitary operator Θon ℓ^2(𝒱) by (Θφ)(v) := (-1)^| v |φ(v).We thus have * Θ^2 = I,* Θ L Θ^-1 = -L,* ΘMΘ^-1 = M.In the last point, we have used the fact that M is the multiplication operatorby the function M. 
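The second property is simply the bipartiteness of the tree: Θ changes the sign of wavefunctions supported on odd spheres, and every edge connects consecutive spheres. A quick numerical confirmation on a finite truncation (our own illustration, reusing the breadth-first vertex numbering of the earlier sketch):

import numpy as np

# Check Theta^2 = I and Theta L Theta^{-1} = -L on a finite k-ary tree.
k, depth = 2, 7
n = (k**(depth + 1) - 1) // (k - 1)
L = np.zeros((n, n))
gen = np.zeros(n, dtype=int)          # generation |v| of each vertex
for v in range(1, n):
    parent = (v - 1) // k
    L[v, parent] = L[parent, v] = 1.0
    gen[v] = gen[parent] + 1
Theta = np.diag((-1.0) ** gen)        # (Theta phi)(v) = (-1)^{|v|} phi(v)
print(np.allclose(Theta @ Theta, np.eye(n)))   # True
print(np.allclose(Theta @ L @ Theta, -L))      # True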
Thus, it can be easily verified that we have

Θ( -L + k + 1 + M - z ) Θ^-1 = -(-L + k + 1) + M + 2(k + 1) - z,

so that

Θ e_- ( _M - z )^-1 e_- Θ^-1 = - e_- ( _-M - ( 2(k + 1) - z ) )^-1 e_-.

Set

ω := 2(k + 1) - z.

Since ω is near t_-(k) for z near t_+(k), then using relation (<ref>), we can define the resonances of the operator _M near t_+(k) as the poles of the meromorphic extension of the resolvent

- ( _-M - ω)^-1 : ℓ_-δ^2(𝒱)→ℓ_δ^2(𝒱),

near ω = t_-(k), similarly to Definition <ref>. More precisely, we have the following definition:

We define the resonances of the operator _M near t_+(k) as the poles of the meromorphic extension R_-M(ω) of the resolvent ( _-M - ω)^-1 in ℒ( ℓ_-δ^2(𝒱),ℓ_δ^2(𝒱) ), for ω given by (<ref>) near t_-(k). The multiplicity of a resonance

z_t_+ := z_t_+(λ) = 2(k + 1) - ( t_-(k) + λ^2√(k)) = t_+(k) - λ^2√(k),

is defined by

mult(z_t_+) := Ind_γ ( I + 𝒯_-M( 2(k + 1) - z_t_+(·) ) ),

where γ is a small positively oriented contour containing λ as the only point such that 2(k + 1) - z_t_+(λ) is a pole of R_-M(ω).

Notice that the resonances z_t_±(λ) near the spectral thresholds t_±(k) are defined in some two sheets Riemann surfaces ℳ_t_± respectively. On the other hand, the discrete eigenvalues of the operator _M near t_±(k) are resonances. Moreover, the algebraic multiplicity (<ref>) of a discrete eigenvalue coincides with its multiplicity as a resonance near t_±(k), respectively given by (<ref>) and (<ref>). Let us give the proof only for the equality (<ref>) = (<ref>); the equality (<ref>) = (<ref>) can be treated in a similar fashion. Let z_t_- := z_t_-(λ) ∈ℂ∖[t_-(k),t_+(k)] be a discrete eigenvalue of _M near t_-(k). Firstly, observe that Assumption (A) on M implies that M is of trace-class. In this case, it is well known (see e.g. <cit.>) that z_t_-∈σ_ disc( _M) if and only if h(z_t_-) = 0, where for z ∈ℂ∖ [t_-(k),t_+(k)], h is the holomorphic function defined by

h(z) := det( I + M( -L + k + 1 - z )^-1) = det( I + J √(|M|)( -L + k + 1 - z )^-1√(|M|)).

Moreover, the algebraic multiplicity (<ref>) of z_t_- is equal to its order as a zero of the function h. Namely, by the residue theorem,

mult(z_t_-) = ind_γ' h := 1/2iπ∫_γ'h'(z)/h(z) dz,

where γ' is a small positively oriented circle containing z_t_- as the only zero of h. Then, the claim follows directly from the equality

ind_γ' h = Ind_γ ( I + 𝒯_M( z_t_-(·) ) ),

see for instance <cit.> for more details.

§.§ Characterization of the resonances

In this subsection, we give a simple characterization of the resonances of _M near the spectral thresholds t_±(k). The first one concerns the resonances near z = t_-(k).

The following assertions are equivalent:

(a) z_t_- = z_t_-(λ) ∈ℳ_t_- is a resonance,

(b) z_t_- is a pole of R_M(z),

(c) -1 is an eigenvalue of 𝒯_M( z_t_-(λ) ).

(a) ⟺ (b) is just Definition <ref>, while (b) ⟺ (c) is a consequence of the identity

( I + J √(|M|) (-L + k + 1 - z)^-1√(|M|))( I - J √(|M|)( _M - z )^-1√(|M|)) = I,

coming from the resolvent equation. Similarly, we have the following proposition:

The following assertions are equivalent:

(a) z_t_+ = z_t_+(λ) ∈ℳ_t_+ is a resonance,

(b) 2(k + 1) - z_t_+(λ) is a pole of R_-M(ω) for ω given by (<ref>) near t_-(k),

(c) -1 is an eigenvalue of 𝒯_-M( 2(k + 1) - z_t_+(λ) ).

§ PROOF OF THEOREM <REF>

This section is devoted to the proof of Theorem <ref>. It will be divided into three steps.
§.§ A preliminary resultThe first step consists on refining the representations of the sandwiched resolvents 𝒯_M( z_t_-(λ) ) and𝒯_-M( 2(k + 1) - z_t_+(λ) ) near the spectral thresholds z = t_±(k).Notice that2(k + 1) - z_t_+(λ) = z_t_-(λ), so that our analysis will be just reduced to the operators𝒯_±M( z_t_-(λ) ). Recall that 𝒯_±M( z_t_-(λ) ) = ± J√(|M|)( -L + k + 1 - z_t_-(λ) )^-1√(|M|), and let us setγ(λ) := ( e^i (j + ℓ + 2) 2arcsinλ/2- 1 )/λ√(4 - λ^2)andβ(λ) := ( e^i | j - ℓ|2arcsinλ/2 - 1 )/λ√(4 - λ^2),j, ℓ≥ 0.By construction, as shows the proof of Lemma <ref>, forλ∈ D(0,ε_0)^∗ the operator √(|M|)( -L + k + 1 - z_t_-(λ) )^-1√(|M|)admits the integral kernel ±1/2 √(k)√(2/π)( 𝒦_1^(λ)(v,v') + 𝒦_2^(λ)(v,v') ),where 𝒦_1^(λ)(v,v') : =∑_n≥0∑_j≥0ℓ ≥0∑_0 ≤ m ≤ N_j 0 ≤ q ≤ N_ℓ√(|M|) (v')E_m^n,n+j (v')⟨χ_m^n,n + j,χ_q^n,n + ℓ⟩× i ( - γ(λ) - 1/λ√(4 - λ^2)) √(|M|) (v) E_q^n,n + ℓ (v),and𝒦_2^(λ)(v,v') : =∑_n≥0∑_j≥0ℓ ≥0∑_0 ≤ m ≤ N_j 0 ≤ q ≤ N_ℓ√(|M|) (v')E_m^n,n+j (v')⟨χ_m^n,n + j,χ_q^n,n + ℓ⟩× i ( β(λ) + 1/λ√(4 - λ^2)) √(|M|) (v) E_q^n,n + ℓ (v).Since γ and β can be extended to holomorphic functions on the open diskD(0,ε_0)^∗∪{ 0 }, then by combining identities(<ref>)-(<ref>), we get the following result:For λ∈ D(0,ε_0)^∗∪{ 0 }, we have𝒯_±M( z_t_-(λ) ) =± J √(2/π)Hol(λ),where Hol(λ) defines a holomorphic operator onD(0,ε_0)^∗∪{ 0 } with values in( ℓ^2(𝒱) ), and with kernel given byi/2 √(k)∑_n≥0∑_j≥0ℓ ≥0∑_0 ≤ m ≤ N_j 0 ≤ q ≤ N_ℓ√(|M|) (v')E_m^n,n+j (v')⟨χ_m^n,n + j,χ_q^n,n + ℓ⟩×( β(λ) - γ(λ) ) √(|M|) (v) E_q^n,n + ℓ (v). §.§ Reformulation of the problem Let ℋ be a separable Hilbert space, 𝒟⊆ be adomain containing 0, and (ℋ) denote the set of compactlinear operators in ℋ. For a holomorphic operator-valued function K : 𝒟∖{ 0 }⟶(ℋ),and a subset Ω⊆𝒟∖{ 0 }, a complexnumber λ∈Ω is said to be a characteristic value of theoperator-valued functionλ⟼ I + K(λ), if the operator I + K(λ) is not invertible (cf. Section <ref> for moredetails about the concept of characteristic value). By abuse of language, we shall sometimes say that λ is a characteristic value of the operator I + K(λ).Once there exists λ_0∈Ω such that I + K(λ_0) is invertible,then by the analytic Fredholm theorem, the set of characteristic valuesλ∈Ω of I + K(·) is discrete. Moreover, according to Definition <ref> and (<ref>), the multiplicity of a characteristic value λ isdefined by(λ) := Ind_γ( I + K(·) ),γ being a small contour positively oriented which contains λ as the onlypoint satisfying I + K(z) is not invertible, and with I + K(·) not vanishingon γ. We then can reformulate Propositions <ref> and <ref> in the following way: For λ∈ D(0,ε_0)^∗, the following assertions areequivalent:(a) z_t_- = z_t_-(λ) ∈ℳ_t_- is a resonance,(b) λ is a characteristic value of I + 𝒯_M( z_t_-(·) ). 
Moreover, thanks to (<ref>), the multiplicity of the resonance z_t_-(λ)coincides with that of the characteristic value λ.For λ∈ D(0,ε_0)^∗, the following assertions areequivalent:(a) z_t_+ = z_t_+(λ) ∈ℳ_t_+ is a resonance,(b) λ is a characteristic value ofI + 𝒯_-M( 2(k + 1) - z_t_+(·) ).Moreover, thanks to (<ref>), the multiplicity of the resonance z_t_+(λ)coincides with that of the characteristic value λ.§.§ End of the proof of Theorem <ref> From Propositions <ref>, <ref> and <ref> together with the identity(<ref>), it follows that z_t_±(λ) is a resonance of _Mnear t_±(k) if and only if λ is a characteristic value of I + 𝒯_±M( z_t_-(λ) ) = I ±J √(2/π)Hol(λ).Since the operator Hol(λ) is holomorphic in the open diskD(0,ε_0)^∗∪{ 0 } with values in ( ℓ^2(𝒱)), then Theorem <ref> holds by applying Proposition <ref> with* 𝒟 = Ω_r^∗∪{ 0 }, Ω_r^∗⊆ D(0,ε_0)^∗,* Z = { 0 },* F = I + 𝒯_±M( z_t_-(·) ).This concludes the proof of Theorem <ref>.§ APPENDIXWe recall some tools we need on characteristic values of finite meromorphic operator-valued functions. For more details on the subject, we referfor instance to <cit.> and the book <cit.>. The content of thissection follows <cit.>. Let ℋ be separable Hilbert space, and let ℒ(ℋ)(resp. GL(ℋ)) denote the set of bounded (resp. invertible) linearoperators in ℋ. Let 𝒰 be a neighborhood of a fixed point w ∈, andF : 𝒰∖{ w }⟶ℒ(ℋ)be a holomorphic operator-valued function. The function F is said to be finitemeromorphic at w if its Laurent expansion at w has the formF(z) = ∑_n = m^+∞ (z - w)^n A_n,m > - ∞,where (if m < 0) the operators A_m, …, A_-1 are of finite rank. Moreover, if A_0 is a Fredholm operator, then the function F is said to be Fredholmat w. In that case, the Fredholm index of A_0 is called the Fredholm index of Fat w. We have the following proposition:<cit.> Let 𝒟⊆ℂ be a connected open set, Z ⊆𝒟be a closed and discrete subset of 𝒟, and F : 𝒟⟶ℒ(ℋ) be a holomorphic operator-valued function in 𝒟\Z. Assume that: * F is finite meromorphic on 𝒟 (i.e. it is finite meromorphic near eachpoint of Z),* F is Fredholm at each point of 𝒟,* there exists w_0 ∈𝒟\ Z such that F(w_0) is invertible. Then, there exists a closed and discrete subset Z' of 𝒟 such that: * Z ⊆ Z',* F(z) is invertible for each z ∈𝒟\ Z',* F^-1 : 𝒟\ Z' ⟶ GL(ℋ) is finite meromorphic and Fredholm at each point of 𝒟.In the setting of Proposition <ref>, we define the characteristic values of F and their multiplicities as follows: The points of Z' where the function F or F^-1 is not holomorphic are called the characteristic values of F. The multiplicity of a characteristic value w_0 isdefined bymult(w_0) := 1/2iπ∮_| w - w_0 | = ρF'(z)F(z)^-1 dz,where ρ > 0 is chosen small enough so that { w ∈ : | w -w_0 |≤ρ}∩ Z' = { w_0 }. According to Definition <ref>, if the function F is holomorphic in 𝒟, then the characteristic values of F are just the complex numbers w where the operatorF(w) is not invertible. Then, results of <cit.> and <cit.> imply thatmult(w) is an integer.Let Ω⊆𝒟 be a connected domain with boundary ∂Ωnot intersecting Z'. The sum of the multiplicities of the characteristic values of the function F lying in Ω is called the index of F with respect to thecontour ∂Ω and is defined by Ind_∂Ω F := 1/2iπ∮_∂Ω F'(z)F(z)^-1 dz = 1/2iπ∮_∂Ω F(z)^-1 F'(z) dz. Acknowledgements: O. Bourget is supported by the Chilean FondecytGrant 1161732. D. Sambou is supported by the Chilean Fondecyt Grant 3170411.The authors express their gratitude to S. Golénia for bringing to their attentionthe paper <cit.>, and to V. 
Bruneau and S. Kupin for their helpful discussions and valuable suggestions.

[AW11] M. Aizenman, S. Warzel, Absence of mobility edge for the Anderson random potential on tree graphs at weak disorder, EPL 96 (2011), 37004.
[AW13] M. Aizenman, S. Warzel, Resonant delocalization for random Schrödinger operators on tree graphs, J. Eur. Math. Soc. 15 (2013), 1167-1222.
[Al97] C. Allard, Asymptotic Completeness via Mourre Theory for a Schrödinger Operator on a Binary Tree Graph, Master's thesis, University of British Columbia, April 1997.
[AF00] C. Allard, R. Froese, A Mourre estimate for a Schrödinger operator on a binary tree, Rev. Math. Phys. 12 (12) (2000), 1655-1667.
[AL15] N. Anantharaman, E. Le Masson, Quantum ergodicity on large regular graphs, Duke Math. J. 164 (4) (2015), 723-765.
[BBR07] J.-F. Bony, V. Bruneau, G. Raikov, Resonances and spectral shift function near the Landau levels, Ann. Inst. Fourier 57 (2) (2007), 629-671.
[BBR14] J.-F. Bony, V. Bruneau, G. Raikov, Counting function of characteristic values and magnetic resonances, Commun. PDE 39 (2014), 274-305.
[Br07] J. Breuer, Singular continuous spectrum for the Laplacian on certain sparse trees, Commun. Math. Phys. 269 (2007), 851-857.
[BK13] J. Breuer, M. Keller, Spectral analysis of certain spherically homogeneous graphs, Operators and Matrices 7 (4) (2013), 825-847.
[FLSSS] R. Froese, D. Lee, C. Sadel, W. Spitzer, G. Stolz, Localization for transversally periodic random potentials on binary trees, to appear in Journal of Spectral Theory, arXiv:1408.3961.
[FHH12] R. Froese, F. Halasan, D. Hasler, Absolutely continuous spectrum for the Anderson model on a product of a tree with a finite graph, J. Funct. Anal. 262 (3) (2012), 1011-1042.
[GS71] I. Gohberg, E. I. Sigal, An operator generalization of the logarithmic residue theorem and Rouché's theorem, Mat. Sb. (N.S.) 84 (126) (1971), 607-629.
[GGK90] I. Gohberg, S. Goldberg, M. A. Kaashoek, Classes of Linear Operators, Operator Theory: Advances and Applications, vol. I, Birkhäuser Verlag, Basel, 1990.
[GL09] I. Gohberg, J. Leiterer, Holomorphic Operator Functions of One Variable and Applications: Methods from Complex Analysis in Several Variables, Operator Theory: Advances and Applications, vol. 192, Birkhäuser Verlag, 2009.
[GGK00] I. Gohberg, S. Goldberg, N. Krupnik, Traces and Determinants of Linear Operators, Operator Theory: Advances and Applications, vol. 116, Birkhäuser Verlag, 2000.
[IJ15] K. Ito, A. Jensen, A complete classification of threshold properties for one-dimensional discrete Schrödinger operators, Rev. Math. Phys. 27 (1) (2015), 1550002 (45 pages).
[Kl98] A. Klein, Extended states in the Anderson model on the Bethe lattice, Adv. Math. 133 (1) (1998), 163-184.
[KMNE17] A. S. Kostenko, M. M. Malamud, H. Neidhardt, P. Exner, Infinite quantum graphs, Doklady Mathematics 95 (1) (2017), 31-36.
[Sa17] D. Sambou, On eigenvalue accumulation for non-self-adjoint magnetic operators, J. Math. Pures Appl. 108 (2017), 306-332.
[Ro06a] O. Rojo, On the spectra of certain rooted trees, Linear Algebra Appl. 414 (2006), 218-243.
[Ro06b] O. Rojo, The spectra of some trees and bounds for the largest eigenvalue of any tree, Linear Algebra Appl. 414 (2006), 199-217.
[RR07] O. Rojo, M. Robbiano, An explicit formula for eigenvalues of Bethe trees and upper bounds on the largest eigenvalue of any tree, Linear Algebra Appl. 427 (2007), 138-150.
[Sh15] M. Shamis, Resonant delocalization on the Bethe strip, Ann. Henri Poincaré 15 (8) (2014), 1549-1567.
[Si79] B. Simon, Trace Ideals and Their Applications, London Mathematical Society Lecture Note Series 35, Cambridge University Press, 1979.
http://arxiv.org/abs/1708.08032v1
{ "authors": [ "Olivier Bourget", "Diomba Sambou", "Amal Taarabt" ], "categories": [ "math-ph", "math.MP", "math.SP" ], "primary_category": "math-ph", "published": "20170827013808", "title": "Resonances on regular tree graphs" }
Topological π-junctions from crossed Andreev reflection in the Quantum Hall regime

F. Finocchiaro^1,2, F. Guinea^2,3 and P. San-Jose^1

^1Materials Science Factory, ICMM-CSIC, Sor Juana Ines de La Cruz 3, 28049 Madrid, Spain ^2IMDEA Nanociencia, Calle de Faraday 9, 28049 Madrid, Spain ^3Department of Physics and Astronomy, University of Manchester, Manchester M13 9PL, United Kingdom

December 30, 2023
==================================================================================

We consider a two-dimensional electron gas (2DEG) in the Quantum Hall regime in the presence of a Zeeman field, with the Fermi level tuned to filling factor ν=1. We show that, in the presence of spin-orbit coupling, contacting the 2DEG to a narrow strip of an s-wave superconductor produces a topological superconducting gap along the contact as a result of crossed Andreev reflection (CAR) processes across the strip. The sign of the topological gap, controlled by the CAR amplitude, depends periodically on the Fermi wavelength and strip width and can be externally tuned. An interface between two halves of a long strip with topological gaps of opposite sign implements a robust π-junction, hosting a pair of Majorana zero modes that do not split despite their overlap. We show that such a configuration can be exploited to perform protected non-Abelian tunnel-braid operations without any fine tuning.

During the last decade we have witnessed a surge in both theoretical and experimental progress towards the realisation of Majorana-based quantum computation.<cit.> Majorana zero modes (MZMs) are zero-energy bound quasiparticles of topological origin that are their own antiparticles (i.e. self-adjoint) and obey non-Abelian anyon statistics. As a result, the adiabatic exchange (or `braiding') of a pair of MZMs rotates the wavefunction of the degenerate ground state in a non-commutative fashion.<cit.> Such a process or its generalisations<cit.> can be viewed as a coherent manipulation of qubit states realised by pairs of MZMs. The interest in Majorana-based topological quantum computation stems from the fact that, as a result of the non-locality of the MZMs, local sources of noise neither affect the fidelity of the braiding operation nor induce decoherence of the ground state manifold. This property has inspired implementations of fault-tolerant computation schemes able in principle to beat decoherence at the hardware level.<cit.>

The fundamental ingredient needed to create MZMs is topological superconductivity, either intrinsic, like in p-wave superconductors,<cit.> or artificially designed, like in proximitised superconducting wires with strong spin-orbit coupling (SOC) in an external magnetic field.<cit.> More recently, two-dimensional electron gases (2DEGs) with induced superconductivity are being actively investigated as platforms for topological superconductivity.<cit.> In addition to the increased freedom afforded by the planar geometry, these systems allow for the formation of a new type of topological quasi-one-dimensional (1D) system, confined on both sides by two different superconductors with a phase difference π. For transparent enough contacts, such π-junctions can greatly reduce the magnetic fields required for MZMs to emerge.<cit.>

In this work we show that planar junctions allow for yet another implementation of 1D topological superconductivity, with a geometry dual to the above.
It is achieved by contacting a long and narrow strip of a conventional superconductor to a 2DEG in the Quantum Hall (QH) regime at filling factor ν=1. The proximitised region acquires a superconducting gap Δ, and as a result develops gapless QH edge states along each side. Due to local Andreev reflection (LAR) processes, these edge states are a mixture of electrons and holes,<cit.> see Fig. <ref>. Assuming that spin-orbit coupling (SOC) is present in the system, and that the strip width is comparable with or smaller than the superconducting coherence length, the QH edge states may become Cooper-paired through additional crossed Andreev reflection (CAR) processes<cit.> across the strip. We show that a topological superconducting gap Δ^* then opens along the contact, and MZMs emerge at either end of the strip. This possibility was suggested by Lee et al. in Ref. Lee:NatPhys17, where the requisite CAR processes were experimentally demonstrated in the case of graphene, although they concentrated on the ν=2 regime and not on the ν=1 condition required for the formation of MZMs. Here we theoretically investigate the conditions for CAR-induced topological superconductivity at ν=1. [We note that for ν=2 <cit.> (or even fillings in general), pairs of Majoranas will be generated which will not be protected against hybridisation into conventional fermions. Odd fillings, however, will always generate one protected unpaired Majorana zero mode.] (Related approaches have been explored in fractionalized QH systems supporting parafermions<cit.>.) We find that both the magnitude and, more importantly, the sign of the topological gap depend on the amplitude of the CAR processes. As a result, the sign of Δ^* can be controlled by adjusting the width of the strip and/or the electronic density of the proximitised region, which in turn determine the CAR amplitude. Reeg et al. anticipated such a possibility while studying a related system of two parallel nanowires coupled through a superconductor.<cit.> We show that this effect may be used to induce a sign change Δ^*→ -Δ^* along the strip by e.g. electrostatic gating. Such an interface implements a one-dimensional topological π-junction along the strip, hosting two degenerate MZMs that do not hybridise despite their spatial overlap.<cit.> Since the original induced Δ does not change sign (only the edge state gap Δ^* does), no external fine-tuning is required to maintain the π phase difference, and the MZMs remain protected at zero energy. As we will show, this allows for a powerful generalisation of tunnel-braiding strategies (originally proposed by Flensberg<cit.>) on the two MZMs in the junction, without the need to carefully control external parameters in the process.

Consider a normal (N) 2DEG with a proximitised superconducting strip (S) of width W_S, oriented along the x direction, see Fig. <ref>a. The N region is in the QH regime and is subject to a Zeeman field along x, allowing the electron density to be tuned to an odd filling factor ν=1. (Other mechanisms, such as interaction-induced spin instabilities, may play the role of the Zeeman field in some systems<cit.>). The S region has a uniform superconducting pairing Δ induced by proximity to the parent superconductor. We also assume that SOC is present in the system, either in the N region and/or in the S region (e.g. inherited from a superconductor made of heavy elements, such as NbN or NbTiN).
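To put numbers on the requirement that W_S be comparable with or smaller than the coherence length, the following back-of-the-envelope sketch (our own estimate, using the effective mass, pairing and strip chemical potential quoted in the Supplementary Information) evaluates the relevant length scales:

import numpy as np

# Order-of-magnitude estimate: Fermi wavelength and coherence length in the
# proximitised strip for m* = 0.015 m_e, Delta = 0.38 meV, mu = 4 meV
# (parameter values from the Supplementary Information).
hbar, m_e, meV = 1.0546e-34, 9.109e-31, 1.602e-22   # SI units; 1 meV in J
m_star = 0.015 * m_e
mu, Delta = 4.0 * meV, 0.38 * meV

k_F = np.sqrt(2 * m_star * mu) / hbar       # Fermi wavevector in the strip
v_F = hbar * k_F / m_star                   # Fermi velocity
lam_F = 2 * np.pi / k_F                     # Fermi wavelength
xi = hbar * v_F / Delta                     # induced coherence length

print(f"lambda_F = {lam_F*1e9:.0f} nm")     # roughly 160 nm
print(f"xi       = {xi*1e9:.0f} nm")        # roughly 500 nm

With these values ξ comes out at roughly half a micron, so strips a few hundred nanometres wide sit comfortably in the CAR-dominated regime W_S ≲ ξ.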
The electronic structure of this system, obtained using a tight-binding approximation on a square lattice (see Supplementary Information<cit.> for details), is studied in the following. Since the N region is in the ν=1 QH regime and the S strip is trivially gapped, each of the two NS interfaces hosts a single spin-polarized edge state. These states travel in opposite directions at opposite interfaces (see Fig. <ref>a). Local Andreev reflections at each interface transform the edge states into coherent superpositions of electrons and holes,<cit.> but do not open a gap because of the chiral nature of the carriers. However, in our geometry, with two parallel NS interfaces at either side of the strip, another type of Andreev reflection process can take place, wherein an electron on one interface is scattered as a hole into the other interface. This crossed Andreev reflection process has a significant amplitude only for strips narrower than the coherence length ξ≈ħ v_F/Δ. Unlike local Andreev reflection, CAR processes may open a superconducting gap Δ^* in the presence of SOC, since electron and hole edge states at opposite interfaces propagate in opposite directions at the same wave vector. The role of the SOC is to cant the spin away from the Zeeman field in opposite directions in the two edge states, so that they can pair to form a spin singlet. At ν=1 the gap resulting from CAR is topological, as can be seen by a direct mapping of the two spin-canted edge states plus pairing onto an Oreg-Lutchyn model<cit.> [Eq. (B4) in Supplementary Information]. Fig. <ref>a shows the gapped bandstructure of an infinite strip with significant CAR processes (left, W_S ≃ξ) and the gapless case without CAR (right, W_S ≫ξ).

The value of the topological gap Δ^* is entirely determined by the CAR amplitude, which in turn depends on the strip width W_S, the Fermi wavelength λ_F and the singlet amplitude governed by the proximity gap Δ and the SOC strength α. We have performed tight-binding simulations which show, specifically, that Δ^* is a real periodic function of the ratio W_S/λ_F with alternating sign, see Fig. <ref>b. This behavior is confirmed by an analytical calculation in terms of Green's functions, which yields

Δ^* ≈ ( 4π^2 t'^2 a^3 / W_S λ̃_F^2 μ̃ ) × Im( z/sin z ) sinθ,

where θ is the spin canting angle due to spin-orbit coupling, λ̃_F = 2π/√(2m μ̃), μ̃ = μ - k_F^2/2m, μ is the strip Fermi energy, k_F is the edge-state Fermi wavevector, and z = 2π√(1+iΔ/μ̃) × W_S/λ̃_F (see Supplementary Information<cit.> for details). This formalises the central finding of our work. The sign of Δ^* follows the change in the number of normal modes in the strip, given by ⌊ 2W_S/λ̃_F⌋. It is therefore likely to be realistically tuneable with electrostatic gating of the strip region, which may modify both its effective width W_S and its electronic density, or by adjusting the width lithographically.<cit.>

The possibility of changing the sign of the topological gap along the strip opens a new opportunity for the generation of MZMs. A long strip with uniform induced gap Δ but edge-state gaps of opposite sign in its two halves (Δ_1^*Δ_2^*<0) forms a topological π-junction, similar to a topological Josephson junction tuned to phase difference ϕ=π. Such a system then develops two MZMs localised at the junction [see Fig. <ref>(c,d)] that stay at zero energy despite their spatial overlap, as long as the phase difference across the junction remains π.
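The sign structure of the analytical expression above can be visualised directly. In the sketch below (our own illustration; the overall positive prefactors are dropped and the ratio Δ/μ̃ is an arbitrary choice, so only the sign and the envelope are meaningful), the factor Im(z/sin z)/W_S is evaluated on a grid of strip widths and its sign changes are reported together with the transverse mode count ⌊2W_S/λ̃_F⌋:

import numpy as np

# Sign alternation of the CAR-induced gap vs. strip width, taking the
# Green's-function factor Im(z / sin z) quoted above at face value.
# Prefactors are dropped; Delta/mu_tilde is an illustrative choice.
Delta_over_mu = 0.5
w = np.linspace(0.05, 5.0, 2000)             # w = W_S / lambda_tilde_F
z = 2 * np.pi * np.sqrt(1 + 1j * Delta_over_mu) * w
envelope = np.imag(z / np.sin(z)) / w        # ~ Delta* up to positive factors
modes = np.floor(2 * w).astype(int)          # transverse mode count

flips = np.where(np.sign(envelope[1:]) != np.sign(envelope[:-1]))[0]
for i in flips:
    print(f"sign change near W_S = {w[i]:.2f} lambda_F, mode count = {modes[i]}")

The envelope flips sign roughly once per added transverse mode and decays exponentially for wide strips, consistent with the suppression of CAR beyond the coherence length.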
The π phase difference between the two halves of the strip is robust. Since Δ^* on both sides is finite and real, its sign is insensitive to perturbations. The CAR π-junction is furthermore stabilised by the phase rigidity of the strip order parameter Δ.<cit.> Unlike in ϕ=π Josephson junctions, it does not require fine tuning of any external parameter such as the flux across the superconducting circuit or the strip parameters. As a result, CAR-induced topological superconductivity enables the creation of MZMs that remain decoupled regardless of their overlap. This offers great advantages in the context of coherent Majorana qubit manipulation and braiding, as outlined in the following.

We now present a possible application of the CAR π-junction to the challenge of non-Abelian Majorana braiding. Many proposals for the demonstration of the non-Abelian statistics of Majorana excitations hinge on the physical exchange (or braiding) in real space of pairs of Majoranas.<cit.> Other approaches, however, rest upon schemes that rotate the wavefunction without the need for actual MZMs to move spatially.<cit.> Among these, it has been suggested<cit.> that adiabatic tunnel processes of single electrons from a quantum dot into pairs of Majorana zero modes can result in arbitrary non-Abelian rotations of the ground-state manifold. These so-called tunnel-braid operations are extremely versatile, as they allow a universal set of single-qubit gates, in contrast to braiding, which only allows a limited set of operations. Unfortunately, tunnel-braiding has the drawback of requiring a precise, typically fine-tuned, phase difference of π between the MZMs involved throughout the operation. If the phase deviates from this value, the result of the operation becomes time-dependent and is no longer protected against decoherence. The robustness and lack of fine-tuning of CAR π-junctions promises to overcome this problem.

In Fig. <ref>a we present a possible geometry to implement a CAR-protected tunnel-braiding scheme. We deposit two narrow superconducting strips on a ν=1 2DEG, such that two independent CAR-induced topological gaps Δ_1^* and Δ_2^* open on each. One end of each strip terminates inside the 2DEG, so that the corresponding MZMs γ_1,2 lie within a finite distance of each other. The MZMs γ̃_1,2 on the far ends of the strips are assumed sufficiently far from the junction so as to become decoupled from γ_1,2. We control the Fermi level of the two strips, μ_1 and μ_2, by means of two independent gates, in order to tune the magnitude and sign of the topological gaps Δ^*_1,2. The two `inner' MZMs γ_1 and γ_2 are then coupled to a quantum dot through two tunnel barriers that may be tuned externally. The tunnelling couplings t_1,2 control the specific non-Abelian operation to perform. The dot is in the Coulomb-blockade regime, with occupation N. We adiabatically tune the dot level ε_D across an N→ N-1 transition between two adjacent Coulomb valleys. This transfers a single electron to the composite state of the two Majorana modes. Figure <ref>b shows the evolution of the low-energy single-particle Bogoliubov spectrum of the full dot-2DEG-strip system across this process, with dashed lines corresponding to mostly-dot states, and solid lines to MZM states in the strip. The two cases with equal (blue, ϕ=0) and opposite (red, ϕ=π) signs for Δ^*_1,2 show markedly different structure.
The conventional ϕ=0 case splits the MZMs away from zero close to the N→ N-1 transition, as they become resonantly coupled via the dot state.<cit.> Such an operation is not protected against noise and its result depends on timing. In contrast, the ϕ=π case shows MZMs that remain exactly at zero energy throughout the operation, as their hybridisation across the dot is forbidden by the opposite sign of Δ^*_1,2. The state after emptying the dot is then independent of timing and insensitive to noise in ε_D. As shown by Flensberg,<cit.> the transformation P within the degenerate ground state manifold associated with this process is a rotation by an angle π around an axis in the xy plane, controlled by the tunnel couplings t_1,2. If the couplings are then changed to t'_1,2, and the reverse adiabatic transition N - 1 → N on the dot is performed, the composite operation P'P rotates the quantum state of the Majoranas by an arbitrary angle around the z axis. In comparison, braiding two MZMs can only rotate the wavefunction about the z axis by an angle of π / 2. As no fine-tuning is required to maintain the ϕ=π condition in the CAR π-junction, the tunnel-braiding process should enjoy the same topological protection as standard spatial braiding. In Fig. <ref>c we show the MZM splitting across a resonant dot as we vary the Fermi energy under one of the strips, while the other is kept fixed. As expected, we find alternating ϕ=0 (red) and ϕ=π (blue) regions, in which the MZM splitting is finite and zero, respectively. The width in parameter space of the ϕ=π regions with MZMs pinned to zero is finite, unlike in topological Josephson junctions. In essence, we have presented here a scheme towards one-dimensional topological superconductivity that extends previous approaches that are based on the proximity effect, i.e. local Andreev reflections, of spinless helical electronic phases coupled to superconductors. While such approaches indeed produce a topological order parameter, its phase is fixed by the parent superconductor. In contrast, crossed Andreev reflections, relevant in geometries such as those discussed here, also produce a topological order parameter, but its sign may be either the same as or opposite to that of the parent, depending on the CAR amplitude itself. Controlling the sign of the topological gap in a stable way has many ramifications. We have shown how it may be exploited to produce stable, self-tuned π-junctions, wherein sizeable Majorana overlaps, which are problematic in more conventional Majorana devices, are no longer a concern, at least for pairs of MZMs at the junction. As a result, parametric braiding of Majoranas through e.g. tunnel-braiding schemes becomes significantly more realistic. The specific implementation of the CAR-induced topological gap described here is just one conceptually simple possibility, but it is not unique. Other phases, such as quantum anomalous Hall states, could also exhibit the requisite ν=1 spin-singlet states. The temperature requirements for using our protocol are limited by both the Zeeman splitting and Δ^*, which gives a conservative estimate between 0.1 K and 1 K, well within reach of current experiments on systems of this type. CAR-induced topological superconductivity is thus proposed as a promising road forward towards the next landmark in the field, the realisation of protected non-Abelian operations in the lab.
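As a quick consistency check of the claim that P'P implements an arbitrary z-axis rotation, the hedged sketch below (ours, with the in-plane axis angles φ and φ' simply standing in for whatever function of t_1,2 and t'_1,2 applies; that parametrization is our assumption) composes two π rotations about axes in the xy plane using Pauli matrices:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def pi_rotation(phi):
    # exp(-i (pi/2) n.sigma) = -i n.sigma for n = (cos phi, sin phi, 0)
    return -1j * (np.cos(phi) * sx + np.sin(phi) * sy)

phi, phi_prime = 0.3, 1.1                       # set by the two coupling configurations
composite = pi_rotation(phi_prime) @ pi_rotation(phi)

# up to a global phase, P'P = exp(-i (phi' - phi) sigma_z):
# a z rotation by the arbitrary angle 2 (phi' - phi)
target = -(np.cos(phi_prime - phi) * np.eye(2) - 1j * np.sin(phi_prime - phi) * sz)
print(np.allclose(composite, target))           # True
```

Up to a global phase, the product depends only on φ'-φ, which is why changing the couplings between the two dot transitions selects the rotation angle.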
§ Tight-binding model For the numerical calculations, we consider a two-dimensional square lattice that extends from -L/2 to L/2 along the x axis, and from -W/2 to W/2 along the y axis. The central superconducting strip, oriented along the x axis, occupies the area that goes from y = -W_S/2 to y = W_S/2. We use either periodic boundary conditions (PBC) or open boundary conditions (OBC) along both directions, as specified in the main text. The tight binding Hamiltonian that we use for all the calculations in the paper is given by H = H_0 + H_Z + H_S + H_SOC, where
H_0 = -∑_mn μ_n c_mn^† c_mn - t ∑_⟨ mn,m'n'⟩ c_mn^† c_m'n' e^-iϕ_mn,m'n'
H_Z = ∑_mn V_n^Z c_mn^† σ_x c_mn
H_S = ∑_mn Δ_n [ c_mn,↓ c_mn,↑ + c_mn,↑^† c_mn,↓^† ]
H_SOC = i∑_⟨ mn,m'n'⟩ (α_n / a^2) c_mn^† (σ×r_mn)_z c_m'n'
where:
* r_mn=(ma, na), with a the lattice parameter of the square lattice.
* ⟨ mn,m'n'⟩ indicates restriction to nearest neighboring sites.
* μ_n=μ_N for n ∈ [-W/2,-W_S/2] and n ∈ [W_S/2,W/2], and μ_n=μ≠μ_N for n ∈ [-W_S/2,W_S/2].
* ϕ_mn,m'n' is the Peierls phase acquired by the electrons under an external magnetic field, defined as ϕ_mn,m'n'=∫_r_m'n'^r_mn A· dr if n ∈ [-W/2,-W_S/2] or n ∈ [W_S/2,W/2], and 0 if n ∈ [-W_S/2,W_S/2] due to the Meissner effect. Under the choice of gauge A=(A_x(na),0,0), with A_x(na) = {[ B(na + W_S/2) n ∈ [-W/2,-W_S/2]; 0 n ∈ [-W_S/2,W_S/2]; B(na - W_S/2) n ∈ [W_S/2,W/2] ]. and performing the integral, ϕ_mn,m'n' becomes ϕ_mn,m'n' = {[ Ba(m-m')[a(n+n')/2 + W_S/2] n ∈ [-W/2,-W_S/2]; 0 n ∈ [-W_S/2,W_S/2]; Ba(m-m')[a(n+n')/2 - W_S/2] n ∈ [W_S/2,W/2] ].
* V_n^Z=V_Z≠ 0 for n ∈ [-W/2,-W_S/2] and n ∈ [W_S/2,W/2], and V_n^Z=0 for n ∈ [-W_S/2,W_S/2].
* Δ_n=0 for n ∈ [-W/2,-W_S/2] and n ∈ [W_S/2,W/2], and Δ_n=Δ≠ 0 for n ∈ [-W_S/2,W_S/2].
* α_n=α≠ 0 for n ∈ [-W/2,W/2].
* The creation and annihilation operators are two-component vectors in spin space, c_mn^†=(c^†_mn,↑,c^†_mn,↓).
The parameters used for the simulations that are common to all the results presented in the main text are m^*=0.015 m_e, B=0.34 T, Δ=0.38 meV, α = 3 · 10^-11 eV m, V_Z = 0.3 meV. In addition, in Fig. <ref>a we have employed a chemical potential of the proximitized region of μ=4 meV for both panels, while changing the width from W_S=300 nm (left panel) to W_S=2 μm (right panel). The chemical potential employed in Fig. <ref>b is fixed to μ=0.74 meV, while the width W_S varies from 0 to 950 nm. The ratio μ/Δ is therefore equal to 1.95. In Fig. <ref>c we have used a strip 3 μm long with PBC. The blue points represent the lowest eigenvalues corresponding to a uniform chemical potential of μ_1=10.5 meV throughout the strip, whereas the red ones represent the case of a strip that is cut in two halves, one with μ_1=10.5 meV and the other with μ_2=13.1 meV, characterized by gaps Δ^* of opposite sign. In Fig. <ref>d we have used W_S=220 nm, μ_1=13.3 meV and μ_2=16.5 meV. The total length of the system is 3.4 μm, and the length of the strip is 2.4 μm. The same width and chemical potentials have been used in Figures <ref>b and <ref>c, except for the fact that the strips are now spatially separated by 1 μm and are 2 μm long each. We have considered a system with PBC and excluded the eigenvalues associated with the external MZMs (identically zero) that are present in the case of Δ^*_1Δ^*_2<0 (cf. Fig. <ref>a).
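To make the assembly of this Hamiltonian concrete, the following is a minimal, hedged numpy sketch (not the authors' production code) that builds the BdG matrix for H = H_0 + H_Z + H_S + H_SOC on a very small lattice with OBC and prints the lowest excitation energies. The lattice constant, system size, both chemical potentials and the resulting hopping t are illustrative assumptions; the site-evaluated Peierls phase and the sign conventions of the discretized Rashba term are simplifications, and at this size the system is far from the converged quantum Hall regime of the paper.

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

a, Lx, Ly, WSn = 10.0, 30, 20, 6              # nm, sites in x, sites in y, strip rows
t = 38.1 / (0.015 * a ** 2)                   # hbar^2/(2 m* a^2) in meV, m* = 0.015 m_e
B, Dlt, alpha, VZ = 0.34, 0.38, 30.0, 0.3     # T, meV, meV*nm, meV (values from the text)
muN, muS = 1.0, 4.0                           # meV (illustrative assumption)
phi0 = 4.136e5                                # flux quantum h/e in T*nm^2

ys = (np.arange(Ly) - (Ly - 1) / 2) * a       # y coordinate of each row, strip centered
strip = np.abs(ys) <= WSn * a / 2

def A_x(y):                                   # gauge above: A vanishes inside the strip
    half = WSn * a / 2
    return 0.0 if abs(y) <= half else B * (y - np.sign(y) * half)

N = Lx * Ly
hon = np.zeros((2 * N, 2 * N), dtype=complex)   # on-site terms
hop = np.zeros((2 * N, 2 * N), dtype=complex)   # forward hoppings only
D = np.zeros((2 * N, 2 * N), dtype=complex)     # singlet pairing block

def blk(m, n):
    i = 2 * (m * Ly + n)
    return slice(i, i + 2)

for m in range(Lx):
    for n in range(Ly):
        mu = muS if strip[n] else muN
        hon[blk(m, n), blk(m, n)] = (4 * t - mu) * s0 + (0.0 if strip[n] else VZ) * sx
        if strip[n]:
            D[blk(m, n), blk(m, n)] = Dlt * (1j * sy)          # s-wave singlet pairing
        if m + 1 < Lx:   # x hopping: Peierls phase (site-evaluated) + Rashba
            ph = 2 * np.pi * a * A_x(ys[n]) / phi0
            hop[blk(m + 1, n), blk(m, n)] = -t * np.exp(1j * ph) * s0 - 1j * alpha / (2 * a) * sy
        if n + 1 < Ly:   # y hopping (A_y = 0 in this gauge) + Rashba
            hop[blk(m, n + 1), blk(m, n)] = -t * s0 + 1j * alpha / (2 * a) * sx

H0 = hon + hop + hop.conj().T
HBdG = np.block([[H0, D], [D.conj().T, -H0.conj()]])
print(np.sort(np.abs(np.linalg.eigvalsh(HBdG)))[:8])   # lowest excitations (meV)
```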
The hopping amplitudes from the MZMs to the dot are t_1=0.32 meV and t_2=0.51 meV. § Analytical calculation of the induced gap To derive the analytical dependence of the gap Δ^* on the parameters of the system, we consider an infinite system with a superconducting strip coupled to its surrounding 2DEG by a real hopping t' that could in principle be different from t in Eq. (<ref>). For t'=0 gapless edge states circulate along the 2DEG surface. A finite t' couples edge states at either side of the strip, opening a gap Δ^* in their spectrum. This can be understood by considering the effective Hamiltonian of the 2DEG, H_eff = H_0 + H_Z + H_SOC + Σ(ω), once the strip is integrated out, which introduces a self-energy Σ(ω) that pairs opposite edge states. The induced superconducting pairing Δ̃ is given by the off-diagonal (pairing) elements of the self-energy at ω=0 between opposite edges (the actual gap Δ^* depends on this pairing Δ̃, but also on the singlet amplitude of the 2DEG edge states, determined by H_SOC and to be discussed later).[We may neglect the frequency dependence of Σ(ω) for the purpose of computing Δ^* as long as Δ^*≪Δ, which is the physically relevant situation.] The self-energy from the strip reads Σ(ω) = t'^* G^(0)_tb(ω;y,y') t' |_y=0, y'=W_S. Here G^(0)_tb is the tight-binding Nambu Green's function of the decoupled strip, evaluated above at y, y' in opposite edges. A given k_x wavevector is implicit here, as we assume x translation symmetry. For simplicity we have shifted the strip to y ∈ [0,W_S] here. In the continuum limit a→ 0 the Green's function is G^(0) = lim_a→ 0 G^(0)_tb/a. We may decompose G^(0) in terms of the continuum eigenvalues ϵ_λ and eigenvectors φ_λ as G^(0)(ω;y,y')=∑_λ φ_λ(y)⊗φ_λ^†(y')/(ω-ϵ_λ). This is a 4× 4 matrix, as φ contains both spin and electron/hole amplitudes. The continuum Green's function, evaluated at the boundaries of the decoupled strip, vanishes by definition. One cannot, therefore, simply replace G^(0)_tb with G^(0) in Eq. (<ref>). As shown in <cit.>, the Green's function at the outermost sites of a system described by a simple tight binding model can be written, in the limit where the lattice constant is the smallest length scale in the problem, as: G^(0)_tb(ω;y,y') = - a^3 ∂_y∂_y' G^(0)(ω,y,y'). Hence, the pairing induced on the 2DEG edge state through crossed-Andreev reflection (CAR) processes reads Δ̃≈ -a^3 t'^2 [∂_y∂_y' F^(0)(ω=0;y,y')]_y=0,y'=W_S, where F^(0)=(1/4)Tr(τ_yσ_y G^(0)) is the off-diagonal (pairing, or anomalous) component of the continuum Green's function and τ,σ are Pauli matrices in the particle-hole and spin sectors, respectively. To compute G^(0) analytically we assume spin-orbit to be negligible inside the strip (it is assumed finite in the 2DEG only). Hence G^(0) is spin degenerate, and can be obtained by diagonalising the 2× 2 continuum Hamiltonian of the strip H_S=(k_y^2/2m-μ̃)τ_z +Δτ_x, μ̃=μ-k_x^2/2m, where m is the effective mass, μ is the chemical potential and Δ the pairing potential. The τ matrices are now Pauli matrices acting in a Cooper-pairing sector of a given spin, defined by the basis ψ=(ψ_↑,ψ_↓^†)^T. The eigenvalues of this Hamiltonian are given by ϵ_η=η√((k_y^2/2m-μ̃)^2 + Δ^2)=η√(ξ^2 + Δ^2), η=± 1, where we have defined ξ=k_y^2/2m-μ̃.
The associated normalized spinors are φ_η=( [ u_η; v_η ]) = (1/√(2ϵ_η)) ( [ η√(ϵ_η+ξ); √(ϵ_η-ξ) ]). For a given eigenvalue of the problem, the most general eigenstate solution is given by φ_η(y)=( [ u_η; v_η ]) [ A_η e^i k_y y + B_η e^-i k_y y]. The coefficients A_η and B_η are found by imposing the boundary conditions that the wavefunction of the isolated strip needs to vanish at the boundaries: φ_η(y=0)=φ_η(y=W_S) = 0, which yields A_η=-B_η and the quantization of the wavevector along the y direction, k_y^n=nπ/W_S. The eigenvalues ϵ_λ and eigenvectors φ_λ of the isolated strip, indexed by λ=(n,η) quantum numbers, therefore read ϵ_η^n=ημ̃√((n^2λ̃_F^2/4W_S^2-1)^2 + (Δ/μ̃)^2) and φ_η^n(y)= (1/√(2ϵ^n_η)) ( [ η√(ϵ^n_η+ξ_n); √(ϵ^n_η-ξ_n) ]) √(2/W_S) sin(nπ y/W_S), where λ̃_F=2π/√(2mμ̃) and ξ_n=μ̃[(nλ̃_F/2W_S)^2 -1]. The Green's function of the isolated system is G^(0)(ω;y,y')=∑_n=1^∞ G_n^(0)(ω;y,y'), where G_n^(0)(ω;y,y')= ∑_η=± 1 φ_η^n(y) ⊗[φ_η^n(y')]^†/(ω-ϵ_η^n). The off-diagonal component of this matrix, F_n^(0)=(1/2)Tr(τ_x G_n^(0)), evaluated at ω=0, reads F_n^(0)(ω=0; y,y')=(2/W_S) sin(k_y^n y) sin(k_y^n y') ∑_η=± 1 √((ϵ_η^n)^2-ξ_n^2)/(2(ϵ_η^n)^2). The double derivative evaluated at the boundaries is [∂_y∂_y' F^(0)_n(ω=0;y,y')]_y=0,y'=W_S = (1/(W_S^3 Δ)) (nπ)^2 cos(nπ)/(1+(μ̃/Δ)^2[1-(nλ̃_F/2W_S)^2]^2). Performing the sum over n we get that the effective pairing induced by the strip in the external edge states is Δ̃ = -a^3 t'^2 ∑_n=1^∞ [∂_y∂_y' F^(0)_n(ω=0;y,y')]_y=0,y'=W_S = a^3 t'^2 (1/(Δ W_S^3)) (4π^2/(λ̃_F/W_S)^2) (μ̃/Δ) × Im[z csc(z)], where z=2π (√(i+μ̃/Δ)/√(μ̃/Δ)) (W_S/λ̃_F) coincides with the z defined in the main text. Related expressions were derived in Ref. 24 corresponding to various limiting cases of the general result above. Recall that all the dependence on the k_x wavevector is inside μ̃=μ-k_x^2/2m and λ̃_F=2π/√(2mμ̃). At k_x=0 these quantities become the actual Fermi energy μ and Fermi wavelength λ_F=2π/√(2mμ) of the superconducting strip, respectively. The quantity ⌊ 2W_S/λ̃_F⌋ represents the total number of open modes in the quasi-1D strip with a given k_x in the absence of superconductivity. Equation <ref> then shows that the sign of the CAR-induced pairing is given by the parity of the number of open modes, see Fig. <ref>. We now consider how the presence of spin-orbit coupling in the 2DEG allows for the pairing Δ̃ to open a gap Δ^* in the edge state spectrum. We employ a simplified low-energy description of the edge states. Given that the 2DEG bulk is insulating, we consider just the 1D chiral channels generated at the two sides of the strip in the ν=1 QH regime. These can be modelled by the 4×4 continuum Hamiltonian H=(k^2/2m-μ_N+V_Zσ_x +α kσ_y)τ_z -Δ̃τ_yσ_y, where k=k_x, μ_N is the chemical potential, α is the spin-orbit coupling and Δ̃ is the CAR-induced pairing derived above (evaluated at k_x=k_F, i.e. at the wavevector for which the edge states cross zero energy). We recall that the σ matrices act in spin space and the τ matrices in particle/hole space. This Hamiltonian is akin to the Oreg-Lutchyn model<cit.>, and is a valid description of the edge states at either side of the strip at low energies. For Zeeman fields V_Z<μ_N the model has two carrier species propagating along each direction, which corresponds to filling ν=2 of the QH state. The model is then in a topologically trivial phase. For strong enough Zeeman fields V_Z>√(Δ̃^2+μ_N^2), however, it can be driven into a topologically non-trivial phase.
One spin sector is thus depleted, so that the corresponding filling is ν=1 in the absence of the superconducting strip (one mode propagating along each direction on each side of the strip). The pairing Δ̃ then creates a topologically non-trivial gap Δ^* that leads to Majorana bound states. The wavefunction satisfying the Schrödinger equation is now a 4-component spinor ψ=(ψ_↑,ψ_↓,ψ_↑^†,ψ_↓^†)^T. At μ_N=0, the system opens a gap of Δ^*(k)= √(k^4 + 4 m (m γ_k - Γ_k))/m, where γ_k = V_Z^2 +α^2 k^2+ Δ̃^2 and Γ_k=√(α^2 k^6 + V_Z^2 k^4 + 4 m^2 V_Z^2 Δ̃^2). If we work within the limit in which Δ̃ is the smallest scale of the problem, then the wavevector at which the gap opens is well approximated by the Fermi wavevector at zeroth order in Δ̃. Thus k=k_F ≈√(2 m^2 α^2 + √(m^2 (V_Z^2 + m^2 α^4))) and, therefore, Δ^*=Δ^*(k_F)= 2 √(β[1 + Δ̃^2/β - √(1 + 4Δ̃^2 V_Z^2/β^2)]), where β = 2 V_Z^2 + 4 m^2 α^4 +4 m α^2 √(V_Z^2 + m^2 α^4)= 2 (V_Z^2 + α^2 k_F^2). Now one can expand Δ^* in series as a function of Δ̃ up to first order, obtaining Δ^*≈√(1 - 2 V_Z^2/β) Δ̃ = (α k_F/√(V_Z^2 + α^2 k_F^2)) Δ̃. Writing the Zeeman and Rashba parts of the Hamiltonian as H_Z+SOC=h·σ, with h=(V_Z, α k_F,0), we may define a canting angle θ such that θ=arcsin(α k_F/√(V_Z^2 + α^2 k_F^2)) and Δ^*=sinθ Δ̃. This angle represents the spin-orbit-induced deviation of the edge state spins away from the Zeeman axis σ_x. Now, plugging in the values that we used in the main text for the numerical calculations, we obtain the behaviour of Δ^* as a function of W_S/λ̃_F shown in Fig. <ref>a, in excellent agreement with the full numerics shown in the main text. (Note that in the main text W_S is normalized to λ_F instead of the λ̃_F used in Fig. <ref>.) § Stability of the π-junction In this section, we study the stability of the π-junction to perturbations in the phase difference between Δ^*_1 and Δ^*_2. To confirm that a π-junction in Δ^* is indeed a stable solution for the system, one must demonstrate that a phase difference ϕ^*=π between Δ^*_1 and Δ^*_2 corresponds to a minimum in the Josephson free energy under variations of ϕ^*. Bardeen et al.<cit.> and Beenakker and van Houten<cit.> demonstrated, using complementary approaches, that the free energy of a generic Josephson junction may be written, at finite temperature T and up to a phase-independent constant, as E_J(ϕ)= -k_B T ∑_ϵ_n<0 ln( 2 cosh ϵ_n(ϕ)/2k_B T), where the sum is performed over both spin flavours and over both particle- and hole-like levels. In the low temperature limit E_J(ϕ) reduces to lim_T → 0 E_J(ϕ)= 1/2∑_ϵ_n<0 ϵ_n(ϕ). Variations in the phase of the induced gap ϕ^* can be generated through variations in the phase ϕ of the left and right portions of the parent superconductor on top of the two half-strips, which is a controllable parameter in the model. Hence, computing the free energy E_J(ϕ) and knowing the relation between ϕ^* and ϕ, one may establish whether the π-junction is stable. Intuitively, the total free energy, which includes the energy of the parent superconductor, will be a competition between the phase rigidity of ϕ and the phase rigidity of ϕ^*, which in the π-junction configuration are out of phase. We have performed numerical calculations of the Josephson free energy of the junction as a function of parent phase difference ϕ ∈ [-2π, 2π] in two cases. In case (a) the ratio W_S/λ_F on the two sides of the junction is such that for ϕ=0 the induced gap has also ϕ^*=0 (panel a of Fig. <ref>), and the π-junction case (b) which has ϕ^*=π at ϕ=0 (panel b of Fig. <ref>).
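The free-energy formula above is straightforward to evaluate once a spectrum ε_n(ϕ) is available. As a hedged, self-contained illustration (deliberately not the CAR junction itself), the sketch below applies it to the familiar short-junction Andreev level ε(ϕ)=Δ√(1-τ sin²(ϕ/2)), for which the minimum indeed falls at ϕ=0:

```python
import numpy as np

def free_energy(phi, Delta=0.38, tau=0.8, kT=0.01):
    # one Andreev level per spin; cosh is even, so summing the +/- eps
    # BdG partners over the negative-energy set gives a factor of 2
    eps = Delta * np.sqrt(1.0 - tau * np.sin(phi / 2.0) ** 2)   # meV
    return -kT * 2.0 * np.log(2.0 * np.cosh(eps / (2.0 * kT)))

phis = np.linspace(-np.pi, np.pi, 201)
E = np.array([free_energy(p) for p in phis])
print(round(phis[np.argmin(E)], 3))   # 0.0: this conventional junction minimises at phi = 0
```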
The results for the free energy (panels c,d, respectively) show that the minimum is obtained in both configurations when ϕ=0. In other words, the phase rigidity of the parent superconductor is stronger than that of the induced gap, and hence determines the equilibrium configuration. It is more expensive to generate a phase difference of π in ϕ than in ϕ^*. Thus, in case (b), the ϕ^*=π configuration is thermodynamically stable.We are grateful to L. Chirolli, E. Prada, C. Reeg, J. Klinovaja and D. Loss for fruitful discussions. F. F. and F. G. acknowledge the financial support by Marie-Curie-ITN Grant No. 607904-SPINOGRAPH. F. F. and P.S-J. acknowledge financial support from the Spanish Ministry of Economy and Competitiveness through Grant No. FIS2015-65706-P (MINECO/FEDER).
Understanding and Comparing Deep Neural Networks for Age and Gender Classification
Sebastian Lapuschkin, Fraunhofer Heinrich Hertz Institute, 10587 Berlin, Germany (hhi.fraunhofer.de)
Alexander Binder, Singapore University of Technology and Design, Singapore 487372, Singapore (sutd.edu.sg)
Klaus-Robert Müller, Berlin Institute of Technology, 10623 Berlin, Germany (tu-berlin.de)
Wojciech Samek, Fraunhofer Heinrich Hertz Institute, 10587 Berlin, Germany (hhi.fraunhofer.de)
December 30, 2023
=======================================================================================================================================
Recently, deep neural networks have demonstrated excellent performance in recognizing the age and gender of human face images. However, these models were applied in a black-box manner with no information provided about which facial features are actually used for prediction and how these features depend on image preprocessing, model initialization and architecture choice. We present a study investigating these different effects. In detail, our work compares four popular neural network architectures, studies the effect of pretraining, evaluates the robustness of the considered alignment preprocessings via cross-method test set swapping and intuitively visualizes the model's prediction strategies in given preprocessing conditions using the recent Layer-wise Relevance Propagation (LRP) algorithm. Our evaluations on the challenging Adience benchmark show that suitable parameter initialization leads to a holistic perception of the input, compensating artefactual data representations. With a combination of simple preprocessing steps, we reach state of the art performance in gender recognition. § INTRODUCTION Since SuperVision <cit.> entered the ImageNet <cit.> challenge in 2012 and won by a large margin, much progress has been made in the field of computer vision with the help of Deep Neural Networks (DNN). Improvements in network architecture and model performance have been steady and fast-paced since then <cit.>. The use of artificial neural networks has also revolutionized learning-based approaches in other research directions beyond classical computer vision tasks, by learning to read subway plans <cit.>, understanding quantum many-body systems <cit.>, decoding human movement from EEG signals <cit.> and matching or even exceeding human performance in playing games such as Go <cit.>, Texas hold'em poker <cit.>, various Atari 2600 games <cit.> or Super Smash Bros. <cit.>. Automated facial recognition and estimation of gender and age using machine learning models has held a high level of attention for more than two decades <cit.> and has become ever more relevant due to the abundance of face images on the web, and especially on social media platforms. The introduction of DNN models to this domain has largely replaced the need for hand crafted facial descriptors and data preprocessing, and considerably increased possible prediction performances at an incredible rate. DNN models have not only been successfully applied to age and gender recognition, but also to the classification of emotional states <cit.>.
In the previous three years alone, age recognition rates increased from 45.1% <cit.> to 64% <cit.> and gender recognition rates from 77.8% to reportedly 91% <cit.> on the recent and challenging Adience benchmark <cit.>, mirroring the overall progress on other available benchmarks such as the Images of Groups data set <cit.>, the LFW data set <cit.> or the Gallagher Collection Person data set <cit.>. Next to the indisputable performance gains across the board, probably the most important factor for the popularity of DNN architectures is the low entry barrier provided by intuitive and generic (layer) building blocks, the one-fits-all applicability to many learning problems and, most importantly, the availability of highly performing and accessible software for training, testing and deployment: Caffe <cit.>, Theano <cit.>, and Tensorflow <cit.>, to name a few, supported by powerful GPU hardware. However, until recently, DNNs and other complex, non-linear learning machines have been used in a black-box manner, providing little information about which aspect of an input causes the actual prediction. Recent efforts to explain such complex models have resulted in several approaches and methods <cit.> allowing for insights beyond the performance ratings obtainable on common benchmarks. This is a welcome development, as in critical applications such as autonomous driving or in the medical domain, it is often of special importance to know why a model decides the way it does, given a certain input, and whether it can be trusted outside laboratory settings <cit.>. In this paper, we compare the influence of model initialization with weights pretrained on two real world data sets to random initialization, and analyze the impact of (artefactual) image preprocessing steps on model performance on the Adience benchmark dataset for different recent DNN architectures. We show, via cross-method test set swapping, that suitable pretraining can yield a robust set of starting model weights that compensates for artefactual representations of the data. Using Layer-wise Relevance Propagation <cit.>, we visualize how those choices made prior to training affect how the classifier interacts with the input on pixel level, how the provided input is used to make a decision, and which parts of it are used. We rectify the gender recognition performance of <cit.> as quoted in <cit.> with a more plausible value, and report our own result, which slightly exceeds that baseline. Via a combination of simple preprocessing steps, we can reach state of the art performance on gender recognition from human face images on the Adience benchmark dataset. § RELATED WORK One of the more recent face image data sets is the Adience benchmark <cit.>, which was published in 2014, containing 26,580 photos across 2,284 subjects with a binary gender label and one label from eight different age groups[(0-2, 4-6, 8-13, 15-20, 25-32, 38-43, 48-53, 60-)], partitioned into five splits. The key principle of the data set is to capture the images as close to real world conditions as possible, including all variations in appearance, pose, lighting condition and image quality, to name a few.
These conditions provide for an unconstrained and challenging learning problem: The first results on the Adience benchmark achieved 45.1% accuracy for age classification and 77.8% accuracy for gender classification using a pipeline including a robust, (un)certainty based in-plane facial alignment step, Local Binary Pattern (LBP) descriptors, Four Patch LBP descriptors and a dropout-SVM classifier <cit.>. For reference, the same classification pipeline achieves 66.6% accuracy for age classification and 88.6% accuracy for gender classification on the Gallagher data set. The authors of <cit.> introduce a 3D landmark-based alignment preprocessing step, which computes frontalized versions of the unconstrained face images from <cit.>, which slightly increases gender classification accuracy to 79.3% on the Adience data set, otherwise using the same classification pipeline from <cit.>. The first time a DNN model was applied to the Adience benchmark was with <cit.>. The authors resorted to an end-to-end training regime, the face frontalization preprocessing from <cit.> was omitted and the model was completely trained from scratch, in order to demonstrate the feature learning capabilities of the neural network type classifier. The architecture used in <cit.> is very similar to the BVLC Caffe Reference Model <cit.>, with the fourth and fifth convolution layers being removed. The best reported accuracy ratings increased to 50.7% for age classification and 86.6% for gender classification, using an over-sampling prediction scheme with 10 crops taken from a sample (4 from the corners and the center crop, plus mirrored versions) instead of only the sample by itself <cit.>. To the best of our knowledge, the current state of the art results for age and gender predictions are reported in <cit.> and <cit.> with 64% and 91% accuracy respectively. The model from <cit.> was the winner of the ChaLearn Looking at People 2015 challenge <cit.> and uses the VGG-16 layer architecture <cit.>, which has been pretrained on the IMDB-WIKI face data set. This data set was also introduced in <cit.> and comprises 523,051 labelled face images collected from IMDb and Wikipedia. Prior to pretraining on the IMDB-WIKI data, the model was initialized with the weights learned for the ImageNet 2014 challenge <cit.>. The authors attribute the success of their model to large amounts of (pre)training data, a simple yet robust face alignment preprocessing step (rotation only), and an appropriate choice of network architecture. The 91% accuracy achieved by the commercial system from <cit.> is supposedly backed by 4,000,000 carefully labelled but non-public training images. The authors identify their use of landmark-based facial alignment preprocessing as a critical factor to achieve the reported results. Unfortunately no details are given about the model architecture in use. The authors of <cit.> compare their results to <cit.> and other systems, yet only selectively list the age estimation results of competing methods, such as <cit.>. The authors of <cit.> also report the gender recognition performance of <cit.> as only 88.75%, which is rather low given the early results from <cit.>, the performance of <cit.> on age recognition and our own attempts to replicate the models of referenced studies. Recapitulating, we can identify three major factors contributing to the performance improvements among the models listed in Table <ref>: (1) Changes in architecture. (2) Prior knowledge via pretraining.
(3) Optional dataset preparation via alignment preprocessing. In the following sections, this paper will briefly describe a selection of DNN architectures and investigate the influence of random weight initialization against pretraining on generic (ImageNet) or task-specific (IMDB-WIKI) real world data sets, as well as the impact of data preprocessing by comparing affine reference frame based alignment techniques to coarse rotation-based alignment. Due to its size, the unconstrained nature of the data and the availability of previous results, we use the Adience benchmark data set as an evaluation sandbox. The dataset is available both in a version with only rotation alignment and in a version with images preprocessed using the affine in-plane alignment of <cit.>, which puts the depicted faces closer to a reference frame of facial features. We then use Layer-wise Relevance Propagation (LRP) <cit.> to give a glimpse into the model's prediction strategy, visualizing the facial features used for prediction on a per-sample basis in order to explain major performance differences. § ARCHITECTURES, PREPROCESSING AND MODEL INITIALIZATION This section provides an overview about the evaluated DNN architectures, data preprocessing techniques and weight initialization choices. All models are trained using the Caffe Deep Learning Framework <cit.>, with code based on <https://github.com/GilLevi/AgeGenderDeepLearning>, containing the configurations to reproduce the results from <cit.>. §.§ Evaluated Models We compare the architectures of the model used in <cit.> (in the following referred to as AdienceNet), the BVLC Caffe Reference Model <cit.> (or short: CaffeNet), the GoogleNet <cit.> and the VGG-16 <cit.>, on which state of the art performance on age classification has been reported in <cit.>. The AdienceNet is structurally similar to the CaffeNet, with the main difference lying in smaller convolution masks learned in the input layer (7× 7 vs 11× 11) and two fewer convolution layers being present. The number of hidden units composing the fully connected layers preceding the output layer is considerably lower (512 vs 4096) for AdienceNet. The VGG-16 consists of 13 convolution layers of very small kernel sizes of 2 and 3, which are interleaved with similarly small pooling operations, followed by two fully connected layers with 4096 hidden units each, and a fully connected output layer. The fourth model we use and evaluate is the GoogleNet, which connects a series of inception layers. Each inception layer realizes multiple convolution/pooling sequences of different kernel sizes (sizes 3× 3 to 7× 7 in the input inception module) in parallel, feeding from the same input tensor, whose outputs are then concatenated along the channel axis. Compared to the VGG-16 architecture, the GoogleNet is fast to train and evaluate, while slightly outperforming the VGG-16 model on the ImageNet 2014 Challenge with 6.6% vs 7.3% top-5 error in the classification task <cit.>. §.§ Data Preprocessing One choice to be made for training and classification regards data preprocessing. The SVM-based system from <cit.> improves upon <cit.> by introducing a 3D face frontalization preprocessing step, with the goal of rendering the inputs to the pipeline invariant to changes in pose. Landmark-based preprocessing is also identified in <cit.> as an important step for obtaining the reported model performances.
Both <cit.> and <cit.> only employ simple rotation based preprocessing, which roughly aligns the input faces horizontally, trusting the learning capabilities of neural networks to profit from the increased variation in the data and learn suitable data representations. The Adience benchmark data set provides both a version of the data set with images roughly rotated to horizontally aligned faces, as well as an affine 2D in-plane aligned version for download. We prepare training and test sets from both versions, using and adapting the original splits and data preprocessing code for <cit.> available for download on GitHub. We also create a mixed data set from a union of both previous data sets, which has double the number of training samples and allows the models to be trained on both provided alignment techniques simultaneously. §.§ Weight Initialization An invaluable benefit of DNN architectures is the option to use pretrained models as a starting point for further training. Compared to random weight initialization, using pretrained models as starting points often results in faster convergence and overall better model results, due to initializing the model with meaningful filters. In this paper, we compare models initialized with random weights to models starting with weights trained on other data sets, namely the ImageNet and IMDB-WIKI data sets, whenever model weights are readily available. That is, we try to replicate the results from <cit.> and train the AdienceNet model only from scratch, since no weights for either pretraining data set are available. Instead, we use the comparable CaffeNet to estimate the results obtainable when initializing the model with ImageNet weights. We also train the GoogleNet from scratch and initialized with ImageNet weights. Due to the excessive training time required for the VGG-16 model, we only try to replicate the results from <cit.> and train models both initialized with available ImageNet and IMDB-WIKI weights. § VISUALIZING MODEL PERCEPTION We complement our quantitative analysis in Section <ref> with qualitative insights on the perception and reasoning of the models by explaining the predictions made via the importance of features for or against a decision at input level. Following the success of DNNs, the desire to understand the inner workings of those black box models has vitalized research efforts dedicated to increasing the transparency of complex models. Several methods for explaining individual predictions have emerged since then, with robust yet computationally expensive occlusion-based <cit.> and sampling-based analysis <cit.>, (gradient-based) sensitivity analysis <cit.> and backpropagation-type approaches <cit.> among them. In an intensive study <cit.>, Layer-wise Relevance Propagation (LRP) was found to outperform the considered competing approaches in computing meaningful explanations for decisions made by DNN classifiers. Further, in contrast to sampling- or occlusion-based approaches, the method is computationally inexpensive and applicable to a wide range of architectures and classifier types <cit.>. We therefore use LRP to complement the quantitative results shown in Section <ref> and visualize the perception of the model and its interaction with the input under the evaluated training conditions.
For our experiments, we use the current version[<https://github.com/sebastian-lapuschkin/lrp_toolbox/tree/caffe-wip>] of the toolbox <cit.> provided by the authors. We refer the interested reader to <cit.> for a tutorial on methods for understanding and interpreting deep neural networks. §.§ Layer-wise Relevance Propagation for DNNs LRP is a principled and general approach to decompose the output of a decision function f, given an input x, into so-called relevance values R_p for each component p of x such that ∑_p R_p = f(x). The method operates iteratively from the model output to its inputs layer-by-layer in a backpropagation-style algorithm, computing relevance scores R_i for hidden units in the interim. Each R_i corresponds to the contribution an input or hidden variable x_i has had to the final prediction, such that f(x) = ∑_i R_i is true for all layers. The method assumes that the decision function of a model can be decomposed as a feed-forward graph of neurons, x_j = σ(∑_i x_i w_ij + b_j), where σ is some monotonically increasing nonlinear function (e.g. a ReLU), x_i are the neuron inputs, x_j is the neuron output and w_ij and b_j are the learned weight and bias parameters. The behaviour of LRP can be described by taking a single neuron j as an example: that neuron receives a relevance quantity R_j from neurons of the upper layer, which is to be redistributed to its input neurons i in the lower layer, proportionally to the contribution of i in the forward pass: R_i← j = z_ij/z_j R_j. Here, z_ij is a quantity measuring the contribution of neuron i to the activation of neuron j and z_j is the aggregation of all forward messages z_ij over i at j. The relevance score R_i at neuron i is then consequently obtained by pooling all incoming relevance quantities R_i← j from neurons j to which i contributes: R_i = ∑_j R_i ← j. Both the above relevance decomposition and pooling steps satisfy a local conservation property, R_i = ∑_j R_i ← j and ∑_i R_i← j = R_j, ensuring f(x) = ∑_i R_i for i iterating over the neurons of any layer of the network. The relevance redistribution obtained from Equations <ref> and <ref> is a very general one, with exact definitions depending on a neuron or input's type and position in the pipeline <cit.>. All DNN models considered in this paper consist of ReLU-activated (convolutional) feature extraction layers towards the bottom, followed by inner product layers serving as classifiers <cit.>. We therefore apply to inner product layers the ϵ-decomposition R_i← j = x_i w_ij/(b_j + ∑_i x_i w_ij) R_j, with small epsilon (ϵ=0.01) of matching sign added to the denominator for numeric stability, to faithfully and consistently represent the decisions made via the layers' linear mappings. Since the ReLU activations of the convolutional layers below serve as a gate to filter out weak activations, we apply the αβ decomposition formula with β=-1 <cit.> R_i← j = (α z^+_ij/∑_i z^+_ij + β z^-_ij/∑_i z^-_ij)R_j, which handles the activating and inhibiting parts of z_ij separately as z^+_ij and z^-_ij and weights them with α and β respectively <cit.>. Since z_ij = z^+_ij + z^-_ij, enforcing α+β=1 ensures the conservation property from Equation <ref>. Theoretical insights into the above decomposition types can be found in <cit.>. Once relevance scores are obtained on (sub)pixel level, we sum-pool the relevance values over the color channel axis. This leaves us with only one value R_p per pixel p.
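To make the ϵ-rule concrete, here is a hedged, self-contained toy implementation on a two-layer fully connected ReLU network with random weights (the experiments in this paper instead apply LRP to trained convolutional networks, with the αβ-rule in the lower layers); relevance is conserved up to the share absorbed by the bias and stabilizer terms.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 6)), 0.1 * rng.normal(size=6)
W2, b2 = rng.normal(size=(6, 1)), 0.1 * rng.normal(size=1)
x = rng.normal(size=4)

a1 = np.maximum(0.0, x @ W1 + b1)     # ReLU hidden layer
f = (a1 @ W2 + b2)[0]                 # scalar model output f(x)

def lrp_epsilon(x_in, W, b, R_out, eps=0.01):
    z = x_in[:, None] * W                         # z_ij = x_i * w_ij
    zj = z.sum(axis=0) + b                        # z_j includes the bias b_j
    zj = zj + eps * np.where(zj >= 0, 1.0, -1.0)  # epsilon of matching sign
    return (z / zj) @ R_out                       # R_i = sum_j (z_ij / z_j) R_j

R_hidden = lrp_epsilon(a1, W2, b2, np.array([f]))
R_input = lrp_epsilon(x, W1, b1, R_hidden)
print(f, R_input.sum())   # approximately equal: conservation up to bias/eps absorption
```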
We visualize the results using a color map centered at zero, since R_p ≈ 0 indicates neutral or no contribution of input component p to f(x) and R_p > 0 and R_p < 0 identify components locally speaking for or against the global prediction. All models use vastly different filter sizes (from 2 to 11) in the bottom layers. We follow <cit.> in distributing R_j for all neurons of some of the lower layers uniformly across their respective inputs, such that the granularity of the visualizations for all models is comparable. § EVALUATION AND RESULTS We score all trained models using the oversampling evaluation scheme <cit.>, by using the average prediction from ten crops (four corner and one center crop, plus mirrored versions) per sample. Results for age and gender prediction are shown in Tables <ref> and <ref> respectively. The columns of both tables correspond to the described models; the AdienceNet, CaffeNet, GoogleNet and VGG-16. Following previous work we also report 1-off accuracy results – the accuracy obtained when predicting at least the age label adjacent to the correct one – for the age prediction task. The row headers describe the training and evaluation setting: A first value of [i] signifies the use of [i]n-plane face alignment from <cit.> as a preprocessing step for training and testing, [r] stands for [r]otation based alignment and [m] describes results obtained when both rotation aligned and in-plane aligned images have been [m]ixed for training and images from the [r] test set have been used for evaluation. Second values [n] or [w] describe weight initialization using Image[n]et and IMDB-[w]IKI respectively. No second value means the model has been trained from scratch with random weight initialization. The results in the above tables list the measured performance after a fixed amount of training steps. Intermediate models which might have shown slightly better performance are ignored in favour of comparability. With our attempt to replicate the results from <cit.> based on the code provided by the authors, we managed to exceed the reported results in both accuracy (+1.2%) and 1-off accuracy (+2.7%) for age prediction, and in accuracy (+1.5%) for gender prediction. As expected, the structurally comparable CaffeNet architecture obtains comparable results for both learning problems with random model weight initialization. We then further compared the relatively fast-to-train CaffeNet model to the GoogleNet model in all data preprocessing configurations when trained from scratch and fine-tuned based on the ImageNet weights. We try to replicate the measurements from <cit.> to verify the observations made based on the other models. Here, we did not fully manage to reach the reported results, despite using the model pre-trained on the IMDB-WIKI data as provided by the authors. However, we come close to the reported results, with slight differences in both accuracy (-1.2%) and 1-off accuracy (-0.8%), averaged over all five splits of the data with a model trained on the mixed training set. In all evaluated settings shown in Figure <ref> we can observe overall trends in the choices of architecture, dataset composition and preprocessing, and model initialization. §.§ Remarks on Model Architecture In all settings, the CaffeNet architecture is outperformed by the more complex and deeper GoogleNet and VGG-16 models. For gender classification under comparable settings, the best VGG-16 models outperform the best GoogleNet models.
Figure <ref> visualizes the different characteristics of input faces as used by the classifiers to predict either male or female gender. We observe that model performance correlates with network depth, which in turn correlates with the structure observable in the heatmaps computed with LRP. For instance, all models recognize female faces dominantly via hair line and eyes, and males based on the bottom half of the face. The CaffeNet model tends to concentrate more on isolated aspects of a given input compared to the other two, especially for men, while being less certain in its prediction, reflected by the stronger negative relevance. §.§ Observations on Preprocessing For all three models, we observe for both prediction problems the overall trend that the in-plane alignment preprocessing step is not beneficial to classifier performance compared to rotation alignment. The only exception to this trend is the randomly initialized GoogleNet model, which loses one percent accuracy for age prediction under rotation alignment while still gaining performance in measured 1-off prediction. We attribute the better performance on only rotation aligned images to the potential of DNNs to learn canonically meaningful sets of features for the domain of face images. For the face images aligned using the technique presented in <cit.>, this is more difficult. Especially for images of children, the faces aligned to reference frames suitable for adults result in head shapes with aspect ratios uncharacteristic for the age group, or even in faulty alignments. Figure <ref> demonstrates the nature of this artefactual noise introduced to the data by unsuitable alignment. All models benefit the most from combining both the rotation aligned and the landmark aligned data sets for training. For one, this effectively doubles the training set sizes, but also – perhaps more importantly – allows the learning of a more robust feature set: The models trained on a combination of both the landmark aligned and rotation aligned images perform well on test sets resulting from both preprocessing techniques. Tables <ref> and <ref> show results for models trained on the combined set which were evaluated on the rotation aligned test set. Performance measurements on in-plane aligned data are only insignificantly lower (<1%). In order to underline the effect of increased robustness of the models trained on the more diverse [r]otation aligned training set, we evaluated models trained on [i]n-plane aligned images with [r]otation aligned test images and vice versa. Corresponding model performances are listed in Tables <ref> and <ref>. Some models trained on data prepared with one alignment technique evaluated against the test set of the other perform even worse than the early SVM-based models from <cit.>, despite their competitive results from the combined training set. The models trained on the in-plane aligned images have more difficulty predicting on the unseen setting than the models trained on the only rotated images, where the original facial pose and the proportions of the face image are mostly preserved. For the VGG-16 model, we compared the in-plane alignment to the mixed training set – the worst to the best expected results. Here again, the mixed training data results in a better model than when only in-plane alignment is used. Figure <ref> shows an overview of all results over training time.
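For reference, rotation-based alignment of the kind compared above can be sketched in a few lines of OpenCV. This is a hedged illustration rather than the exact pipeline of <cit.>: the eye coordinates are assumed to come from some landmark detector, and the rotation sign may need flipping depending on the coordinate conventions in use.

```python
import cv2
import numpy as np

def rotate_align(img, left_eye, right_eye):
    # rotate the face so the line between the two eyes becomes horizontal
    (lx, ly), (rx, ry) = left_eye, right_eye
    angle = np.degrees(np.arctan2(ry - ly, rx - lx))   # tilt of the eye line
    center = ((lx + rx) / 2.0, (ly + ry) / 2.0)        # rotate about the mid-eye point
    M = cv2.getRotationMatrix2D(center, angle, 1.0)
    h, w = img.shape[:2]
    return cv2.warpAffine(img, M, (w, h), flags=cv2.INTER_LINEAR)

# hypothetical usage with detected eye positions:
# aligned = rotate_align(face_bgr, (112, 130), (208, 126))
```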
§.§ Observations on Initialization We find that the GoogleNet model responds well to fine-tuning on the weights pre-trained on ImageNet, showing an increase in performance for both classification problems and in all dataset configurations. The CaffeNet, however, slightly loses performance when fine tuned for age group prediction, while benefiting in gender prediction. The better response of the GoogleNet compared to the CaffeNet, when initialized with their respective ImageNet weights, might be caused by the quality of the initial parameters: While the GoogleNet achieves a 6.6% top-5 error on ImageNet, the CaffeNet only reaches 19.6%. Evaluating on the mismatched (cross-alignment) test data (Tables <ref> and <ref>), both fine tuned models trained on rotation aligned images manage to recover their respective performance ratings compared to models trained from scratch and evaluated on the matching data. The GoogleNet model even exceeds the performance of the same architecture initialized randomly but both trained and evaluated on the rotated images. The measurable beneficial effect of appropriate pretraining is visualized in Figures <ref> and <ref>. ImageNet pretraining leads to the use of larger and meaningful parts of the face for prediction for the GoogleNet, while the randomly initialized model picks out single characteristics during training which correlate the most with the target class. This includes eyebrows and lips defining female faces and nose, chin and uncovered ears for men for gender recognition. We see comparable results for the VGG-16 on age group estimation when comparing pretraining on ImageNet and IMDB-WIKI. The model initialized with IMDB-WIKI weights, with the pretraining task being age estimation on 101 age categories, concentrates more on the facial features themselves, while the ImageNet-initialized one is more prone to distraction from background elements and clothing items. Facial features seen in examples of opposing classes of the respectively weaker models in both figures – independent of the ensemble of facial features – lead to less certain, noisy decisions. For the problem of gender recognition, the VGG-16 is affected less by weight initialization than by the quality of data preprocessing. Here, IMDB-WIKI pretraining might have only a diminished effect because, firstly, the ImageNet weights already provide a good set of starting weights and, secondly, the pretraining objective (age recognition) is orthogonal to the task of gender recognition. In fact, unlike for age recognition, the VGG-16 models initialized with ImageNet weights converged to better parameters than their counterparts. Figure <ref> reports the prediction performances of the CaffeNet, the GoogleNet and the VGG-16 model in all evaluated settings, averaged over the five splits of the Adience data set. The recorded model scores over time illustrate that suitably initializing a model largely outweighs the problems introduced with artefactual data in our experiments. Next to the overall better model obtained after convergence, we also observe a considerably faster increase in the learning progress early in training.
In this paper we opened the black-box classifier using Layer-wise Relevance Propagation and investigated which facial features are actually used for age and gender prediction. We compared different image preprocessing, model initialization and architecture choices on the challenging Adience dataset and discussed how they affect performance. By using LRP to visualize the models' interactions with the given input samples, we demonstrate that appropriate model initialization via pretraining counteracts overfitting, leading to a holistic perception of the input. With a combination of simple preprocessing steps, we achieve state of the art performance for gender classification on the Adience benchmark data set.
Efrat Bank, University of Michigan, Mathematics Department, 530 Church Street, Ann Arbor, MI 48109-1043, United States, [email protected]
Tyler Foster, L'Institut des Hautes Études Scientifiques, Le Bois-Marie, 35 route de Chartres, 91440 Bures-sur-Yvette, France, [email protected]
Correlations between primes in short intervals on curves over finite fields
Efrat Bank and Tyler Foster
December 30, 2023
============================================================================
We prove an analogue of the Hardy-Littlewood conjecture on the asymptotic distribution of prime constellations in the setting of short intervals in function fields of smooth projective curves over finite fields. Introduction In recent work <cit.>, Bary-Soroker and the first author use their work with Rosenzweig <cit.> on the asymptotic distribution of primes inside short intervals in _q[t] to establish a natural counterpart, over the field _q(t), to the still unsolved Hardy-Littlewood conjecture. In <cit.>, the two present authors show how to extend the results of <cit.> to give asymptotic distributions of primes in short intervals on the complement of a very ample divisor in a smooth, geometrically irreducible projective curve C over a finite field _q. In the present paper, we apply ideas from <cit.> and <cit.> to prove a natural counterpart to the Hardy-Littlewood conjecture on the complement of a very ample divisor in a smooth, geometrically irreducible projective curve C over a finite field _q. The Hardy-Littlewood conjectures for short intervals Let K be a number field, 𝒪_K its ring of integers, and N(h) := #(𝒪_K/(h)) the norm on elements h∈𝒪_K. Given an n-tuple σ=(σ_1,...,σ_n) of elements in 𝒪_K, denote by π_K,σ(x) the n-tuple principal prime counting function π_K,σ(x) := #{ h∈𝒪_K : 2<N(h)≤ x, and each (h+σ_i)⊂𝒪_K is prime}. The Hardy-Littlewood n-tuple conjecture <cit.> (henceforth the HL conjecture) asserts that: The function π_ℚ,σ satisfies the asymptotic formula π_ℚ,σ(x) ∼ S(σ) x/(log x)^n, x→∞, for a positive constant S(σ). Despite much numerical verification and many proof attempts, the HL conjecture remains open. Results toward the conjecture include the work of Goldston, Pintz and Yildirim <cit.>, Zhang <cit.> and Maynard <cit.>.
See Granville <cit.> for a partial history of the problem. One can also formulate short interval variants of the HL conjecture. As in <cit.>, the ambiguity in defining subsets of 𝒪_K in terms of the norm on K leads to at least two possible formulations. Fix an n-tuple σ=(σ_1,...,σ_n) of elements in 𝒪_K. Then each real number 1>ε>0 determines a family of intervals I(x,ε) := [x-x^ε, x+x^ε]⊂ℝ, with corresponding prime counting function π_K,σ(I(x,ε)) := #{ h∈𝒪_K : N_K(h)∈ I(x,ε), and each (h+σ_i)⊂𝒪_K is prime}. On the other hand, if S is the set of archimedean places of K and ε_S=(ε_𝔭)_𝔭∈ S is an #S-tuple of real numbers 1>ε_𝔭>0, then for each b∈𝒪_K, we can define I(b,ε_S) := { a∈𝒪_K : |a-b|_𝔭≤ |b|_𝔭^ε_𝔭 for all 𝔭∈ S }, with corresponding prime counting function π_K,σ(I(b,ε_S)) = #{a∈ I(b,ε_S) : each (a+σ_i)⊂𝒪_K is prime}. (i). The function π_K,σ(I(x,ϵ)) satisfies π_K,σ(I(x,ϵ)) ∼ S(σ)#I(x,ϵ)/(log x)^n, x→∞, where S(σ) is some positive constant depending on σ and on the class number of K. (ii). The function π_K,σ(I(b,ε_S)) satisfies π_K,σ(I(b,ε_S)) ∼ S(σ)#I(b,ε_S)/(log N_K(b))^n, N_K(b)→∞, where S(σ) is some positive constant depending on σ and on the class number of K. Gross and Smith <cit.> present numeric evidence for Conjecture <ref>.(i) for certain number fields and reasonable sets, which are regions that may be interpreted as short intervals. Based on this, they present a conjecture similar to Conjecture <ref>.(i) above. Notice that when K=ℚ, then S={∞} and Conjectures <ref>.(i) and (ii) coincide. Natural analogues of Conjectures <ref> and <ref> also hold when we replace ℚ with the field of rational functions _q(t) over a finite field _q in the large q limit. For f∈_q[t] monic and ε>0 a real number, define I(f,ε) := {h∈_q[t] : | h-f| ≤ | f |^ε}, with |f| := q^deg f. For n-tuples 𝐟=(f_1,...,f_n) of distinct polynomials f_i in _q[t], define π_q,𝐟(k) := #{ h∈_q[t] : h monic of degree k, and each h+f_i is a prime polynomial}; π_q,𝐟(I(f,ε)) := #{h∈ I(f,ε) : each h+f_i is a prime polynomial}. Fix an integer B>n. Then: (i) (Pollack <cit.>, Bary-Soroker <cit.>, Carmon <cit.>). The function π_q,𝐟(k) satisfies π_q,𝐟(k) = q^k/k^n(1 + O_B(q^-1/2)) uniformly in f_1,...,f_n with degrees bounded by B, as q→∞. (ii) (Bank & Bary-Soroker <cit.>). The function π_q,𝐟(I(f_0,ε)) satisfies π_q,𝐟(I(f_0,ε)) = # I(f_0,ε)/∏_i=1^n deg(f_0+f_i) ( 1 + O_B(q^-1/2)) uniformly for all f_1,…,f_n of degree at most B, and for all monic polynomials f_0 satisfying 2/ε < deg f_0 < B, as q→∞ through odd prime powers. Main results: Correlation of primes in short intervals on curves Let C be a smooth projective geometrically irreducible curve over a finite field _q. Fix an effective divisor E=m_1𝔭_1+⋯ +m_s𝔭_s on C. Let σ := (σ_1,...,σ_n) be an n-tuple of rational functions σ_i on C, regular on C\ E := C\supp(E). Define an n-tuple principal prime counting function π_C,σ(E) := #{ h∈ K(C) : div(h)_-=E, and each function h+σ_1, …, h+σ_n generates a prime ideal in the ring of regular functions on C\ E }. The condition that div(h)_-=E should be thought of as analogous to the condition deg h=k in (<ref>). In the special case where C=^1, the ring of rational functions on C regular away from the point ∞∈^1 is the polynomial ring _q[t]. Thus when E=∞, the function π_C,σ defined in (<ref>) reduces to the function π_q,𝐟 defined in (<ref>). The function π_C,σ(E) satisfies π_C,σ(E) ∼ S(σ) q^deg E/(deg E)^n, q →∞, for a positive constant S(σ) depending on σ, uniformly in σ_1,...,σ_n with bounded degree. Given a regular function f on C\ E, the interval (of size E around f) is the set I(f,E) := { h regular on C\ E such that ν_𝔭_i(h-f)≥ -m_i for all 1≤ i≤ s } = f+H^0(C,𝒪(E)).
The interval I(f,E) is a short interval if the order of the pole of f at each 𝔭_i is at least m_i, and strictly greater than m_i for at least one 𝔭_i. Define π_C,σ(I(f,E)) := #{ h∈ I(f,E) such that h+σ_1,...,h+σ_n generate prime ideals in the ring of regular functions on C\ E }. Our main result is an analogue of Conjecture <ref> that extends Theorem <ref>.(ii) to curves of arbitrary genus over _q. Fix a smooth projective geometrically irreducible curve C over _q. Let E=m_1𝔭_1+⋯+m_r𝔭_r be an effective divisor on C, and let f_0,σ_1,...,σ_n be distinct regular functions on C\ E satisfying -ν_𝔭(f_0)> m_𝔭 and ν_𝔭(f_0) ≠ν_𝔭(σ_i) for each 1≤ i≤ n. Fix an integer B>n. If char _q ≠ 2 and E≥ 3E_0 for some effective divisor E_0 on C with deg E_0≥2g+1, then the asymptotic formula π_C,σ(I(f_0,E)) = #I(f_0,E)/∏_i=1^n deg(f_0+σ_i) (1+O_B(q^-1/2)) holds uniformly for all E and f_0,σ_1,…,σ_n as above satisfying deg(div(f_0+σ_i)|_E)<B, and as q→∞ through odd prime powers. Outline of paper Our strategy in proving Theorem <ref> is similar in spirit to <cit.>. In <ref> we briefly review the necessary divisor theory and show that the splitting fields of distinct linear functions, evaluated at the generic element ℱ_A of a short interval, are linearly disjoint. We use this to show that the Galois group of the product of several linear functions evaluated at ℱ_A is isomorphic to a direct product of symmetric groups. In <ref> we use a Chebotarev-type density theorem to estimate π_C,σ(I(f_0,E)). In <ref>, we prove the main Theorem <ref> as a special case of the more general Theorem <ref>, which deals with general factorization types. Acknowledgments The authors would like to thank Lior Bary-Soroker for suggesting a version of this problem. We would like to thank Jeff Lagarias and Mike Zieve for many helpful conversations. We also extend a warm thank you to Jordan Ellenberg, Alexei Entin, and Zeev Rudnick for helpful perspectives. The research that led to this paper was conducted while the first author was at the University of Michigan and while the second author was a visiting researcher at L'Institut des Hautes Études Scientifiques and at L'Institut Henri Poincaré. The first author thanks the AMS-Simons Travel Grant for supporting her visit to IHES. The second author thanks Le Laboratoire d'Excellence CARMIN for their financial support. Galois group calculation Relevant background We make use of the theory of divisors on algebraic varieties. Necessary background appears in <cit.>, with further details in <cit.>. For each point 𝔭∈ C, let ν_𝔭 denote valuation at 𝔭, applied to both functions and divisors on C. For the remainder of the present <ref>, we fix an effective very ample divisor E on C and a function f_0 regular on C\ E satisfying -ν_𝔭(f_0) > ν_𝔭(E) for all 𝔭∈supp(E). We require a somewhat stronger positivity condition on E than "effective and very ample." Namely, assume that there exists a divisor E_0 on C such that E ≥ 3E_0. Observe however that if E is effective and very ample but fails to satisfy (<ref>), then we can replace E by 3E to achieve (<ref>). Let σ=(σ_1,...,σ_n) be an n-tuple of distinct rational functions on C, each regular on C\ E, and each satisfying the inequalities ν_𝔭(σ_i) ≠ ν_𝔭(f_0) and -ν_𝔭(σ_i) > ν_𝔭(E) for all 𝔭∈supp(E). For each σ_i, define the monic linear polynomial L_i(X) := σ_i+X in _q(C)[X]. Let R denote the ring of regular functions on C\ E. Fix a basis {1,g_1,...,g_m} of H^0(C,𝒪(E)) once and for all, let _q(A)=_q(A_0, …
𝔽_q(𝐀) = 𝔽_q(A_0,...,A_m) denote the field of rational functions in m+1 variables, and define 𝔸^{m+1} := Spec 𝔽_q[𝐀] = Spec 𝔽_q[A_0,...,A_m]. On the trivial family of curves (C∖E)×_{𝔽_q}𝔸^{m+1} = Spec R[𝐀] = Spec(R⊗_{𝔽_q}𝔽_q[𝐀]), we have the regular function

ℱ_𝐀 := f_0 + A_0 + ∑_{j=1}^{m} A_j g_j.

For each 𝔽_q-rational point a∈𝔸^{m+1}, the restriction of ℱ_𝐀 to (C∖E)×_{𝔽_q}Spec κ(a) defines a regular function ℱ_a on C∖E.

Let K/𝔽_q(𝐀) be an algebraic extension. For an ideal ℑ in the Dedekind domain K⊗_{𝔽_q(𝐀)}R(𝐀), let

ℑ = 𝔓_1^{e_1}⋯𝔓_ℓ^{e_ℓ}

denote the prime decomposition of ℑ.

(i) If e_1=⋯=e_ℓ=1 in (<ref>), then we define the splitting field of ℑ over K, denoted split(ℑ) or split(ℑ/K), to be the composite

split(ℑ) := split(𝔓_1)⋯split(𝔓_ℓ),

where for each 1≤i≤ℓ, split(𝔓_i) denotes the normal closure of κ(𝔓_i) in a fixed algebraic closure of 𝔽_q(𝐀).

(ii) If each extension κ(𝔓_i)/K is separable, then the composite extension split(ℑ)/K is normal, and we define the Galois group of ℑ to be Gal(ℑ/K) := Gal(split(ℑ)/K).

Linear disjointness of splitting fields

For each 1≤i≤n, the rational function L_i(ℱ_𝐀) on C_{𝔽_q(𝐀)} determines a morphism L_i(ℱ_𝐀): C_{𝔽_q(𝐀)} ⟶ ℙ^1_{𝔽_q(𝐀)} and a field extension 𝔽_q(𝐀) ↪ 𝔽_q(C)(A_0,…,A_m). Likewise, if we define 𝔽_q(𝐀') := 𝔽_q(A_1,...,A_m), then the rational function

Ψ_i := σ_i + f_0 + ∑_{j=1}^{m} A_j g_j = L_i(ℱ_𝐀) − A_0

on C_{𝔽_q(𝐀')} provides a morphism Ψ_i: C_{𝔽_q(𝐀')} ⟶ ℙ^1_{𝔽_q(𝐀')} and field extension 𝔽_q(𝐀')(t) ↪ 𝔽_q(C)(A_1,…,A_m).

The field extension (<ref>) is separable.

The assumption (<ref>) implies that for each 𝔭∈supp(E), we have ν_𝔭(σ_i+f_0) = min{ ν_𝔭(σ_i), ν_𝔭(f_0) } < −ν_𝔭(E). This makes I(σ_i+f_0,E) a short interval in the sense of <cit.>. Thus by <cit.>, the variety V(L_i(ℱ_𝐀)) ⊂ (C∖E)_{𝔽_q(𝐀)} consists of a single point 𝔓_i such that the field extension 𝔽_q(𝐀) ↪ κ(𝔓_i) is separable. Under the identification

𝔽_q(𝐀) ≅ 𝔽_q(𝐀')(t), A_0 ⟼ −t,

the extensions (<ref>) and (<ref>) become isomorphic. Thus (<ref>) is separable.

Let 𝔓_i be the underlying point of V(L_i(ℱ_𝐀)) as in the proof of Lemma <ref>. By Lemma <ref>, we can consider the splitting field

split(L_i(ℱ_𝐀)) := split(κ(𝔓_i)/𝔽_q(𝐀))

obtained as the normal closure of the extension (<ref>), as defined in <cit.>. Let 𝔡_i denote the discriminant of the extension split(L_i(ℱ_𝐀))/𝔽_q(𝐀), as defined in <cit.> for instance. The discriminant is a fractional ideal in 𝔽_q(𝐀) that restricts to an actual ideal in 𝔽_q(𝐀')[A_0]. Because 𝔽_q(𝐀')[A_0] is a principal ideal domain, there exists a function D_i∈𝔽_q(𝐀')[A_0], well defined up to multiplication by elements in 𝔽_q(𝐀')^×, such that 𝔡_i = (D_i) in 𝔽_q(𝐀')[A_0]. Via the identification (<ref>), <cit.> implies that the map Ψ_i is ramified at a point x in C_{𝔽_q(𝐀')}∖E_{𝔽_q(𝐀')} if and only if D_i vanishes at the point Ψ_i(x) inside Spec 𝔽_q(𝐀')[A_0] ⊂ ℙ^1_{𝔽_q(𝐀')}.

Let Ω^1_C denote the sheaf of Kähler differentials on C. For each 1≤i≤n, the functions Ψ_i and L_i(ℱ_𝐀) are regular on (C∖E)_{𝔽_q(𝐀')} and (C∖E)_{𝔽_q(𝐀)}, respectively. Let dΨ_i and dL_i(ℱ_𝐀) denote their differentials, and note that dΨ_i = dL_i(ℱ_𝐀) on C_{𝔽_q(𝐀)}∖E_{𝔽_q(𝐀)}. Given a field extension K/𝔽_q, a section ω∈Γ(C_K∖E_K, Ω^1_{C_K}), and any point x∈C_K∖E_K, let ω|_x denote the restriction of ω to the fiber (Ω^1_{C_K})_x.

In the notation of Remark <ref>, D_1 and D_2 are relatively prime in the polynomial ring 𝔽_q(𝐀')[A_0] if and only if the system of equations

dΨ_1|_x = 0; dΨ_2|_y = 0; Ψ_1(x) = Ψ_2(y)

has no solution in pairs of (not necessarily distinct) points x,y∈(C∖E)_{𝔽_q(𝐀')}. Because Ψ_i^{−1}(∞) = supp(E_{𝔽_q(𝐀')}), Remark <ref> implies that D_1 and D_2 are relatively prime if and only if the branch divisors of Ψ_1 and Ψ_2 are disjoint in Spec 𝔽_q(𝐀')[A_0].
Recall that the support of the branch divisor of Ψ_i is the set of all 𝔭∈ℙ^1_{𝔽_q(𝐀')} such that 𝔭 = Ψ_i(x) and dΨ_i|_x = 0 for some x∈C_{𝔽_q(𝐀')}. Thus a point 𝔭 of Spec 𝔽_q(𝐀')[A_0] in the support of both branch divisors of Ψ_1 and Ψ_2 is any point satisfying Ψ_1(x) = 𝔭 = Ψ_2(y) and dΨ_1|_x = 0 = dΨ_2|_y for some (not necessarily distinct) pair of points x,y∈C_{𝔽_q(𝐀')}∖E_{𝔽_q(𝐀')}.

If char 𝔽_q ≠ 2, then the system of equations (<ref>) has no solution in points x,y∈C_{𝔽_q(𝐀')}∖E_{𝔽_q(𝐀')}.

Since Ψ_i = L_i(ℱ_𝐀) − A_0, it suffices to prove that the system of equations

dL_1(ℱ_𝐀)|_x = 0; dL_2(ℱ_𝐀)|_y = 0; L_1(ℱ_𝐀)(x) = L_2(ℱ_𝐀)(y)

has no solutions over 𝔽_q(𝐀). As in the proof of <cit.>, choose an effective very ample divisor E_0 satisfying (<ref>), define m_0 := dim_{𝔽_q} H^0(C,𝒪(E_0)) − 1, and let C ⊂ ℙ^{m_0} be the closed embedding determined by E_0. Assume first that x ≠ y. For each pair of distinct points ξ,η∈C_{𝔽̄_q}∖E_{𝔽̄_q}, let t be a linear form on ℙ^{m_0}_{𝔽̄_q} such that t(ξ) ≠ t(η). The assumption on E_0 gives us a new 𝔽̄_q-linear basis

{1, t, t^2, …, t^ℓ, g_{ℓ+1}, …, g_m} of H^0(C_{𝔽̄_q}, 𝒪(E_{𝔽̄_q})),

and a new coordinate system A_0, B_1, …, B_m on 𝔸^{m+1}_{𝔽̄_q} such that

L_i(ℱ_𝐀) = σ_i + f_0 + A_0 + B_1 t + ⋯ + B_ℓ t^ℓ + B_{ℓ+1} g_{ℓ+1} + ⋯ + B_m g_m

for each i=1,2. For i=1,2, define Φ_i := L_i(ℱ_𝐀) − B_1 t − B_2 t^2. As in the proof of <cit.>, we can choose Zariski opens U_{ξη} that provide a covering of

((C_{𝔽̄_q}∖E_{𝔽̄_q}) ×_{𝔽̄_q} (C_{𝔽̄_q}∖E_{𝔽̄_q})) ∖ {diagonal},

such that an 𝔽_q(𝐀)-valued solution (u,v)∈U_{ξη} to (<ref>) is the same thing as a solution to the single equation

det(u,v) := det [ 1, 2t(u), φ_1(u) ; 1, 2t(v), φ_2(v) ; t(v)−t(u), t(v)^2−t(u)^2, c(u,v) ] = 0,

where φ_i := −dΦ_i/dt for i=1,2, and where c(u,v) := Φ_1(u) − Φ_2(v). A direct calculation gives

det(u,v) = (t(v)−t(u)) (2c(u,v) + (t(u)−t(v))(φ_1(u)+φ_2(v))).

By the same reasoning as in <cit.>, it suffices to prove that det(u,v) cannot be 0 identically on U_{ξη}. If ℓ ≥ 3, then the coefficient of B_3 in 2c(u,v) + (t(u)−t(v))(φ_1(u)+φ_2(v)) is 2(t(u)^3−t(v)^3) + (t(u)−t(v))·3(t(u)^2+t(v)^2), which is not identically 0. Since t(u) ≠ t(v), this implies that det(u,v) is not identically zero on U_{ξη}.

Assume next that x=y. If the pair x=y∈C_{𝔽_q(𝐀)}∖E_{𝔽_q(𝐀)} is a solution to (<ref>), then L_1(ℱ_𝐀)(x) = L_2(ℱ_𝐀)(x), implying that (σ_1−σ_2)(x) = 0. Because σ_1 and σ_2 are distinct regular functions on C_{𝔽̄_q}∖E_{𝔽̄_q}, this implies that x∈C_{𝔽̄_q}∖E_{𝔽̄_q}. This allows us to choose a linear form t on ℙ^{m_0}_{𝔽̄_q} such that t(x) ≠ 0. Using this linear form t, choose an 𝔽̄_q-linear basis (<ref>). This choice lets us write L_i(ℱ_𝐀) as in (<ref>). The solution x satisfies dL_1(ℱ_𝐀)|_x = 0. But our choice of t lets us write

0 = dL_1(ℱ_𝐀)|_x = (dσ_1/dt)(x) + (df_0/dt)(x) + B_1 + ∑_{j=2}^{ℓ} j B_j t^{j−1}(x) + ∑_{j=ℓ+1}^{m} B_j (dg_j/dt)(x),

and so

−B_1 = (dσ_1/dt)(x) + (df_0/dt)(x) + ∑_{j=2}^{ℓ} j B_j t^{j−1}(x) + ∑_{j=ℓ+1}^{m} B_j (dg_j/dt)(x).

However, the right hand side of this last equation does not involve B_1, contradicting the fact that x∈C_{𝔽̄_q}∖E_{𝔽̄_q}.

If char 𝔽_q ≠ 2, then the splitting fields split(L_i(ℱ_𝐀)), for 1≤i≤n, are linearly disjoint over 𝔽_q(𝐀).

As noted in the proof of Lemma <ref>, for each 𝔭∈supp(E) and for each 1≤i≤n, the conditions (<ref>) imply that the inequality (<ref>) holds. Thus the hypotheses of <cit.> are satisfied and we have Gal(L_i(ℱ_𝐀), 𝔽̄_q(𝐀)) ≅ S_{k_i} with

k_i := deg div(f_0+σ_i)|_{C∖E} = deg div(L_i(f_0))|_{C∖E}.

Because we have inclusions Gal(L_i(ℱ_𝐀), 𝔽̄_q(𝐀)) ⊆ Gal(L_i(ℱ_𝐀), 𝔽_q(𝐀)) ↪ S_{k_i}, this implies Gal(L_i(ℱ_𝐀), 𝔽_q(𝐀)) ≅ S_{k_i}. Let A_{k_i} ⊂ S_{k_i} denote the alternating group on k_i letters. Its fixed field is the quadratic extension

split(L_i(ℱ_𝐀))^{A_{k_i}} ≅ 𝔽_q(𝐀)(√(D_i)),

where D_i∈𝔽_q(𝐀')[A_0] is an element generating the discriminant ideal 𝔡_i as in Remark <ref>.
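As a brief aside (ours, not part of the original argument) before the proof of the proposition resumes below: the determinant expansion quoted above is a polynomial identity in the quantities t(u), t(v), φ_1(u), φ_2(v) and c(u,v), and can be checked symbolically by treating these five quantities as independent symbols.

```python
# Symbolic check of the identity
# det(u,v) = (t(v)-t(u)) (2c(u,v) + (t(u)-t(v))(phi_1(u)+phi_2(v))),
# with a = t(u), b = t(v), p1 = phi_1(u), p2 = phi_2(v), c = c(u,v).
from sympy import Matrix, symbols, expand

a, b, p1, p2, c = symbols('a b p1 p2 c')

M = Matrix([[1,     2*a,          p1],
            [1,     2*b,          p2],
            [b - a, b**2 - a**2,  c ]])

claimed = (b - a) * (2*c + (a - b) * (p1 + p2))
assert expand(M.det() - claimed) == 0
print("determinant expansion verified")
```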
By <cit.>, it suffices to prove that the fields 𝔽_q(𝐀)(√(D_i)), for 1≤i≤n, are linearly disjoint. Without loss of generality, consider i=1,2. By Lemma <ref> and Proposition <ref>, the elements D_1,D_2∈𝔽_q(𝐀')[A_0] have no common prime factors. Likewise, <cit.> implies that both D_1 and D_2 are square free. Because the fields 𝔽_q(𝐀)(√(D_1)) and 𝔽_q(𝐀)(√(D_2)) are degree-2 extensions of 𝔽_q(𝐀), their intersection 𝔽_q(𝐀)(√(D_1)) ∩ 𝔽_q(𝐀)(√(D_2)) is either 𝔽_q(𝐀) itself, or else the two field extensions coincide: 𝔽_q(𝐀)(√(D_1)) = 𝔽_q(𝐀)(√(D_2)). The latter is the case if and only if the product D_1D_2∈𝔽_q(𝐀')[A_0] contains the square of a prime factor, contradicting the fact that D_1 and D_2 are square free and relatively prime in 𝔽_q(𝐀')[A_0].

Combining the results above, we have the following: For each algebraic extension K/𝔽_q, we have natural group isomorphisms

Gal( ∏_{i=1}^{n} L_i(ℱ_𝐀), K(𝐀) ) ≅ ∏_{i=1}^{n} Gal( L_i(ℱ_𝐀), K(𝐀) ) ≅ S_{k_1}×⋯×S_{k_n}. □

Counting argument and proof of Theorem <ref>

In this section we prove Theorem <ref> using the counting Proposition <ref>. The latter provides an asymptotic formula for the number of 𝔽_q-valued points 𝐚∈𝔸^{m+1}(𝔽_q) for which each of the elements L_1(ℱ_𝐚),...,L_n(ℱ_𝐚) generates a prime ideal in R. The formulation of Proposition <ref> should be viewed as an explicit Chebotarev theorem. Our proof makes use of <cit.> and <cit.>.

Factorization types and general counting argument

Suppose given an 𝔽_q-rational point 𝐚∈𝔸^{m+1}(𝔽_q). If R/(L_i(ℱ_𝐚)) is a separable 𝔽_q-algebra, then because R is a Dedekind domain, the ideal (L_i(ℱ_𝐚)) ⊂ R admits a prime factorization

(L_i(ℱ_𝐚)) = 𝔣_{i1}⋯𝔣_{iℓ_i},

such that each residue field κ(𝔣_{ij}) = R/(𝔣_{ij}) is a separable extension of 𝔽_q. The conditions on f_0 and σ_i guarantee that in this case

k_i = deg div(L_i(ℱ_𝐚))|_{C∖E} = deg div(𝔣_{i1})|_{C∖E} + ⋯ + deg div(𝔣_{iℓ_i})|_{C∖E}.

Given an 𝔽_q-rational point 𝐚∈𝔸^{m+1}(𝔽_q), if R/(L_i(ℱ_𝐚)) is a separable 𝔽_q-algebra, the factorization type λ_{i,𝐚} of L_i(ℱ_𝐚) is the partition of k_i given in (<ref>). The n-tuple factorization type counting function of ℒ=(L_1,...,L_n) for a fixed n-tuple λ=(λ_1,...,λ_n), where each λ_i is a partition of k_i = deg div(L_i(f_0))|_{C∖E}, is the assignment π_{C,ℒ}(−;λ) taking the short interval I(f_0,E) to the value

π_{C,ℒ}(I(f_0,E);λ) := #{ 𝐚∈𝔸^{m+1}(𝔽_q) : R/(L_i(ℱ_𝐚)) is separable and λ_{i,𝐚} = λ_i for each 1≤i≤n }.

Given a positive integer N and a permutation τ∈S_N, the partition type of τ, denoted λ_τ, is the partition of N determined by the cycle decomposition of τ. Having fixed a subgroup G⊆S_N, for each partition λ of N we define

P(λ) := #{ τ∈G : λ_τ=λ } / |G|.

In other words, P(λ) is the probability that a given permutation in G has partition type λ. Given two positive integers N_1 and N_2, subgroups G_1⊆S_{N_1} and G_2⊆S_{N_2}, and partitions λ_1 of N_1 and λ_2 of N_2, we write P(λ_1) and P(λ_2) for the respective probabilities, without explicit reference to the groups G_1 and G_2. Define

ℒ(ℱ_𝐀) := L_1(ℱ_𝐀)⋯L_n(ℱ_𝐀) ∈ R[𝐀].

Let G = Gal(ℒ(ℱ_𝐀), 𝔽_q(𝐀)), and fix a partition λ=(λ_1,...,λ_n) of

N := deg div(L_1(f_0)⋯L_n(f_0))|_{C∖E}.

Then under the assumptions in <ref>, there exists a constant c=c(B) such that

| π_{C,ℒ(ℱ_𝐀)}(I(f_0,E);λ) − P(λ) q^{m+1} | ≤ c q^{m+1/2}.

Consider the 𝔽_q-scheme V(ℒ(ℱ_𝐀)). By <cit.>, for each 1≤i≤n there exist an affine open U_i = Spec 𝒜_i ⊂ 𝔸^{m+1} and a monic separable polynomial 𝒢_i(t)∈𝒜_i[t] such that

V(L_i(ℱ_𝐀))_{U_i} ≅ Spec 𝒜_i[t]/(𝒢_i(t)).

Hence there exists an affine subscheme U = Spec 𝒜 ⊂ ⋂_{i=1}^{n} U_i such that

V(𝒢(t))_U ≅ V(ℒ(ℱ_𝐀))_U, where 𝒢(t) := 𝒢_1(t)⋯𝒢_n(t).

Let U = Spec 𝒜 and 𝒢(t)∈𝒜[t] be as in Remark <ref>. Then there exists some element a∈𝔽_q(𝐀)^× such that a𝒢(t)∈𝔽_q[𝐀][t].
Since V(𝒢(t)) ≅ V(ℒ(ℱ_𝐀)) over U, we have that Gal(a𝒢(t), 𝔽_q(𝐀)) ≅ G. Thus by <cit.>, Proposition <ref> holds with a𝒢(t) in place of ℒ(ℱ_𝐀) for some constant c_1(B). Interpreting the closed complement Z := 𝔸^{m+1}∖U as an m-cycle in 𝔸^{m+1}, <cit.> implies that there exists some constant c_2(B) such that #Z(𝔽_q) ≤ c_2(B) q^m. Finally,

| π_{C,ℒ(ℱ_𝐀)}(I(f_0,E);λ) − P(λ) q^{m+1} | ≤ c_1(B) q^{m+1/2} + c_2(B) q^m ≤ c(B) q^{m+1/2},

where c(B) = c_1(B) + c_2(B).

Proof of Theorem <ref>

We obtain Theorem <ref> as a specific case of the following more general Theorem <ref>, which deals with arbitrary factorization types.

In the conditions and notation of Theorem <ref>, let λ be a partition as in Proposition <ref>. Then

π_{C,ℒ(ℱ_𝐀)}(I(f_0,E);λ) = P(λ_1)⋯P(λ_n) #I(f_0,E) (1 + O_B(q^{−1/2})).

By Proposition <ref>, Gal(ℒ(ℱ_𝐀), 𝔽_q(𝐀)) ≅ S_{k_1}×⋯×S_{k_n}. Since P(λ) = P(λ_1)⋯P(λ_n) and #I(f_0,E) = q^{m+1}, Proposition <ref> gives

π_{C,ℒ(ℱ_𝐀)}(I(f_0,E);λ) = P(λ_1)⋯P(λ_n) q^{m+1} + O_B(q^{m+1/2}) = P(λ_1)⋯P(λ_n) #I(f_0,E) (1 + O_B(q^{−1/2})),

as desired.

In Theorem <ref>, take each λ_i to be the partition of k_i into a single cell. Then P(λ_i) = 1/k_i = 1/deg div(f_0+σ_i)|_{C∖E}, and so

π_{C,ℒ(ℱ_𝐀)}(I(f_0,E);λ) = P(λ_1)⋯P(λ_n) q^{m+1} + O_B(q^{m+1/2}) = #I(f_0,E) / ∏_{i=1}^{n} deg div(f_0+σ_i)|_{C∖E} · (1 + O_B(q^{−1/2})),

as desired.

In the formulation of the Hardy-Littlewood Conjecture <ref>, one can consider polynomial functions more general than the monic linear functions L_i(X) = σ_i + X ∈ 𝒪_K[X]. Some of the resulting variant forms of Conjecture <ref> are well known conjectures or results in their own right. For example, Conjecture <ref> becomes the quantitative Goldbach conjecture if we set n=2 with L_1(X) = X and L_2(X) = σ − X, for σ∈𝒪_K. If one takes L_1(X) = X and L_2(X) = σ_0 + σ_1 X for σ_0,σ_1∈𝒪_K, then Conjecture <ref> provides an asymptotic count of primes in an arithmetic progression. Over 𝔽_q(t), Bary-Soroker and the first author establish a version of Theorem <ref> for non-monic linear functions σ_{i1}X + σ_{i0} ∈ 𝔽_q[t][X]. We expect that this and other interesting and important variants of Theorem <ref> hold on curves of higher genus over 𝔽_q. We plan to investigate these questions in future work.
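To make the quantitative Goldbach variant just mentioned concrete over 𝔽_q(t), here is a small brute-force sketch in the same spirit as the earlier one (again our own illustration, not from the paper): with L_1(X)=X and L_2(X)=σ−X, we count monic h of degree k for which both h and σ−h generate prime ideals of 𝔽_q[t]. The choices q=5, k=3 and σ=2t^3 are arbitrary; σ is taken with leading coefficient 2 so that σ−h is again monic of degree k.

```python
# Goldbach-type count over F_q[t]: L_1(X) = X, L_2(X) = sigma - X.
# Illustrative parameters: q = 5, k = 3, sigma = 2 t^3.
from itertools import product
from sympy import Poly, Symbol

t = Symbol('t')
q, k = 5, 3
sigma = Poly(2 * t**3, t, modulus=q)

count = 0
for coeffs in product(range(q), repeat=k):
    h = Poly([1, *coeffs], t, modulus=q)   # monic of degree k
    g = sigma - h                          # also monic of degree k here
    if h.is_irreducible and g.is_irreducible:
        count += 1

print(f"#(h : h and sigma - h both prime) = {count}, q^k/k^2 = {q**k / k**2:.1f}")
```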
http://arxiv.org/abs/1708.07491v1
{ "authors": [ "Efrat Bank", "Tyler Foster" ], "categories": [ "math.NT", "math.AG", "11R58, 14H05, 11R44, 11R45, 11N25" ], "primary_category": "math.NT", "published": "20170824165716", "title": "Correlations between primes in short intervals on curves over finite fields" }
Email: [email protected]
^1 Department of Physics, School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, 113-0033 Tokyo, Japan
^2 Nishina Center for Accelerator-Based Science, RIKEN, 2-1 Hirosawa, Wako, 351-0198 Saitama, Japan
^3 Department of Physics and the Joint Institute for Nuclear Astrophysics Center for the Evolution of the Elements, University of Notre Dame, Notre Dame, Indiana 46556, USA
^4 Department of Physics, Kyoto University, Kitashirakawa-Oiwakecho, Sakyo-ku, Kyoto, 606-8502 Kyoto, Japan
^5 GSI Helmholtzzentrum für Schwerionenforschung GmbH, Planckstrasse 1, D-64291 Darmstadt, Germany
^6 Department of Physics, Nara Women's University, Kita-Uoya Nishimachi, Nara, 630-8506 Nara, Japan
^7 Department of Life and Environmental Agricultural Sciences, Faculty of Agriculture, Tottori University, 4-101 Koyamacho-Minami, Tottori, 680-8551 Tottori, Japan
^8 Center for Nuclear Study, The University of Tokyo, 2-1 Hirosawa, Wako, 351-0198 Saitama, Japan
^9 Stefan Meyer Institute for Subatomic Physics, Austrian Academy of Sciences, Boltzmanngasse 3, A-1090 Vienna, Austria
piAF Collaboration

We observed the atomic 1s and 2p states of π^- bound to ^121 Sn nuclei as distinct peak structures in the missing mass spectra of the ^122 Sn(d,^3 He) nuclear reaction. A very intense deuteron beam and a spectrometer with a large angular acceptance let us achieve a high discovery potential, which includes the capability of determining the angle-dependent cross sections with high statistics. The 2p state in a Sn nucleus was observed for the first time. The binding energies and widths of the pionic states are determined and found to be consistent with previous experimental results of other Sn isotopes. The spectrum is measured at finite reaction angles for the first time. The formation cross sections at reaction angles between 0 and 2^∘ are determined. The observed reaction-angle dependence of each state is reproduced by theoretical calculations. However, the quantitative comparison with our high-precision data reveals a significant discrepancy between the measured and calculated formation cross sections of the pionic 1s state.

36.10.Gv, 11.30.Rd, 25.45.-z, 25.40.Ve

Spectroscopy of pionic atoms in ^122𝐒𝐧(𝐝,^3𝐇𝐞) reaction and angular dependence of the formation cross sections
K. Yoshida^2
December 30, 2023
=================================================================================================================

The spectroscopy of pionic atoms has contributed to the fundamental knowledge of the non-trivial structure of the vacuum in terms of chiral symmetry and quantum chromodynamics (QCD) in the low energy region <cit.>. The spatial overlaps between the pionic orbitals and the core nuclei peak near the half-density radii of the nuclei. The pions are excellent probes for the study of medium effects in nuclear matter. An order parameter of chiral symmetry <cit.>, the quark condensate expectation value q̅q, was deduced at nuclear density in investigations of the in-medium modification of the isovector π-nucleon strong interaction through the wave function renormalization <cit.>. The low energy pion-nucleus interaction is described by a phenomenological optical potential. Parameter sets of the potential were obtained by fitting many known pionic-atom and pion-nucleus-scattering data, including isotope shifts of pionic atoms with different neutron numbers <cit.>.
Among these parameters is an s-wave isovector parameter b_1, which is closely related to the order parameter of the chiral symmetry breaking in the nuclear medium, q̅q_ρ <cit.>. Low-lying pionic orbitals are located in a close vicinity of the core nuclei for Z ≳ 50 <cit.>. This localized distribution is due to the combined effect of the attractive Coulomb potential and the repulsive and absorptive strong-interaction potential. Determining the levels and widths of the bound states provides quantum-mechanical information that leads to constraints on the strong interaction. Previous experiments discovered methods of directly populating the low-lying orbitals, analyzed them spectroscopically <cit.>, and measured pionic states in Pb and Sn nuclei. The ratio of q̅q_ρ to the in-vacuum value q̅q_0 was evaluated to be q̅q_ρ/q̅q_0 ∼ 67% <cit.> based on the in-medium modification of the isovector interaction, which is in good agreement with chiral perturbation theories <cit.>. Measurements of the pionic atom formation cross sections at finite reaction angles provide unprecedented information. This contributes to a better understanding of the pionic-atom formation mechanisms and leads to higher accuracy in the deduction of the fundamental quantities. Experimental data at larger reaction angles open new prospects for the identification of the quantum numbers corresponding to the structures observed in the spectra. Previously, one had merely to rely on the comparison between the measured and theoretical spectral shapes <cit.>. Up to the present experiment, the formation cross sections were measured only in very limited solid angles around 0^∘. Meanwhile, theories predict a reaction-angle dependence mainly originating from the different momentum transfers in the (d,^3 He) reactions <cit.>. Only a small angular dependence is expected from the elementary π^- production cross sections of the n(d,^3 He)π^- reaction <cit.>. We conducted spectroscopic measurements of the missing mass of the ^122 Sn(d,^3 He) nuclear reactions at the RI Beam Factory RIBF, RIKEN <cit.> in October 2010. The missing mass of the ^122 Sn(d,^3 He) reaction near the pion emission threshold was measured for the first time. The experiment demonstrated the excellent performance of RIBF applied in an experimental program of high precision spectroscopy with the primary beam. We employed a deuteron beam with a typical intensity of 2 × 10^11/s accelerated by the cyclotron complex of AVF-RRC-SRC and focused onto the target location of the BigRIPS in-flight magnetic separator <cit.>. The beam was extracted in micro bunches at a frequency of 13.7 MHz and had a high duty factor. The beam energy has been determined to be T_d = 498.9 ± 0.2 MeV by NMR measurements in BigRIPS. The horizontal beam emittance and intrinsic momentum spread have been estimated to be ∼ 0.54 × 3.0 π mm·mrad (σ) and 0.04^+0.01_-0.02 %, respectively. Figure <ref> illustrates the schematic layout of the employed detectors and of BigRIPS used as a spectrometer. The acceptance aperture of ∼ 2^∘ was utilized to cover a reaction-angle range centered near 0^∘. The deuteron beam impinged on a 1 mm wide strip target of ^122 Sn at F0, which was isotopically enriched to 95.8% and had a thickness of 12.5 ± 0.5 mg/cm^2. The achieved mean luminosities of about 10^31 cm^-2 s^-1 in the present experiment were much higher than those of previous experiments at GSI <cit.>, despite the thinner target.
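The quoted mean luminosity can be checked with back-of-envelope arithmetic; the following sketch (ours) uses only the beam intensity and target thickness given above.

```python
# L = I_beam * N_target, with N_target the areal density of 122Sn nuclei.
N_A = 6.022e23        # Avogadro's number [1/mol]
I_beam = 2.0e11       # typical deuteron intensity [1/s]
thickness = 12.5e-3   # target thickness [g/cm^2]
A = 122.0             # molar mass of 122Sn [g/mol], approximated by the mass number

N_target = thickness * N_A / A     # ~6.2e19 nuclei per cm^2
L = I_beam * N_target              # ~1.2e31 cm^-2 s^-1, i.e. "about 10^31"
print(f"N_target = {N_target:.2e} cm^-2, L = {L:.2e} cm^-2 s^-1")
```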
We expected direct formation of pionic atoms coupled with neutron hole states in the nuclear reactions near the recoil-free kinematical condition. Relevant neutron hole states (excitation energies E_n(n'l'j')) in the ^121 Sn nucleus are 2d_3/2 (0.0 MeV), 3s_1/2 (0.06034 MeV), 2d_5/2 (1.1212 MeV), and 2d_5/2 (1.4035 MeV) <cit.>. The emitted ^3 He particles were momentum analyzed at F5 with a momentum dispersion of 60.64 ± 0.15 mm/%. We installed two sets of multi-wire drift chambers (MWDC) near the momentum dispersive focal plane at F5 and measured the tracks of the charged particles with a tracking resolution of ∼ 40 μm (σ). The ^3 He ions were identified by the time-of-flight of ∼ 174 ns between F5 and F7 measured by the scintillation counters. The counters had a distance of about 23 m. We achieved almost background-free ^3 He spectra (< 0.1% contamination). A typical trigger rate of the data acquisition was 200/s due to the relatively narrow coincidence gate between the F5 and F7 detectors. The deadtime of the data-acquisition system has been estimated to achieve an efficiency of ∼ 93%. The measured ^3 He tracks have been used to determine the ^3 He kinetic energies, the positions, and the angles at the target after detailed analyses and corrections of the optical transfer coefficients up to the fifth order aberrations <cit.>. Hereafter, measured ^3 He emission angles are treated as the reaction angle θ. The θ resolution has been estimated to be ∼ 0.3^∘ (σ). The mass of the reaction product, the pionic atom, has been determined by the measured ^3 He energy and θ. It has been related to the excitation energy E_ex relative to the ground-state mass of ^121 Sn (M(^121 Sn)). The corresponding relation at 0^∘ is given by the equation

E_ex = [M_mm − M(^121 Sn)]c^2 = m_{π^-}c^2 − B_{nl} + E_n(n'l'j'),

where M_mm is the mass of the reaction product, B_{nl} is the binding energy of the pion in a state characterized by the principal (n) and angular (l) quantum numbers, and m_{π^-} = 139.571 MeV/c^2 is the π^- rest mass. The ^3 He kinetic energy has been calibrated by using the two-body reaction p(d,^3 He)π^0. The target consists of a 100 ± 1 μm thick, 2 mm wide polyethylene strip. We performed a calibration measurement every two hours during the production runs. The calibration spectra have low-energy tails due to the large reaction angles. Therefore, we have examined the spectral response by Monte Carlo simulations including effects of matter, spectrometer acceptance, and detector performances. A fit has been conducted to reproduce the entire spectral shape in the plane of the measured positions and angles and relate them to the ^3 He energies. The systematic errors of the absolute E_ex values have been estimated to be ^+0.036_-0.033 MeV (σ). The p(d,^3 He)π^0 reaction has also been used to calibrate the effective beam intensity on the target. The reaction cross section has been estimated to be 7.6 μb/sr in the θ range of 0–0.5^∘, based on an extrapolation of the data at a slightly higher energy to ours <cit.>. For this we have used the measured beam energy dependence in Ref. <cit.>. After applying an acceptance correction of the spectrometer evaluated by a Monte Carlo simulation, we have estimated a systematic error for the absolute cross section scale of 30%. The experimental resolution has been estimated by the quadratic sum of contributions from the incident beam emittance and intrinsic momentum spread, the target thickness, and the optical aberrations of the spectrometer.
We have found a quadratic dependence of the resolution on E_ex due to combined effects of multiple scattering at a vacuum window at F5 and higher-order optical aberrations. This dependence has been estimated to be

R(E_ex) = √(R_min^2 + (0.122 × (E_ex − 139.799 MeV))^2)   (FWHM),

with the resolution minimum R_min = 0.42 MeV, which agrees with the measured spectral responses in the calibration reaction p(d,^3 He)π^0. Figure <ref> (top panel) displays the measured excitation spectrum for nearly the full acceptance of the spectrometer. The abscissa is the excitation energy and the ordinate is the double differential cross section of the ^122 Sn(d,^3 He) reaction. The π^- emission threshold is indicated by the vertical solid line at 139.571 MeV. On the left side of the spectrum, in the range of E_ex < 134 MeV, a linear background is observed for nuclear excitation without pion production. Above the emission threshold a continuum is observed due to quasi-free pion production. Three prominent peaks are observed below the pion emission threshold in the region of 134 MeV ≲ E_ex ≲ 139 MeV. The leftmost peak is due to the formation of a pionic 1s state mainly coupled with a neutron hole state of (3s_1/2)_n^-1. The middle peak contains contributions from the configurations (1s)_π(2d_5/2)_n^-1, (2p)_π(3s_1/2)_n^-1 and (2p)_π(2d_3/2)_n^-1. The peak on the right side originates mainly from the (2s)_π(3s_1/2)_n^-1 and (2p)_π(2d_5/2)_n^-1 configurations. The spectrum has been fitted in an excitation energy region of [128.0, 138.0] MeV with calculated spectra based on theoretical pionic atom formation cross sections in Ref. <cit.>, folded by the experimental resolution expressed by Gaussian functions. Pionic 1s, 2p, 2s, 3p, and 3s states have been taken into consideration, and other higher states as well as the quasi-free contributions have been neglected. In the fit, 8 parameters have been used: the differential cross sections (dσ/dΩ) of the pionic (nl) states, I_1s and I_2p, the binding energies B_1s and B_2p, the 1s width Γ_1s, and a slope and an offset for the linear background. The 2p width has been fixed to a calculated value of 0.109 MeV <cit.> since it is much smaller than the experimental resolution. Since contributions from the other states 2s, 3p and 3s are small, their binding energies and widths have been fixed to theoretical values and their relative cross section ratios I_2s/I_1s, I_3s/I_1s and I_3p/I_2p to theoretical ratios <cit.>. The resolution minimum R_min has also been used as a free parameter. The fitted curve is presented with contributions from the pionic 1s and 2p states. The overall fit has a χ^2/n.d.f. of 135.8/92. Figure <ref> (bottom panel) shows the decomposition of the 1s and 2p formation cross sections into the different neutron hole states of ^121 Sn, as indicated. The peak on the left is coupled with the pionic 1s state and the one on the right with the 2p state. We have evaluated the systematic errors attributed to the deduced binding energies and width resulting from i) the absolute E_ex scale error arising from the energy calibration, the uncertainty of the primary beam energy, the uncertainty of the target thickness and the ion-optical properties of the spectrometer, ii) the E_ex dependence of the resolution within the evaluated errors, iii) the fitting region, and iv) 20% errors in the spectroscopic factors of the relevant neutron holes.
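Before quoting the extracted values, the kinematics and the resolution model above can be made concrete with a small numerical sketch (ours). The input B_1s ≈ 3.8 MeV is an illustrative value of the right magnitude (compare the binding energies quoted below); m_{π^-} and E_n are taken from the text.

```python
import math

m_pi = 139.571    # MeV: pi^- rest mass (times c^2)
B_1s = 3.8        # MeV: illustrative pionic 1s binding energy (assumption)
E_n  = 0.06034    # MeV: (3s_1/2)_n^-1 neutron-hole excitation energy

E_ex = m_pi - B_1s + E_n   # ~135.8 MeV: inside the observed 134-139 MeV peak region

R_min = 0.42               # MeV (FWHM): resolution minimum quoted above
def R(E_ex_MeV):
    """Quoted quadratic resolution model, FWHM in MeV."""
    return math.sqrt(R_min**2 + (0.122 * (E_ex_MeV - 139.799))**2)

print(f"E_ex = {E_ex:.3f} MeV, resolution there ~ {R(E_ex):.2f} MeV (FWHM)")
```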
The systematic errors of the binding energies mainly arise from the energy calibration and the dispersion of the spectrometer. The binding energies and width are deduced to be

B_1s = 3.828 ± 0.013(stat.)^{+0.036}_{−0.033}(syst.) MeV,
Γ_1s = 0.252 ± 0.054(stat.)^{+0.053}_{−0.070}(syst.) MeV,
B_2p = 2.238 ± 0.015(stat.)^{+0.046}_{−0.043}(syst.) MeV,

with the statistical and systematic errors. The achieved resolution minimum of R_min = 0.394^{+0.064}_{−0.044} MeV is consistent with the estimation of 0.42 MeV described above. B_1s and B_2p of a pionic Sn isotope are determined simultaneously for the first time. The deduced B_1s, Γ_1s and B_2p agree fairly well with the theoretical values B^theo_1s = 3.787–3.850 MeV, Γ^theo_1s = 0.306–0.324 MeV and B^theo_2p = 2.257–2.276 MeV <cit.>. The measured spectrum has been decomposed into different θ ranges, expecting the θ dependence of the formation cross sections <cit.>. Figure <ref> shows the excitation spectra for θ in the ranges of 1.5–2.0^∘, 1.0–1.5^∘, 0.5–1.0^∘ and 0–0.5^∘. The peak structures are clearly observed for each θ range. The peak positions and widths are nearly constant for the different θ ranges, which demonstrates the high quality of the measurement. The decomposed spectra have been fitted to deduce the θ-dependent cross sections for the pionic 1s and 2p states, I_1s(θ) and I_2p(θ), with the other parameters fixed to those determined in the fitting procedure described above. The linear background has been scaled by a parameter for each angular range. The resulting fitting curves are shown together with curves for the 1s and 2p contributions. Figure <ref> (top panel) depicts I_1s(θ) (black) and I_2p(θ) (grey). The abscissa is θ represented by the weighted averages of the solid angles. The boxes show the statistical errors and the bars the systematic errors in addition. They have been studied in the same way as those conducted for the binding energies and widths. An overall systematic error of 30% is attributed to the absolute scale of the cross section. The dashed curves show the theoretically calculated cross sections of the pionic 1s and 2p states using phenomenological neutron wave functions of Koura-type given in Ref. <cit.>. The solid curves show the same but scaled with fitted factors of 0.17 and 0.79 for the 1s and 2p states, respectively. We observe that the theoretical θ dependences reproduce well the experimental data, which confirms the assignments of the angular momenta of the states. In principle, this suggests that the theoretical models of finite momentum transfer <cit.> are valid. However, the absolute value of the measured I_1s(θ) is much smaller than the theoretical calculation. Figure <ref> (bottom panel) shows the ratios of I_2p(θ) and I_1s(θ). The large discrepancies between the experimental data and the theory suggest missing factors in the formation cross sections which have to be applied independently to the pionic states. In conclusion, we have performed spectroscopy of pionic ^121 Sn atoms and observed the 1s and 2p states as prominent peak structures. For Sn, the 2p state is observed as a peak structure for the first time. We have determined the binding energies of the 1s and 2p states and the width of the 1s state. The reaction-angle dependences of the pionic atom formation cross sections are measured and found to agree with the theoretical dependences, which supports the assignments of the quantum numbers of the measured peak structures.
We also find a remarkable agreement with the theoretically calculated 2p formation cross sections over a wide range of reaction angles. However, the measured absolute cross sections of the 1s state are smaller by a factor of ∼ 5. Note that the entire data set was accumulated within 15 h, which shows the potential of the RIBF facility for spectroscopy experiments. Continuous development is in progress aiming at a better spectral resolution of ≤ 150 keV. The major accomplishments of the present experiment will be succeeded by experiments with improved resolution, statistics and systematic errors to deduce the π-nucleus isovector scattering length b_1 with better accuracy <cit.>. A new series of experiments to study pionic atoms over a wide range of nuclei is in preparation and will lead to a better understanding of the fundamental structure of the QCD vacuum based on measurements.

The authors thank Prof. Emeritus Dr. Toshimitsu Yamazaki for fruitful discussions and the late Prof. Emeritus Dr. Paul Kienle for his guidance in this study. The authors are grateful to the staff of GSI for providing target materials and the staff of RIBF for stable operation of the facility. This experiment was performed at the RI Beam Factory operated by RIKEN Nishina Center and CNS, University of Tokyo. This work is partly supported by MEXT Grants-in-Aid for Scientific Research on Innovative Areas (Grants No. JP22105517, No. JP24105712 and No. JP15H00844), JSPS Grants-in-Aid for Scientific Research (B) (Grant No. JP16340083), (A) (Grant No. JP16H02197) and (C) (Grants No. JP20540273 and No. JP24540274), Grant-in-Aid for JSPS Research Fellow (Grant No. 12J08538), the Bundesministerium für Bildung und Forschung, and the National Science Foundation through Grant No. Phys-0758100, and the Joint Institute for Nuclear Astrophysics through Grants No. Phys-0822648 and No. PHY-1430152 (JINA Center for the Evolution of the Elements).

[Kienle04] P. Kienle and T. Yamazaki, Prog. Part. Nucl. Phys. 52, 85 (2004).
[Gell_Mann68] M. Gell-Mann, R. J. Oakes, and B. Renner, Phys. Rev. 175, 2195 (1968).
[Tomozawa66] Y. Tomozawa, Nuovo Cimento A 46, 707 (1966).
[Weinberg66] S. Weinberg, Phys. Rev. Lett. 17, 616 (1966).
[Kolomeitsev03] E. E. Kolomeitsev, N. Kaiser, and W. Weise, Phys. Rev. Lett. 90, 092501 (2003).
[Suzuki04] K. Suzuki et al., Phys. Rev. Lett. 92, 072302 (2004).
[Hayano10] R. S. Hayano and T. Hatsuda, Rev. Mod. Phys. 82, 2949 (2010).
[Yamazaki12] T. Yamazaki et al., Phys. Rept. 514, 1 (2012).
[Friedman07] E. Friedman and A. Gal, Phys. Rept. 452, 89 (2007).
[Friedman14] E. Friedman and A. Gal, Nucl. Phys. A 928, 128 (2014).
[Batty97] C. J. Batty, E. Friedman, and A. Gal, Phys. Rept. 287, 385 (1997).
[Konijn90] J. Konijn et al., Nucl. Phys. A 519, 773 (1990).
[Toki89] H. Toki et al., Nucl. Phys. A 501, 653 (1989).
[Yamazaki96] T. Yamazaki et al., Z. Phys. A 355, 219 (1996).
[Gilg00] H. Gilg et al., Phys. Rev. C 62, 025201 (2000).
[Itahashi00] K. Itahashi et al., Phys. Rev. C 62, 025202 (2000).
[Meissner02] U.-G. Meissner, J. A. Oller, and A. Wirzba, Ann. Phys. (Amsterdam) 297, 27 (2002), and references therein.
[Ikeno15] N. Ikeno et al., PTEP 2015, 033D01 (2015).
[Frank54] W. J. Frank et al., Phys. Rev. 94, 1716 (1954).
[Kubo03] T. Kubo, Nucl. Instr. Meth. Phys. Res. B 204, 97 (2003).
[Yano07] Y. Yano, Nucl. Instr. Meth. Phys. Res. B 261, 1009 (2007).
[Ohya10] S. Ohya, Nucl. Data Sheets 111, 1619 (2010).
[Nishi13] T. Nishi et al., Nucl. Instr. Meth. Phys. Res. B 317, 290 (2013).
[Chapman64] K. R. Chapman et al., Nucl. Phys. 57, 499 (1964).
[Betigeri01] M. Betigeri et al., Nucl. Phys. A 690, 473 (2001).
[Ikeno11] N.
Ikeno et al., Prog. Theor. Phys. 126, 483 (2011).
[Ikeno16p] N. Ikeno and T. Nishi, (private communication).
[Ikeno11A] N. Ikeno, H. Nagahiro and S. Hirenzaki, Eur. Phys. J. A 47, 161 (2011).
[Koura00] H. Koura and M. Yamada, Nucl. Phys. A 671, 96 (2000).
[Itahashi08] K. Itahashi et al., RIBF Proposal 054R1 (unpublished).
http://arxiv.org/abs/1708.07621v2
{ "authors": [ "T. Nishi", "K. Itahashi", "G. P. A. Berg", "H. Fujioka", "N. Fukuda", "N. Fukunishi", "H. Geissel", "R. S. Hayano", "S. Hirenzaki", "K. Ichikawa", "N. Ikeno", "N. Inabe", "S. Itoh", "M. Iwasaki", "D. Kameda", "S. Kawase", "T. Kubo", "K. Kusaka", "H. Matsubara", "S. Michimasa", "K. Miki", "G. Mishima", "H. Miya", "H. Nagahiro", "M. Nakamura", "S. Noji", "K. Okochi", "S. Ota", "N. Sakamoto", "K. Suzuki", "H. Takeda", "Y. K. Tanaka", "K. Todoroki", "K. Tsukada", "T. Uesaka", "Y. N. Watanabe", "H. Weick", "H. Yamakami", "K. Yoshida" ], "categories": [ "nucl-ex" ], "primary_category": "nucl-ex", "published": "20170825062421", "title": "Spectroscopy of pionic atoms in $\\mathbf{{}^{122}{\\textbf Sn}({\\textit d},{}^3{\\textbf He})}$ reaction and angular dependence of the formation cross sections" }
Centre for Theoretical Atomic, Molecular and Optical Physics, School of Mathematics and Physics, Queen's University Belfast, Belfast BT7 1NN, United Kingdom
Atomistic Simulation Centre, School of Mathematics and Physics, Queen's University Belfast, Belfast BT7 1NN, United Kingdom
School of Chemistry and Chemical Engineering, Queen's University, Belfast BT7 1NN, United Kingdom
Centre for Theoretical Atomic, Molecular and Optical Physics, School of Mathematics and Physics, Queen's University Belfast, Belfast BT7 1NN, United Kingdom
Laboratoire Kastler Brossel, ENS-PSL Research University, 24 rue Lhomond, F-75005 Paris, France

We study the environment-assisted enhancement of the excitation-transport efficiency across a network of interacting quantum particles or sites. Our study reveals a crucial influence of the network configuration — and especially its degree of connectivity — on the amount of environment-supported enhancement. In particular, we find a significant interplay of direct and indirect connections between excitation-sending and receiving sites. On the other hand, the non-Markovianity induced by memory-bearing, finite-size environments does not seem to provide a critical resource for the enhanced excitation-transport mechanism.

Excitation-transport in open quantum networks: The effects of configurations
Mauro Paternostro
December 30, 2023
============================================================================

§ INTRODUCTION

It has long been recognized that, under suitable conditions, noise could be not just a source of detrimental effects for the dynamics of a system, but a mechanism for potential advantages. A significant, well-known example in classical physics is provided by stochastic resonance <cit.>, the phenomenon according to which the signal-to-noise ratio of a given non-linear process can be enhanced by the addition of moderate-intensity white noise, owing to the occurrence of resonances. The benefits of noise in quantum dynamics, however, are not only less evident but also more subtle. Large environmental fluctuations typically act as sources of decoherence for a quantum system, destroying quantum coherences and leading to classical stochastic processes. Dissipation, on the other hand, depletes the performance of the excitation-transport mechanisms. On one hand, these phenomena have inspired a technological race towards the realization of conditions for ultra-high isolation of quantum systems from the corresponding environment. The results of such a race are the remarkable progress achieved so far in the quantum control of the dynamics of a large number of quantum systems, from trapped ions to cold-atom and superconducting quantum devices, which is making the goal of realizing a quantum processor foreseeable in the near future. On the other hand, the unavoidable system-environment interaction has inspired research aimed at understanding if and how environmental effects can be turned into an advantage and lead to the creation of quantum coherence in a given system. Remarkably, examples have been given of such a possibility <cit.>, in particular when addressing the dynamics of quantum networks that could model the working mechanisms of biological systems. Very significant steps have been taken towards the understanding of the features that provide an environment-assisted enhancement of the performance of excitation-transport in photosynthetic quantum networks <cit.>.
The role of quantum interference operated by modest amounts of environmental dephasing or dissipation appears to be crucial in this respect <cit.>. It is also important to note the interest generated by system-environment interactions in light-harvesting systems for the production and control of quantum correlations <cit.>. Yet, much remains to be understood about the phenomenology of environment-assisted enhancement of excitation-transport, from the influence of environmental memory effects to the significance of quantum coherence. Among the questions that are currently open is the dependence of the performance of excitation-transport on the network configuration. This is a particularly relevant point when addressing quantum effects in light-harvesting systems, given the large degree of connectivity of their underlying networks <cit.>, and for the engineering of excitation-transport in nanostructures <cit.>, which might benefit from specifically arranged network configurations. Needless to say, the effect of the network configuration on the efficiency of excitation-transport processes represents a question that can be addressed beyond the boundaries of the specific contexts mentioned above, namely quantum biology and the engineering of artificial nanostructures. Indeed, this problem would benefit from an abstract approach assessing the performance of general networks of connected sites undergoing non-equilibrium open-system dynamics. This is precisely the viewpoint that we take in this paper, where we study the interplay between the configuration of a network of interacting particles and the occurrence of environment-assisted quantum transport (ENAQT). We first model a process of excitation-transport in a multi-site graph open to both local and collective environments. We then unveil an intriguing trade-off between the enhancement of the excitation-transport efficiency in the presence of modest amounts of dephasing noise and the existence of multiple interfering pathways for the excitation-transport from a designated sending site to a receiving one. We show that the existence of a direct link between the sending and receiving sites is crucial for the achievement of enhanced ENAQT. We also assess the effects that non-Markovianity induced by the network-environment interaction might have on the enhanced ENAQT effect that we discuss. We thus compare the excitation-transport performance when the environments with which the network is in contact are memoryless — thus enforcing a standard Markovian dynamics on the network — and when they are able to induce a non-Markovian evolution. Our results show that non-Markovianity does not appear to be a resource for ENAQT, a conclusion that we verify through an extensive numerical assessment of the network dynamics. Our numerical findings reinforce the idea that graph connectivity is a key factor in establishing enhanced environment-induced effects. This provides valuable information on the way, for instance, a nanostructure should be engineered to operate advantageously in the presence of an environment, which can be useful towards the construction of quantum-enhanced nanoscale devices and processes. The remainder of this paper is structured as follows: In Sec. <ref> we introduce the formal description of the system that we consider, introduce the models for the environmental mechanisms, and the figure of merit for the quantification of the effects that we aim at exploring. Sec.
<ref> presents our analysis of the dynamics, highlighting the situations that underpin the sink excitation probability (SEP) and the occurrence of ENAQT. Our study is quite extensive, and covers the effects of non-Markovianity as well as the influence that quantum coherences have on the phenomenology that we illustrate. Finally, Sec. <ref> summarizes our findings and draws their implications for the investigation of noise-affected quantum dynamics in quantum networks.

§ THE SYSTEM AND ITS MODELIZATION

We consider a system of interacting qubits (hereby dubbed the system and labeled as S) coupled to individual and independent environments and thus undergoing open-system dynamics. Besides the intra-system coupling and the interaction with the surrounding environment, we also consider an (in general multi-qubit) ancillary system (dubbed the ancilla and labeled as A) that, in principle, undergoes open dynamics as well. As our primary goal is to quantify the excitation-transport efficiency from a sender to a receiving qubit across the system, we incoherently couple the latter to a sink particle. The coupling is devised in such a way that the sink absorbs excitations reaching the receiving particle without being able to feed such excitations back to the system. Models with artificial sinks can be envisioned as systems that transfer energy to a zero-temperature bath. Similar sinking mechanisms have been employed in the modelization of excitation-transport across photosynthetic and light-harvesting complexes <cit.>. The Hamiltonian for a general configuration of the model at hand thus reads (we assume units such that ħ=1 and measure energy in units of the Bohr frequency ω of the system's and ancilla's particles)

H(J,Q) = ∑_{i∈S} σ_z^i + ∑_{α∈A} σ_z^α + ∑_{i,j∈S} τ_{ij}(J) σ^i·σ^j + ∑_{i∈S} ∑_{α∈A} q_{iα}(Q) σ^i·σ^α.

Here roman (Greek) indices run over the elements of the system S (the ancilla A) and σ^k=(σ_x^k, σ_y^k, σ_z^k) is the vector of Pauli matrices of particle k=i,α. We have introduced the adjacency matrix τ(J) for the intra-system couplings and the matrix q(Q) for the coherent S-A interaction. The strength of the respective coupling is set by the dimensionless rates J and Q. Both τ(J) and q(Q) are used here to control the details of the configuration of interactions within the model. Specifically, we have the following adjacency matrix entries:

τ_{ij}(J) = J when qubits i,j∈S are coupled, 0 otherwise;
q_{iα}(Q) = Q when qubit i∈S is coupled to qubit α∈A, 0 otherwise.

We now include the effects of the environment (labeled as E), which we assume to consist of individual memoryless mechanisms affecting each particle of the system (and possibly the ancilla) independently. The joint dynamics of S and A is assumed to be Markovian and modeled by phenomenological Lindblad-like superoperators. In order to describe the general case, we consider the environment E as including both dephasing and dissipative effects on the system. These effects are modeled by the superoperators

L_deph(ρ) = D ∑_{i∈S} (σ_i^z ρ σ_i^z − ρ),
L_damp(ρ) = γ_↑ ∑_{i∈S} N_i (σ_i^+ ρ σ_i^− − ½{σ_i^− σ_i^+, ρ}) + γ_↓ ∑_{i∈S} (N_i+1) (σ_i^− ρ σ_i^+ − ½{σ_i^+ σ_i^−, ρ}).

Here ρ is the joint S-A density matrix, D is the dephasing rate (assumed to be uniform across the system), and γ_↓ (γ_↑) is the rate of incoherent loss (incoherent pump) of excitations into (from) the environment attached to particle i∈S.
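As a minimal numerical sketch (ours, not the authors' code) of how the pieces introduced so far can be assembled, the following builds the network Hamiltonian H(J,Q) for a given adjacency matrix and the dephasing superoperator L_deph, here for an illustrative 4-site loop with no ancilla (Q=0).

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)

def on_site(op, i, n):
    """Embed a single-qubit operator on site i of an n-qubit register."""
    out = np.array([[1.0 + 0j]])
    for j in range(n):
        out = np.kron(out, op if j == i else id2)
    return out

n, J = 4, 1.0
tau = np.zeros((n, n))                 # adjacency matrix tau(J)
for i in range(n - 1):                 # nearest-neighbour chain ...
    tau[i, i + 1] = J
tau[0, n - 1] = J                      # ... plus the 'critical' 1-n link (loop)

H = sum(on_site(sz, i, n) for i in range(n))          # free terms
for i in range(n):
    for j in range(i + 1, n):
        if tau[i, j] != 0:                            # sigma^i . sigma^j couplings
            H = H + tau[i, j] * sum(on_site(s, i, n) @ on_site(s, j, n)
                                    for s in (sx, sy, sz))

def L_deph(rho, D=0.1):
    """Local dephasing, D * sum_i (sz_i rho sz_i - rho)."""
    out = np.zeros_like(rho)
    for i in range(n):
        Z = on_site(sz, i, n)
        out += D * (Z @ rho @ Z - rho)
    return out
```

The remaining dissipators introduced below can be added in the same fashion, and the resulting master equation integrated with any standard ODE solver.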
The environment is assumed to be in thermal equilibrium at temperature T_i and with an average number of excitations N_i=(e^β_i-1)^-1, where β_i=1/K_BT_i is inverse temperature and K_B is the Boltzmann constant.In our model, the ancilla has two configurations: individual and communal. In the individual one, the constituents of the ancillary system do not interact with each other and are individually coupled to one part of S. In the communal configuration, the ancillary system interacts with the entire S at once. In both cases we allow for individual non-zero temperature damping effects on A of the formL_ A(ρ) =γ^A_ N_A∑_α∈ A ( σ_α^+ ρσ_α^–1/2{σ_α^-σ_α^+, ρ} )+γ^A _(N_A+1)∑_α∈ A ( σ_α^- ρσ_α^+-1/2{σ_α^+σ_α^-, ρ} ),where N_A is the average number of excitations in thebath attached to A and γ^A_, are the corresponding rates of incoherent loss and pump.Finally, we introduce the sink mechanism. As mentioned at the beginning of this Section, this can be understood as damping of excitations into a zero-temperature environment, which is unable to feed them back into the system. A minimal model for such a sink is given by a two-level system incoherently coupled to the n^th site of the system with decoherence rate γ_S, as <cit.>L_S(ρ)=γ_S(σ_S^+ σ_n^- ρσ_n^+ σ_S^–1/2{σ_n^+ σ_S^- σ_S^+ σ_n^- ,ρ}).With this at hand, the non-unitary evolution of the S-Adensity matrix is obtained by solving the equation ∂_tρ=-i [ H(J,Q),ρ ]+ L_deph(ρ)+ L_S(ρ)+ L_damp(ρ)+ L_ A(ρ),This dynamical equation provides the core information on the excitation-transport problem here under scrutiny, whose performance is characterized quantitatively by considering SEP, i.e. the probability that an excitation seeded in one of the particles of S reaches the site to which the sink is attached. Formally, SEP is defined asSEP=_S⟨ | _ SA(ρ) |⟩_Swhere _ SA stands for the partial trace over the degrees of freedom of S and A, which leaves us with the reduced state of the sink only.SEP is then calculated by performing theprojection of such reduced state onto the excited state |⟩_S of the sink. In what follows, for zero-temperature environments, we will consider the situations when ENAQT occurs at the steady-state of the dynamics, and calculate the corresponding value of SEP, which we will then compare to the corresponding value achieved in the absence of dephasing noise. We dub the difference between SEP in these two configurations as SEP improvement (or SEPI) and define it asSEPI(D)=SEP(D)-SEP(0),SEPI thus quantifies the degree of improvement in the sink-excitation probability resulting from ENAQT, an effect which we observe through dephasing. For environments at non-zero temperature, on the other hand, we would not consider the steady state of the dynamics, as the latter would not exhibit any ENAQT. We will thus resort to a study of the dynamical state. Before turning our attention to the characterization of the excitation-transport process, let us discuss an important point. As we aim at assessing the effects that the ancilla has on the performance of excitation-transport, in what follows we shall compare the results obtained through the coherent S-A coupling in Eq. (<ref>) to what is observed by replacing the coherent S- A interaction in Eq. 
(<ref>) with the incoherent interaction

L_SA(ρ) = ∑_{i∈S, α∈A} [ Γ_↑ (σ_i^+ σ_α^− ρ σ_α^+ σ_i^− − ½{σ_α^+ σ_i^− σ_i^+ σ_α^−, ρ}) + Γ_↓ (σ_i^− σ_α^+ ρ σ_α^− σ_i^+ − ½{σ_α^− σ_i^+ σ_i^− σ_α^+, ρ}) ],

where Γ_↓ (Γ_↑) is the rate at which excitations are transferred from (to) the i-th element of the system to (from) the α-th party of the ancilla. Therefore, the dynamics guided by Eq. (<ref>) will be contrasted to the one resulting from the master equation

∂_t ρ = −i[H(J,0), ρ] + L_deph(ρ) + L_S(ρ) + L_damp(ρ) + L_A(ρ) + L_SA(ρ).

§ ANALYSIS OF THE DYNAMICS, PHENOMENOLOGY OF SEP AND OCCURRENCE OF ENAQT

§.§ No Ancillary System

In this Section, we consider a number of configurations in order to determine the requirements for ENAQT. We consider four archetypes, as shown in Fig. <ref>: linear, loop, non-critical, and maximally connected. We use the term path or pathway to describe the route that an excitation might travel from the first qubit of a network (labeled as 1) to the sink. In the linear configuration, the system interacts through nearest-neighbor couplings, which gives a single pathway from qubit 1 to the sink [cf. Fig. <ref> (a)]. The loop configuration [panel (b)] also exhibits nearest-neighbor couplings and includes a direct interaction between the initial qubit and the final one. As will be explained later, such a direct connection is crucial for the phenomenology of ENAQT, and we thus dub it the critical connection. A configuration such as the one in Fig. <ref> (c), where the link between the first and last element of the network only occurs through a set of non-nearest neighbor connections among the sites but lacks such a critical link, will be referred to as non-critical. Finally, in the maximally connected configuration [Fig. <ref> (d)] every site is coupled to every other site. This offers the maximum number of pathways through the system. In order to set a benchmark, we initially consider cold environments (N_i=0) and no ancilla coupled to S. We quantify the relation between ENAQT and the properties of the system, and find that there are a number of conditions that our system should satisfy in order for ENAQT to occur. To this end, we introduce the parameter

R = J/γ_↓,

which quantifies the relative strength of the coherent coupling versus the incoherent environmental coupling responsible for the damping of excitations into the local environments. Fig. <ref> (b) shows the effect of increasing R values for a maximally interacting system of five qubits. ENAQT only takes place for sufficiently large values of R: while R ≲ 1 corresponds to a monotonically decreasing behavior of SEP, a value of R ≃ 5 already shows the existence of a region of values of D where SEP increases with the dephasing rate [cf. Fig. <ref> (b)]. Such a region grows with R, until SEP assumes a monotonically increasing trend. We have also addressed the dependence of ENAQT on the dimensionality of the system at hand [cf. Fig. <ref> (c)]. In general, larger systems require a lower threshold in the value of R for the enhancement effect to occur, and are associated with wider ENAQT ranges compared to smaller systems. A larger system also displays a comparatively more significant effect. However, the range of beneficial dephasing values can grow at a greater rate than the magnitude of the effect and, as a result, for certain R and D values, we can observe smaller systems getting a greater benefit than larger ones. However, the maximum value of SEP will always increase with the size of the system. Our next observation addressed the various interaction pathways.
We observed that the non-critical and linear configurations, which both miss the direct connection between the first and last element of the network, do not display any ENAQT effect, as SEP is a monotonically decreasing function of D. This correlation between the existence of a direct link between the first and last element of the network suggests a general dependence of SEPI on the network configuration and thus, in turn, on the interference mechanisms that a chosen configuration entails.

§.§ Assessing the effect of quantum coherence

We now address the role, if any, that quantum coherences set in the state of the network have in the onset of ENAQT. In order to provide a quantitative assessment, we use the l_1-norm of coherence proposed in Ref. <cit.>, which reads

C_{l_1}(ρ) = ∑_{i,j} |ρ_{ij}| − 1,

where ρ_{ij} is a generic entry of the density matrix of the system and the double summation extends over all values of the indices i and j. Fig. <ref> shows the temporal behavior of C_{l_1}(ρ), comparing the results corresponding to the dephasing-free and dephasing-affected dynamics, the latter calculated using the value of R that optimizes ENAQT. Strong quantum coherences are associated with values of R ≫ 1, growing with the network size. While the undephased configuration shows a fading oscillatory behavior that nevertheless maintains a non-zero degree of coherence for a substantial time window, the dephased one displays a quick damping of coherences, which do not survive beyond the first period of oscillations of the D=0 case. However, despite the rapid disappearance of quantum coherence in the state of the system, Figs. <ref> (a) and (b) are associated with the occurrence of ENAQT. On its own, this is significant evidence that quantum coherences do not appear to play a crucial role in the emergence of ENAQT. Moreover, the results displayed in Fig. <ref> (c) provide further useful information. Despite showing similar or larger degrees of coherence than in (a) and (b), panel (c) does not showcase any ENAQT. As the configuration addressed there only lacks the critical connection between the first and last qubit of the network, we conjecture the crucial role played by such a link in the establishment of ENAQT: quantum coherence alone is not sufficient for the effect to appear, but needs to be complemented by the addition of a critical pathway between the first and last element of the network.

§.§ Assessing the Effect of Temperature

We now aim to characterize the influence that temperature has on the phenomenology of ENAQT. To this end, we set N_i > 0. Unlike the case with T_i = 0, we observe no sign of ENAQT at the steady state. However, such an effect is present dynamically: snapshots of the evolution of the system at various dephasing strengths show that ENAQT is present initially and disappears as we approach the steady state [cf. Fig. <ref>]. We observed the occurrence of such time-dependent enhancement both for `directional' configurations where N_1 ≠ 0 and all other N_i = 0, and for the case where N_i ≠ 0 for all the sites in the network. However, the fully warm regime had a larger threshold value of R for the occurrence of ENAQT than the directional regime. The addition of the ancillary system enriches the phenomenology of excitation-transport, as regimes emerge where the effects of the warm environment are mediated by the ancillary system itself. However, we find common features throughout: as the number of `warm introduction sites' (i.e.
sites exposed to a thermal environment through which additional excitations can enter) increases, the threshold value of R required to observe ENAQT also increases.

§.§ Introducing the Ancillary System

We now address the case where A is introduced in the overall system. In order to keep the computational effort at a reasonable level, but without affecting the generality of our conclusions, we have opted to consider a three-qubit system for this analysis. We shall first consider the case of an incoherently coupled ancillary system [i.e. Q=0 in the Hamiltonian and Γ_{↓,↑} ≠ 0 in Eq. (<ref>)]. Within this archetype, we study the excitation-transport performance under various dynamical regimes. Specifically, we considered Γ_↑/Γ_↓ = 1, for which there is no bias between the incoherent process that pulls excitations away from the system into the ancilla and the opposite process; Γ_↑/Γ_↓ > 1, where the ancilla supplies excitations to S at a rate larger than it takes them away; and the complementary situation of Γ_↑/Γ_↓ < 1, where the depletion due to the system-ancilla interaction is strong. Both an isolated and an open ancilla have been taken into consideration. In the first case, excitations could only enter or leave the ancillary system via its coupling to S. This is not the case when an open ancilla is addressed, which significantly modifies the phenomenology of excitation-transport that we will highlight. The corresponding results are summarized in Fig. <ref>. The incoherent coupling regime has a noticeably larger threshold value of R for observing the ENAQT effect, when compared to the study reported in Subsec. <ref>. The maximum SEP for the individual isolated ancillary regime and the no-ancilla one turn out to be equal. We also observed that the maximum SEP for the communal isolated ancillary system was noticeably greater than both, though it had the least gain from ENAQT. Next we consider the case of a coherently coupled ancillary system. We thus take Q > 0 in Eq. (<ref>) and assume Γ_{↓,↑} = 0, so that Eq. (<ref>) should be considered to model the dynamics. As done above, we consider both closed and open ancillary systems. When compared against the occurrence of ENAQT, this configuration results in a threshold value of R that is lower than the one for both the incoherent S-A coupling and the case with no ancilla at all. This is very interesting, as it implies that the interaction with an ancilla promotes the occurrence of ENAQT. The effect can be understood as analogous to the observed reduction of the threshold value of R when larger (coherently coupled) systems are studied: de facto, in this situation, A is akin to an extension of S. The extra degree of freedom provided by the possibility of tuning the value of the system-ancilla coupling rate Q allows for the characterization of ENAQT and SEP against variations of such a parameter. This is done in Fig. <ref>, where we have taken R = 20. The ENAQT effect is highly dependent on the relative value of Q with respect to R. When Q ≈ R, we observe a drop in SEP, but an increase in the ENAQT effect. We see the best ENAQT when this occurs, as the ancillary and qubit systems then effectively behave as a single, slightly larger qubit system. Additionally, we saw that when Q > R, SEP grew. For the isolated ancillary system, configurations with large Q performed best in terms of SEP, as excitations were more likely to transfer through the ancilla. Were the ancillary system no longer isolated, it would represent another source of excitation loss.
In this regime, we have evidence of non-Markovian dynamics. In order to quantify the degree of non-Markovianity, we use an instrument put forward in Ref. <cit.> and based on the trace distance, which is defined as

D(ρ_1(t),ρ_2(t)) = 1/2 ||ρ_1(t) - ρ_2(t)||_1,

where ||·||_1 is the trace norm and ρ_1,2(t) are two density matrices of the same system. The trace distance is null (equal to one) for indistinguishable (fully distinguishable) quantum states. It is contractive under Markovian dynamics, meaning that the trace distance between two completely distinguishable initial states of a system exposed to a Markovian environment is a non-increasing quantity. That is, the rate of change of the trace distance is never positive. Any deviation from such behavior has to be associated with an evolution that is non-Markovian in nature, for instance due to the back-action of the environment. Based on such considerations, it is possible to define a measure of non-Markovianity as follows <cit.>

𝒩 = max_ρ_1,2(0) ∫_dD/dt>0 (dD/dt) dt,

where the maximization is made over all possible pairs of initial states of the system. While observing a temporary increase in the trace distance is sufficient to conclude that the dynamical map is non-Markovian, the converse is not true: non-Markovian evolutions might exist such that the trace distance decreases monotonically. In this sense, the condition dD/dt>0 is only a witness for non-Markovianity. In our approach, rather than considering the state of the system itself, which is multipartite, we assess non-Markovianity from the perspective of the ancillary system. This choice greatly simplifies the calculations. We also argue that, if non-Markovian dynamics is observed in the ancillary system due to its coupling with the primary system, then, by symmetry, the dynamics of the primary system will also be non-Markovian. Fig. <ref> shows our results, revealing that strong non-Markovian behavior is indeed witnessed in the dynamical system studied here. The non-Markovian environmental mechanism acts just as an additional coherent resource. In fact, the enhancement can be equivalently obtained by a suitably greater value of R, i.e. without the introduction of qualitatively significant new features.

The previous analysis is complemented by the study of the mixed-coupling case corresponding to Q, Γ_↑,↓≠0 [cf. Fig. <ref>]. As observed in previous regimes, the inclusion of the ancillary-environment interface causes a noticeable reduction in SEP as well as a reduction in the ENAQT effect. When Q≠0, we observe non-Markovian dynamics dependent upon the relative strengths of the incoherent couplings and the environmental coupling strengths. For incoherent coupling strengths that are large with respect to the coherent coupling [i.e. for Γ_↑,↓≫Q], we see very limited or possibly no non-Markovianity. Large relative coherent couplings (Q≫Γ_↑,↓) are associated instead with pronounced non-Markovian features. In the mixed regime, the threshold value of R was dependent on this relative strength as well: we found that, as Q became the dominant parameter, the required R decreased [cf. Fig. <ref>].
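As an illustration of the witness defined above, the short sketch below (Python with QuTiP, in the same style as the earlier snippet) evolves two initially orthogonal states of a single probe qubit, tracks their trace distance, and sums its positive increments. The Hamiltonian and collapse operator are placeholders standing in for the reduced ancilla dynamics: for the Markovian Lindblad example given, the accumulated witness vanishes up to numerical noise, whereas a structured (non-Markovian) reduced dynamics would yield a positive value.

# Sketch of the trace-distance (BLP) witness for a probe qubit.
import numpy as np
from qutip import basis, mesolve, sigmax, sigmam, tracedist

H = 2 * np.pi * 0.1 * sigmax()            # placeholder probe Hamiltonian
c_ops = [np.sqrt(0.05) * sigmam()]        # placeholder (Markovian) dissipator
times = np.linspace(0.0, 60.0, 600)

# Two orthogonal initial states: D = 1 at t = 0 (fully distinguishable).
rho1 = mesolve(H, basis(2, 0), times, c_ops, e_ops=[]).states
rho2 = mesolve(H, basis(2, 1), times, c_ops, e_ops=[]).states
D = np.array([tracedist(a, b) for a, b in zip(rho1, rho2)])

# Summing the positive increments of D approximates the integral over the
# regions where dD/dt > 0 for this single pair of initial states
# (a witness, not the maximised measure).
dD = np.diff(D)
print(f"accumulated witness: {dD[dD > 0].sum():.4f}")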
§ DISCUSSIONS AND CONCLUSIONS

We have addressed the question of the emergence of environment-induced advantages in the transfer of excitations across a network of interacting particles, studying explicitly the effects that different geometries of the network have on the efficiency of excitation-transport. Our chosen figure of merit for the efficiency of excitation-transfer, namely the sink excitation probability (SEP), exhibits dephasing-induced enhancement with respect to the fully unitary case, but only if a suitable network configuration is ensured. In particular, we have highlighted the critical role played by the availability of a direct link connecting the sending and receiving sites in the network, which interferes constructively with the paths going through the other sites under local dephasing noise of modest strength. The addition of an ancillary system offered the possibility to explore different environmental regimes. When studying coherent coupling between the ancillary system and the network, evidence of non-Markovianity was found: a communal ancilla resulted in a larger SEP than its individual-ancilla counterpart. This can be understood as the provision, by the common ancilla, of an extra pathway for the excitations to transfer across. In this respect, non-Markovianity does not appear to play a crucial role in the establishment of the efficiency of the transfer, which is instead strongly dependent on the network configuration, and especially its degree of connectivity. Our work addresses an open problem of great technological relevance: the identification of environment-assisted effects that are enhanced by a suitably chosen network configuration. These effects can be useful to design more efficient strategies for energy and excitation-transport in artificial nanostructures, thus emulating the behavior of biological processes such as photosynthesis.

We acknowledge financial support from the Northern Ireland DfE, the EU FP7 Collaborative Project TherMiQ (grant agreement 618074), the Julian Schwinger Foundation (grant number JSF-14-7-0000), and the SFI-DfE Investigator Programme grant (grant 15/IA/2864). LS thanks the EU Commission Marie Curie RISE Program (ENACT project, grant number 643998) and the UK EPSRC (grant no. EP/P016960/1) for funding.

[Gammaitoni] L. Gammaitoni, P. Hänggi, P. Jung, and F. Marchesoni, Rev. Mod. Phys. 70, 223 (1998).
[Haenggi] P. Hänggi, ChemPhysChem 3, 285 (2002); L. Gammaitoni, P. Hänggi, P. Jung, and F. Marchesoni, Eur. Phys. J. B 69, 1 (2009).
[Plenio1] M. B. Plenio and S. F. Huelga, Phys. Rev. Lett. 88, 197901 (2002).
[Hartmann] L. Hartmann, W. Dür, and H. J. Briegel, Phys. Rev. A 74, 052304 (2006).
[Sink1] M. Mohseni, P. Rebentrost, S. Lloyd, and A. Aspuru-Guzik, J. Chem. Phys. 129, 174106 (2008).
[Huelga] M. B. Plenio and S. F. Huelga, New J. Phys. 10, 113019 (2008).
[Chin] A. W. Chin, A. Datta, F. Caruso, S. F. Huelga, and M. B. Plenio, New J. Phys. 12, 065002 (2010); F. Caruso, A. W. Chin, A. Datta, S. F. Huelga, and M. B. Plenio, J. Chem. Phys. 131, 105106 (2009).
[OlayaCastro] A. Olaya-Castro, C. F. Lee, F. Fassioli Olsen, and N. F. Johnson, Phys. Rev. B 78, 085115 (2008); F. Novelli, A. Nazir, G. H. Richards, A. Roozbeh, K. E. Wilk, P. M. G. Curmi, and J. A. Davis, arXiv:1503.00251 (2015); F. L. Semião, K. Furuya, and G. J. Milburn, New J. Phys. 12, 083033 (2010).
[HuelgaPlenio] S. F. Huelga and M. B. Plenio, Contemp. Phys. 54, 181 (2013).
[Checinska] A. Checinska, F. A. Pollock, L. Heaney, and A. Nazir, J. Chem. Phys. 142, 025102 (2015).
[Caruso] F. Caruso, A. W. Chin, A. Datta, S. F. Huelga, and M. B. Plenio, Phys. Rev. A 81, 062346 (2010).
[Sarovar] M. Sarovar, A. Ishizaki, G. R. Fleming, and K. B. Whaley, Nature Phys. 6, 462 (2010).
[Bradler] K. Bradler, M. M. Wilde, S. Vinjanampathy, and D. B. Uskov, Phys. Rev. A 82, 062310 (2010).
[Cifuentes] A. A. Cifuentes and F. L. Semião, Phys. Rev. A 95, 062302 (2017).
[ENAQT] P. Rebentrost, M. Mohseni, I. Kassal, S. Lloyd, and A. Aspuru-Guzik, New J. Phys. 11, 033003 (2009).
[Lindderiv] C. A. Brasil, F. F. Fanchini, and R. de Jesus Napolitano, arXiv:1110.2122 (2012).
[Badsink] D. Gelbwaser-Klimovsky and A. Aspuru-Guzik, arXiv:1605.04875 (2016).
[FMO] G. S. Engel, T. R. Calhoun, E. L. Read, T.-K. Ahn, T. Mancal, Y.-C. Cheng, R. E. Blankenship, and G. R. Fleming, Nature 446, 782 (2007).
[FMOB] G. Panitchayangkoon, D. V. Voronine, D. Abramavicius, J. R. Caram, N. H. C. Lewis, S. Mukamel, and G. S. Engel, Proc. Natl. Acad. Sci. USA 108, 20908 (2011).
[QuantumBio] S. Lloyd, J. Phys.: Conf. Ser. 302, 012037 (2011).
[Coherence] T. Baumgratz, M. Cramer, and M. B. Plenio, Phys. Rev. Lett. 113, 140401 (2014).
[BPL] H.-P. Breuer, E.-M. Laine, and J. Piilo, Phys. Rev. Lett. 103, 210401 (2009).
[review] H.-P. Breuer, E.-M. Laine, J. Piilo, and B. Vacchini, Rev. Mod. Phys. 88, 021002 (2016).
{ "authors": [ "James Cormican", "Lorenzo Stella", "Mauro Paternostro" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20170824160514", "title": "Excitation-transport in open quantum networks: The effects of configurations" }
Indian Institute of Astrophysics; Bangalore, 560034 India [email protected]

W.J. McDonald Observatory and Department of Astronomy, University of Texas at Austin; Austin, TX 78712-1083 [email protected]

Optical high-resolution spectra of V652 Her and HD 144941, the two extreme helium stars with exceptionally low C/He ratios, have been subjected to a non-LTE abundance analysis using the tools TLUSTY and SYNSPEC. Defining atmospheric parameters were obtained from a grid of non-LTE atmospheres and a variety of spectroscopic indicators, including He i and He ii line profiles and the ionization equilibrium of ion pairs such as C ii/C iii and N ii/N iii. The various indicators provide a consistent set of atmospheric parameters: T_eff=25000±300K, log g = 3.10±0.12 (cgs), and ξ=13±2 km s^-1 are derived for V652 Her, and T_eff=22000±600K, log g = 3.45±0.15 (cgs), and ξ=10 km s^-1 for HD 144941. In contrast to the non-LTE analyses, the LTE analyses - LTE atmospheres and an LTE line analysis - with the available indicators do not provide a consistent set of atmospheric parameters. The principal non-LTE effect on the elemental abundances is on the neon abundance. It is generally considered that these extreme helium stars with their very low C/He ratios result from the merger of two helium white dwarfs. Indeed, the derived composition of V652 Her is in excellent agreement with predictions by Zhang & Jeffery (2012), who model the slow merger of helium white dwarfs; a slow merger results in the merged star having the composition of the accreted white dwarf. In the case of HD 144941, which appears to have evolved from metal-poor stars, a slow merger is incompatible with the observed composition, but variations of the merger rate may account for the observed composition. More detailed theoretical studies of the merger of a pair of helium white dwarfs are to be encouraged.

§ INTRODUCTION

Extreme helium stars (EHes) are very hydrogen-poor stars with effective temperatures of about 10000 K to 30000 K (i.e., spectral types A and B) and surface gravities of log g ∼ 1 for the coolest stars, increasing to ∼ 3 for the hottest stars. A majority of EHes populate a locus of roughly constant log L/M ∼ 4.5, where the luminosity L and mass M are in solar units. This locus most likely represents an evolutionary track with stars evolving at about constant luminosity from low to high temperatures. Such EHes, thought to form from the merger of a helium white dwarf with a carbon-oxygen white dwarf, have carbon-to-helium ratios by number of about 0.6 %, with presently analyzed stars exhibiting C/He ratios by number in the range 0.3 % to 1.0 %. The carbon is provided by the surface of the C-O white dwarf and the helium primarily by the helium white dwarf. At the time of <cit.>'s succinct review of EHes, about 20 Galactic EHes were known. Two are set apart from the majority highlighted above by a much lower C/He ratio. V652 Her, according to <cit.> and an LTE analysis, has the low C/He ratio of 0.006 %, and HD 144941, according to <cit.>'s LTE abundance analysis, has the even lower C/He ratio of 0.0017 %. Both ratios are sharply lower than the C/He ratios of the majority of the EHes. In addition, V652 Her and HD 144941 have higher surface gravities than the majority EHes of the same effective temperature and thus correspond to a log L/M smaller by about 1.3 dex than that of the majority. These differences, especially the low C/He ratio, suggest a different origin, namely the merger of a helium white dwarf with another helium white dwarf.
Again, see <cit.>'s review for further details.

In this paper, we describe a non-LTE analysis of new high-quality optical spectra of both V652 Her and HD 144941, primarily in order to determine how non-LTE effects influence the C/He ratio but also to measure the effects of departures from LTE on the abundances of other elements. The paper follows our similar analyses of non-LTE effects on several EHes having C/He ratios characteristic of the majority EHes <cit.>.

§ OBSERVATIONS

High-resolution optical spectra of V652 Her and HD 144941 were obtained on 2011 May 13 at the coudé focus of the W.J. McDonald Observatory's Harlan J. Smith 2.7-m telescope with the Robert G. Tull cross-dispersed échelle spectrograph <cit.> at a resolving power of R = 60,000. Three thirty-minute exposures were recorded for each of these stars. The observing procedure and the wavelength coverage are the same as described in <cit.>. The Image Reduction and Analysis Facility (IRAF) software package was used to reduce the recorded spectra. HD 144941's spectrum obtained from the Anglo-Australian Telescope (AAT), and analysed by <cit.>, was made available by Simon Jeffery (private communication) for comparison. The sample wavelength interval shown in Figure 1 displays the extracted spectrum from each exposure of the observed EHes. All spectra were aligned to the rest wavelengths of well-known lines. Inspection of Figure 1 shows that the line profiles are not always symmetric. Note the obvious asymmetry in the two exposures of V652 Her attributable to atmospheric pulsations <cit.>. V652 Her pulsates with a period of 2.592 hours <cit.> and a radial velocity amplitude of about 70 km s^-1 <cit.>. For the abundance analysis, we have used the exposure of V652 Her showing symmetric profiles, with a signal-to-noise ratio of about 140 per pixel at 5600Å. The line profiles of HD 144941 for each exposure appear symmetric and show extremely weak metal lines. Hence, these exposures were coadded to enhance the signal-to-noise ratio for the abundance analyses; the signal-to-noise ratio is about 280 per pixel at 5600Å.

The pure absorption line spectrum of V652 Her is dominated by contributions from the following species: H i, He i, N ii, N iii, O ii, Ne i, Al iii, Si ii, Si iii, S ii, S iii, and Fe iii. However, the absorption line spectrum of HD 144941 is dominated by H i and He i lines and a small collection of weak lines from other species. The Revised Multiplet Table (RMT) <cit.>, tables of spectra of H, C, N, and O <cit.> and the NIST Atomic Spectra Database[http://www.nist.gov/pml/data/asd.cfm] (ver. 5.3) were used for line identification. The primary objective is to determine reliable atmospheric parameters (effective temperature, surface gravity and microturbulence) and then the chemical composition of V652 Her and HD 144941.

§ QUANTITATIVE FINE ANALYSES

Non-LTE line-blanketed model atmospheres are used to determine the atmospheric parameters and the chemical composition. The effective temperature T_eff and surface gravity g are obtained from intersecting loci in the T_eff versus log g plane. These loci represent the ionization equilibrium of available ion pairs such as C ii/C iii, N ii/N iii, Si ii/Si iii, Si iii/Si iv and S ii/S iii and loci derived from the best fits to the Stark-broadened profiles of He i and He ii lines. The microturbulent velocity ξ is obtained from both the N ii and the O ii lines, with each ion providing lines spanning a range in equivalent width. A numerical sketch of how such intersecting loci translate into adopted parameters is given below.
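The fragment below is a schematic illustration of this procedure only (not our actual fitting code): each indicator is represented as a locus log g = f(T_eff), pairwise crossings are found with a standard root finder, and the crossings are averaged. The linear loci and their coefficients are invented placeholders.

# Schematic determination of (T_eff, log g) from intersecting loci.
import numpy as np
from itertools import combinations
from scipy.optimize import brentq

loci = {  # T_eff in kK; slopes and intercepts are purely illustrative
    "C ii/C iii": lambda t: 3.14 + 0.80 * (t - 25.0),
    "N ii/N iii": lambda t: 3.05 + 0.65 * (t - 25.0),
    "He i 4471": lambda t: 3.12 - 0.40 * (t - 25.0),
    "He ii 4686": lambda t: 3.09 - 0.55 * (t - 25.0),
}

crossings = []
for (name1, f1), (name2, f2) in combinations(loci.items(), 2):
    try:
        t_c = brentq(lambda t: f1(t) - f2(t), 20.0, 30.0)  # model grid: 20-30 kK
        crossings.append((t_c, f1(t_c)))
    except ValueError:
        pass  # this pair of loci does not cross inside the grid

t_eff = np.mean([c[0] for c in crossings])
log_g = np.mean([c[1] for c in crossings])
print(f"adopted: T_eff = {1e3 * t_eff:.0f} K, log g = {log_g:.2f}")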
In principle, the chemical composition of the adopted model atmosphere must match the composition derived from a spectrum. This match is achieved iteratively. Note that model atmospheres computed with C/He of 0.003 - 0.03% and H/He of 0.0001 and 0.1 have the same atmospheric structure and so provide the same atmospheric parameters including the composition. This helpful aid to the abundance analysis arises because neutral He is the dominant opacity source for the two EHes. Since photoionization of neutral helium is the main source of continuous opacity, lines of another species, say C ii, are sensitive to the C/He abundance ratio. Abundances are given as logϵ(X) and normalized with respect to logΣμ_X ϵ(X) = 12.15, where μ_X is the atomic weight of element X. Since all elements but He have a very low abundance, the logarithmic He abundance is 11.54 (see the short numerical check below). Our abundance analyses were carried out with non-LTE model atmospheres and non-LTE (and LTE) line formation for all major elements. For a few minor elements, the abundance analysis could be done only in LTE.

Partially line-blanketed non-LTE model atmospheres were computed with the code TLUSTY <cit.> using atomic data and model atoms provided on the TLUSTY home page[http://nova.astro.umd.edu/index.html], as described in <cit.>. These model atmospheres included opacity from both bound-free and bound-bound transitions of H, He, C, N, O, Ne, Mg, Si, S, and Fe in non-LTE. The adopted model atoms, with their number of levels given in parentheses, are: H i(9), He i(14), He ii(14), C i(8), C ii(11), C iii(12), C iv(13), N i(13), N ii(6), N iii(11), N iv(12), O i(22), O ii(29), O iii(29), Ne i(35), Ne ii(32), Ne iii(34), Mg ii(14), Si ii(16), Si iii(12), Si iv(13), S ii(14), S iii(20), Fe ii(36), and Fe iii(50). For many ions, these choices correspond to the smallest of the model atoms available in TLUSTY. Use of these model atoms suffices for the calculation of model atmospheres but, as we describe below, larger model atoms are used for the calculation of equivalent widths of lines. Model atmospheres in LTE were also computed using TLUSTY. Model grids in non-LTE and LTE were computed covering the ranges T_eff = 20 000 (1 000) 30 000 K and log g = 3.0 (0.1) 4.5 cgs. In this paper, we have used TLUSTY and SYNSPEC for calculating LTE and non-LTE model atmospheres and line profiles <cit.>. The stellar atmospheric parameters provided in Table 1 are determined from the spectra of V652 Her and HD 144941 on the assumption of both non-LTE and LTE using the lines given in Tables 2 and 3. Except where noted, the gf-values of the lines are taken from the NIST database[http://www.nist.gov/pml/data/asd.cfm] (ver. 5.3). A few other sources consulted for gf-values are given in footnotes to the relevant tables.
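As a quick check of the normalization convention above (a sketch, not part of the analysis pipeline), one can solve logΣμ_X ϵ(X) = 12.15 for ϵ(He), using the V652 Her hydrogen abundance and neglecting the heavier trace elements, whose contribution to the mass sum is small; atomic weights are rounded.

import numpy as np

TOTAL = 10**12.15           # the adopted normalization of the abundance scale
MU_HE, MU_H = 4.0026, 1.008
eps_H = 10**9.5             # non-LTE hydrogen abundance of V652 Her (Table 4)

eps_He = (TOTAL - MU_H * eps_H) / MU_HE
print(f"log eps(He) = {np.log10(eps_He):.2f}")
# prints 11.55; folding in the metals (C-Fe) brings it down to the quoted 11.54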
TLUSTY model atmospheres are calculated with line opacity provided by the smallest of the available model atoms for many ions, as detailed above. In many cases, the observed lines are not contained within these model atoms. In order to extend the non-LTE calculations to more of the observed lines, we ran the statistical equilibrium calculations with larger model atoms available in TLUSTY, as described in <cit.>. Results are provided for the following model atoms: C ii(22), C iii(23), N ii(42), N iii(32), O ii(29), Ne i(35), and Si iii(30). Lines with both the upper and lower level within the model atom are marked by * in Tables 2 and 3. We note that our non-LTE abundance analysis using model atmospheres with extended (more levels) model atoms is fairly consistent in terms of the ionization balance and the derived abundances when compared with the analysis using small (fewer levels) model atoms; the mean abundances differ by about 0.05 to 0.3 dex, and individual line abundances differ by typically ≤0.3 dex. However, for most of the ions, we notice that the larger model atoms provide smaller line-to-line scatter in the derived abundances.

For the elements H to Fe, identification of lines suitable for analysis is not a major issue for V652 Her. In particular, a good selection of clean lines representing the ions of key elements is available. For HD 144941, lines of H, He, N, and O are available in good number, but fewer lines are present for C, Ne, Al, Si, and S, with Fe represented only by upper limits to equivalent widths of the most promising lines of Fe iii. <cit.> is the primary source of wavelengths and classifications for these lines.

§.§ V652 Her

§.§.§ Non-LTE analyses

The non-LTE code SYNSPEC <cit.> was adopted to compute the line profiles and the theoretical equivalent widths using the non-LTE model atmospheres. The observed absorption profile or its measured equivalent width was matched with the SYNSPEC prediction to obtain the non-LTE abundance. Unresolved blends of two or more lines were dealt with by synthesizing and then matching the observed feature by adjustment of abundances.

For determining T_eff, log g and ξ, a standard procedure is followed. The microturbulent velocity ξ is estimated from N ii and O ii lines as they show a wide range in equivalent width. To minimise the temperature dependence, N ii lines with similar lower excitation potentials (LEPs) were used, with LEPs of about 18, 21, 23, and 25 eV. ξ was found from the requirement that the derived abundance is independent of the measured equivalent width (a numerical sketch of this criterion follows below). A microturbulent velocity ξ=13±2 km s^-1 is obtained from the N ii and O ii lines.

Ionization equilibrium is imposed, using model atmospheres computed with small model atoms, to provide loci in the (T_eff, log g) plane for the following pairs of ions: C ii/C iii, N ii/N iii, S ii/S iii, and Si iii/Si iv, but low weight is given to the last ratio because the Si abundances from the Si iii lines show a large line-to-line scatter. Fits to the Stark-broadened wings of He i and He ii line profiles provide additional loci in the (T_eff, log g) plane. The line broadening coefficients are from TLUSTY/SYNSPEC, which adopts the coefficients for He i 4471Å, 4388Å, and 4026Å from <cit.>. For He i 4009Å, TLUSTY/SYNSPEC uses an approximate Stark broadening treatment. For He ii 4686Å, TLUSTY/SYNSPEC uses a broadening table from Schoening (private communication to I. Hubeny). The predicted line profiles depend on the electron densities and, therefore, on the temperature and surface gravity.
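The zero-slope criterion for ξ mentioned above can be illustrated with the following sketch, which fits a straight line to abundance versus equivalent width for a set of trial microturbulences and selects the ξ whose slope is closest to zero. The equivalent widths and the abundance model are mock placeholders; in practice the line abundances would be recomputed with SYNSPEC for each trial ξ.

import numpy as np

ew = np.array([28, 45, 74, 106, 139, 194, 220, 245, 308], dtype=float)  # mA

def mock_abundances(xi):
    # Placeholder trend: strong lines drift when xi is wrong; flat at xi = 13.
    return 8.70 + 2e-4 * (xi - 13.0) * (ew - ew.mean())

trial_xi = np.arange(5.0, 21.0)
slopes = [abs(np.polyfit(ew, mock_abundances(xi), 1)[0]) for xi in trial_xi]
best = trial_xi[int(np.argmin(slopes))]
print(f"adopted microturbulence: xi = {best:.0f} km/s")  # -> 13 km/s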
Observed profiles of the He i 4471Å, 4388Å, 4026Å and 4009Å lines and the He ii 4686Å line were used in the analysis. Sample observed profiles of the He i lines and the He ii 4686Å line are shown in Figure 2 with predicted non-LTE profiles for a non-LTE atmosphere of T_eff=25000K and three different surface gravities. The He i and He ii loci were obtained by fitting the line profiles for a range of effective temperatures. Note that the predicted profiles have been convolved with the instrumental profile and the stellar rotation profile. A projected rotation velocity of 10-12 km s^-1 was obtained from fits of synthetic spectra to clean O ii lines with an allowance for the instrumental profile. The loci derived from the application of ionization equilibrium to C, N, Si, and S ions are also added to the loci from the He i and He ii profiles. Figure 3 shows these loci. Their intersection suggests the best non-LTE model atmosphere has T_eff=25000±300K and log g = 3.10±0.12.

Observed H i profiles at 4102Å, 4340Å and 4861Å were chosen for estimating the non-LTE hydrogen abundance by spectrum synthesis. The line wings of the 4340Å and 4861Å profiles are mainly used for this purpose. It is noted that the best-fitting theoretical profile for each line does not have an emission core, but such cores appear for theoretical profiles of higher H abundance. Less weight is given to the non-LTE hydrogen abundance derived from the poor signal-to-noise profile of the 4102Å line. The hydrogen model atoms are from TLUSTY, with the line broadening coefficients adopted from <cit.>. Observed profiles of the 4102Å, 4340Å, and 4861Å lines are shown in Figure 4 with predicted non-LTE profiles for a non-LTE atmosphere of T_eff=25000K and log g = 3.10 for three different hydrogen abundances.

The abundances of all elements were derived for the adopted model atmosphere (T_eff, log g, ξ)=(25000, 3.10, 13.0), computed with extended model atoms. The final photospheric line-by-line non-LTE abundances, including the mean abundance and the line-to-line scatter, are given in Table 2. The abundance rms errors due to uncertainty in T_eff and log g, from C ii, C iii, N ii, N iii, O ii, Ne i, Mg ii, Si iii, Si iv, S ii, S iii, and Fe iii, are 0.03, 0.13, 0.03, 0.16, 0.04, 0.02, 0.04, 0.05, 0.13, 0.05, 0.05, and 0.03 dex, respectively.

§.§.§ LTE analyses

Analysis of the line spectrum was repeated with the TLUSTY LTE models and an LTE line analysis. This LTE analysis uncovers several inconsistencies arising from substantial non-LTE effects on some of the lines. These inconsistencies demand worrying compromises in selecting the atmospheric parameters and, thus, in determining the elemental abundances. Among the concerns are the fits to the He i and He ii line profiles. Observed profiles of the He i lines and the He ii 4686 Å line are shown in Figure 5 with predicted LTE profiles for the LTE atmosphere of T_eff = 25300 K and three different surface gravities. The predicted He i profiles fail to reproduce the observed cores but are acceptable fits to the line wings. Note that the non-LTE profiles provide a satisfactory fit both to the cores and the wings (Figure 2). In LTE, the fit to the He ii 4686 Å line requires a much higher surface gravity at a given temperature than other indicators. Figure 6, the LTE counterpart to Figure 3, shows the He i and He ii loci, as well as those corresponding to ionization equilibrium. The loci set by ionization equilibrium for C, N and Si are almost coincident and each is only slightly shifted from its non-LTE location in the (T_eff, log g) plane.
However, the locus set by S ii/S iii is shifted away from the other ionization equilibrium loci because of the large non-LTE effect on the S iii lines. Other species subject to appreciable non-LTE effects do not enter into consideration in determining the atmospheric parameters. The species most obviously affected by non-LTE effects is Ne i. An enthusiast dedicated to LTE analyses with Nelsonian eyesight might adopt the LTE model (see Figure 6) with T_eff = 25300±300 K, log g = 3.25±0.12 and a microturbulence of 13 km s^-1. The LTE abundances for this adopted LTE TLUSTY model are given in Table 2. The abundance rms errors, due to uncertainty in T_eff and log g, from Al iii, Si ii, P iii, and Ar ii are 0.05, 0.07, 0.03, and 0.04 dex, respectively. The abundance rms errors for the rest of the species are very similar to those estimated for the appropriate non-LTE model atmosphere. Of course, such errors do not recognize that the choice of the LTE model atmosphere involves compromises.

§.§ HD 144941

§.§.§ Non-LTE analyses

Essentials of the procedure discussed in Section 3 for V652 Her were adopted for the non-LTE analyses of HD 144941. A microturbulent velocity ξ=10 km s^-1 is used, as suggested by <cit.>. Except for the observed H i and He i lines, all lines in the observed spectrum are weak. Of course, the abundances derived from these weak lines are almost independent of the adopted microturbulence.

Fits to the Stark-broadened wings of He i and He ii line profiles provide loci in the (T_eff, log g) plane. Unfortunately, the He ii 4686Å line profile is not detected on our spectrum, but the upper limit to its presence provides a limiting locus in the (T_eff, log g) plane. Figure 7 shows the sample observed profiles with predicted non-LTE profiles for a non-LTE atmosphere of T_eff=22000K and three different surface gravities. The only locus obtained from ionization balance is through the sulphur ions S ii/S iii, but this is based on just one weak S ii and two weak S iii lines. Figure 8 shows the loci obtained from the fits to the He i and He ii profiles and the ionization balance of S ii/S iii using model atmospheres computed with small model atoms. These intersecting loci are used in determining the final model parameters of T_eff=22000±600K and log g = 3.45±0.15.

Observed profiles of the 4102Å, 4340Å, and 4861Å lines are shown in Figure 9 with predicted non-LTE profiles for a non-LTE atmosphere of T_eff=22000K and log g = 3.45 for three different hydrogen abundances. The line-wings are mainly used for this purpose as the predicted profiles show emission in the line-core. Unlike for V652 Her, the best fits to the wings are affected by emission in the core, with the intensity of emission increasing from Hδ to Hβ. However, the profiles predicted by <cit.> match the observations (the wings as well as the core) fairly well. For comparison, observed profiles of the 4102Å, 4340Å, and 4861Å lines for three different H abundances are shown in Figure 10 with predicted non-LTE profiles for a non-LTE atmosphere with the stellar parameters T_eff=22000K and log g = 4.15 adopted by <cit.> (logϵ(H) = 10.1 is the non-LTE abundance estimated by <cit.> and <cit.>). The non-LTE abundances for the adopted non-LTE TLUSTY model (T_eff, log g, ξ)=(22000, 3.45, 10.0), computed with small model atoms, are given in Table 3.
The abundance rms errors, due to uncertainty in T_eff and log g, from C ii, N ii, O ii, Ne i, Mg ii, Si iii, S ii, S iii, and Fe iii are 0.06, 0.06, 0.05, 0.04, 0.07, 0.06, 0.08, 0.06, and 0.04 dex, respectively.

§.§.§ LTE analyses

Inconsistencies among atmospheric parameter indicators when using LTE model atmospheres and LTE line analysis techniques may be expected to resemble those identified for V652 Her. The lower abundances of many elements in HD 144941 relative to V652 Her may affect the radiation field in HD 144941's atmosphere, even though the opacity at many wavelengths, including the optical region, is dominated by helium. Observed profiles of the He i lines and the He ii 4686Å line are shown in Figure 11 with predicted LTE profiles for a LTE atmosphere of T_eff=21000K and three different surface gravities. As anticipated, the predicted LTE He i profiles for HD 144941 fail to reproduce the observed line-core although the line-wings are well reproduced; in contrast, the predicted non-LTE He i profiles successfully reproduce the observed core as well as the wings with the adopted non-LTE TLUSTY model (see Figure 7). The He ii 4686Å line is not positively detected on our spectra. Predicted profiles of this line provide a limiting locus with significantly higher gravities than other indicators; a similar situation occurs for V652 Her (Figure 6). Figure 12 shows the loci obtained from the fits to the He i and He ii profiles and the ionization equilibria provided by the following pairs of ions: Si ii/Si iii and S ii/S iii. The Si locus appears in Figure 12 but not Figure 8 because TLUSTY lacks an adequate model Si^+ atom. The final compromise LTE model parameters are T_eff=21000±600K and log g = 3.35±0.15. LTE abundances for the adopted LTE model are given in Table 3. Observed profiles of the Balmer lines 4102Å, 4340Å, and 4861Å are shown in Figure 13 with predicted LTE profiles for a LTE atmosphere of T_eff=21000K and log g = 3.35 for three different hydrogen abundances. The abundance rms errors, due to uncertainty in T_eff and log g, from Al iii and Si ii are 0.06 and 0.11 dex, respectively. The abundance rms errors for the rest of the species are very similar to those estimated for the appropriate non-LTE model atmosphere. In comparing the non-LTE - LTE abundance differences in Table 2 for V652 Her and in Table 3 for HD 144941, it must be noted that the compromise LTE model for V652 Her is 300 K hotter and 0.15 dex greater in log g than its non-LTE model, but the LTE model for HD 144941 is 1000 K cooler and 0.10 dex lower in log g than its non-LTE model.

§ DISCUSSION - CHEMICAL COMPOSITION

Tables 4 and 5 summarize our derived non-LTE and LTE abundances for V652 Her and HD 144941, respectively. Mean elemental abundances are given for elements represented by more than a single stage of ionisation. The composition of the solar photosphere is given in the final column <cit.>.

§.§ V652 Her

Inspection of the abundances for V652 Her offers three pointers to the star's history: i) most obviously, it is H-poor by a factor of about 300; ii) CNO-cycling was most likely responsible for the conversion of H to He because, as first noted by <cit.>, the star is N-rich and relatively C and O poor; and iii) the overall metallicity of the star is approximately solar, as judged by the abundances of elements from Mg to Fe.
Before connecting these points to the star's evolutionary status, brief remarks are made on the previous LTE abundance analyses of this star. <cit.> obtained a time series of optical spectra at a resolving power of 10000. Results of an LTE abundance analysis when this pulsating star was near maximum radius were given. This LTE atmosphere had parameters T_eff = 22000 K, log g = 3.25 and a microturbulence of 9 km s^-1. This model is 3000 K cooler than our LTE model and differs slightly in surface gravity and microturbulence. A direct comparison of LTE abundances (our Table 4 and their Table 2) gives differences of less than ±0.3 dex for all elements except C, Ne and P, for which the differences in the sense (us - them) are -0.4, +0.6 and -0.9 dex, respectively. Jeffery et al. isolate their P abundance estimate for comment and recommend that `The older value [as provided by <cit.>] should be preferred for the present.' This older value for P is within 0.1 dex of our LTE value. When the 1999 values for H to Fe are adopted, the (us - them) differences are within ±0.3 dex except for S at 0.4 dex. The 1999 LTE model atmosphere corresponded to T_eff = 24550±500 K, log g = 3.68±0.05 and a microturbulence of 5 km s^-1. In short, our and the published LTE abundance analyses are in good agreement, but this should be hardly surprising given the similarities of spectra and analytical tools.

<cit.> provided a hybrid non-LTE analysis (i.e., a non-LTE analysis of absorption lines was made using an LTE model atmosphere) of a selection of lines measured off the spectrum used by <cit.>. The atmosphere was very similar to that used by <cit.> and, thus, 3000 K cooler but of similar surface gravity to our chosen non-LTE model atmosphere. The abundances of H, C, N, O, Mg and S differ in the sense (Us - Prz) by +0.3, -0.4, +0.2, -0.5, -0.8 and -0.3 dex, respectively. C, Mg and S were represented by very few lines. With the exception of the Mg abundance from the Mg ii 4481 Å feature (the sole Mg indicator), the non-LTE - LTE corrections are within ±0.2 dex. This independent non-LTE analysis fully confirms the result that V652 Her's atmosphere is now highly enriched in CNO-cycled material.

What is new here is the demonstration that adoption of a set of non-LTE atmospheres and the chosen collection of non-LTE line tools reveals inconsistencies in the indicators previously employed to determine the appropriate atmospheric parameters, and corrections to LTE abundances for non-LTE effects which can, for some species (e.g., Ne i), be considerable. At the present time, the non-LTE abundances should be used to discuss the three pointers mentioned in this section's opening paragraph. Even if V652 Her is the result of a merger of two He white dwarfs, it is very likely to have retained the metallicity - say, the Ne to Fe abundances - of the two stars from which the white dwarfs evolved, with mass loss and mass exchange playing major roles. For the five elements (Ne, Mg, Si, S and Fe) with non-LTE abundances, the mean difference (V652 Her - Sun) is -0.1, which suggests a near-solar initial composition. (The differences of -0.5 for Mg and Fe are intriguing.) Thus, V652 Her is a member of the thin disk.

V652 Her's high N abundance is 0.8 dex above the solar value. This, coupled with the high H deficiency and sub-solar values of C and O, points to CNO-cycling as the primary process for conversion of H to He. The CNO-cycles preserve the total number of C, N and O nuclei. The non-LTE CNO-sum is 8.78 and the solar sum is 8.92. Almost fortuitously, the expected sum for a mix corresponding to a metal deficiency of -0.1 dex is 8.82! A worked check of this bookkeeping is given below.
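For readers wishing to reproduce this bookkeeping, the CNO-sum is simply the logarithm of the summed number abundances. With the rounded non-LTE entries of Table 4,

logϵ_CNO = log_10(10^ϵ(C) + 10^ϵ(N) + 10^ϵ(O)) = log_10(10^7.0 + 10^8.7 + 10^7.6) ≈ 8.74,

consistent within rounding with the quoted 8.78 (which follows from the unrounded line-by-line means), while the solar values of <cit.> (C = 8.43, N = 7.83, O = 8.69) give log_10(10^8.43 + 10^7.83 + 10^8.69) ≈ 8.92.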
§.§ HD 144941

Comparison of the non-LTE and solar abundances (Table 5) shows a near-uniform difference (star - Sun) for the elements from C to Fe. With the exception of Ne and S, the mean difference is -1.6, with individual values ranging from -1.4 to -1.8. (The LTE Al abundance gives a difference of -1.4.) The difference for Fe is ≤ -0.9. The Fe line at 5156Å is present in all three exposures, and clearly seen in Figure 14, but since this is the sole positive detection and is not confirmed by other lines, we assign an upper limit to the Fe abundance. Adopting the Fe abundance from IUE spectra and the LTE analysis by <cit.>, the difference is -1.8, a value consistent with our much higher limit. Their logarithmic Fe abundance of 5.7±0.2 and our non-LTE abundances, except for the α-elements Ne and S, are consistent with the composition of metal-poor stars, residents of the Galactic thick disk or halo, which show α-element enhancements of approximately +0.3 for O, Mg and Si. Moreover, the abundances of C, N and O are consistent with the difference of -1.6 and, hence, the H-deficiency is not obviously attributable to CNO-cycling, as is the case for V652 Her.

As just noted, Ne and S provide striking exceptions to the run of (star - Sun) differences offered by the other elements. These elements give differences of -0.7 for Ne and -0.9 for S. There are no convincing nucleosynthetic reasons for these differences. (Just conceivably, N-rich He-rich material could have been heated and a considerable amount of N burnt to Ne by two successive alpha-captures. The total initial sum of CNO abundances could have been about 7.3, but the Ne abundance of 7.2 implies a near-perfect conversion of CNO to Ne.) The Ne i lines appear to be secure identifications - see Figure 14 for the strongest Ne i line. The primary suspects for the high S abundance are systematic errors associated with the identification of the weak S ii and S iii lines. The gf-values and the non-LTE treatment are less likely sources of error, as the measured lines are among a larger set used for V652 Her.

Our LTE abundances are in good agreement with the previous LTE analysis of an optical spectrum by <cit.> using the model atmosphere T_eff = 23200 K, log g = 3.9 and a microturbulence of 10 km s^-1. The abundance differences (us - them) are within ±0.25 dex. Unfortunately, Harrison & Jeffery did not include Ne and S. Their Fe abundance of 6.4 was lowered to 5.7 by their synthesis of lines in IUE spectra <cit.>. Our non-LTE abundances are also in good agreement, except for Mg, with results from <cit.>'s hybrid non-LTE analysis: the differences (Us - Prz) are +0.3, 0.0, -0.3, -0.4 and -0.8 dex for H, C, N, O and Mg, respectively. Our corrections (non-LTE - LTE) are in agreement, to within ±0.15 dex and including for Mg ii, with those found by <cit.>. The model atmosphere used by <cit.> has the same effective temperature as ours but a higher surface gravity (log g = 4.15 rather than our 3.45). <cit.> selected lines from the optical spectrum used by <cit.>.

§ CONCLUDING REMARKS

A possible origin for extreme He stars such as V652 Her and HD 144941 involves, as noted in the Introduction, the merger of two He white dwarfs. The white dwarfs themselves began life as main sequence stars in a binary system which then experienced two mass exchange and mass loss episodes to form the pair of low-mass He white dwarfs.
Emission of gravitational radiation causes the white dwarfs to slowly approach each other. White dwarf binaries which can merge in a Hubble time or less are candidates to account for V652 Her and HD 144941. <cit.> present evolutionary tracks for the merger of two white dwarfs. Three modes of merger are discussed: slow, fast and composite. Since their calculations are most extensive for Z=0.02 white dwarfs, V652 Her is discussed first.

In the slow merger (see also <cit.>), the lower mass white dwarf loses its mass in a few minutes to form a disk around the more massive white dwarf. Accretion from the disk by the surviving white dwarf may last several million years and completes the merger. The composite star is initially a luminous red giant which evolves to become a hot subdwarf before entering the white dwarf cooling track. Along this evolutionary track, the star may appear as an EHe. Zhang & Jeffery's calculations predict that the surface composition of the merged star (i.e., the EHe) is that of the less massive white dwarf; there is no mixing between the accreted material and the accreting star. The recipe for setting the compositions of the He white dwarfs, as described by Zhang & Jeffery, proves insensitive to the less massive star's assumed mass (see Figure 2 from <cit.> for a Z = 0.02 star). The stars are predicted to be N-rich (N/C ∼ 100 and N/O ∼ 10) with C/He ∼ 0.01%. This pattern for He, C, N, O and also Ne matches, to within a factor of about two, the observed composition of V652 Her (Table 4). (Oddly, the predicted mass fraction of ^24Mg is about an order of magnitude higher than observed.) Predicted evolutionary tracks in the (T_eff, log g) plane about the EHe domain are coincident for masses of 0.5-0.7M_⊙, with tracks for higher masses occurring at lower surface gravities (see Figure 15 from <cit.>). (The tracks at T_eff ≤ 40000 K appear insensitive to the initial composition; Z = 0.02 was generally adopted, i.e., slightly supra-solar.) With the non-LTE parameters from Table 1, V652 Her falls on the predicted 0.8M_⊙ track, which is only 0.6 dex lower in log g than the 0.5-0.7M_⊙ tracks. Since <cit.> estimate V652 Her's mass at 0.6±0.2M_⊙, we consider that the observed star fits the predicted evolutionary track for a slow merger of two He white dwarfs. This fit is echoed by the correspondence between predicted and observed compositions. In short, V652 Her may be the result of a slow merger of two low-mass He white dwarfs.

In a fast merger, accretion is considered to be complete in a few minutes. In Zhang & Jeffery's simulations the envelope resembles a hot corona, with carbon produced by He-burning and N destroyed by ^14N(α,γ)^18F(β^+)^18O; at higher temperatures α-capture converts the ^18O to ^22Ne. Burning in a He-shell occurs in flashes. Zhang & Jeffery conclude their description of fast mergers with the remark that `For all the fast merger models the surface composition is rich in ^12C, ^18O and ^22Ne but there is almost no ^14N' for models with initial composition of Z = 0.02. (Of course, it is frustrating that isotopic wavelength shifts for atomic lines with stellar line widths do not permit determinations of the isotopic mix of C, O and Ne.)
For slow mergers with Z = 0.02, the predicted N/C ratio is about 100 and independent of the final mass, as noted above, but for fast mergers the N/C ratio is predicted to run from 0.01 for a final mass of 0.5M_⊙ to 10^-9 for a final mass of 0.8M_⊙ (see Figure 19 of Zhang & Jeffery). Clearly V652 Her is not the result of a fast merger.

In the composite model, as envisaged by Zhang & Jeffery, the first phase of accretion transfers about half of the mass as a fast merger, with the second half transferred via a disk as in the case of a slow merger. For mergers resulting in a total mass of less than about 0.6M_⊙ (apparently, resulting from roughly equal masses for the two white dwarfs), the surface convection zone is absent and, thus, the surface composition resulting from accretion from the disk is that of the accreted white dwarf, i.e., the predicted composition is equivalent to that of a slow merger (see Figure 20 from Zhang & Jeffery). At total masses above 0.6M_⊙ the predicted compositions tend toward those predicted by a fast merger; the N/C ratio is about 0.3 for a mass of 0.8M_⊙, but the prediction for a fast merger is 10^-11. V652 Her may have resulted from a composite merger (effectively, a slow merger) with a total mass less than about 0.6M_⊙.

In brief, V652 Her's composition and location in the (T_eff, log g) plane encourage the idea that the star resulted from the merger of two helium white dwarfs. As modeled by Zhang & Jeffery, the star resulted from a slow or a composite merger with a rather relaxed constraint on the masses of the white dwarfs. A fast merger, as defined by Zhang & Jeffery, leads to a star with N/C less than unity, in sharp conflict with the observed N/C ratio of 50. Significantly, V652 Her's composition and position in the (T_eff, log g) plane can be met by a range of slow or composite mergers.

In contrast to V652 Her, the picture of a merger of He white dwarfs is not so readily applicable to HD 144941. An initial impression of HD 144941's composition (Table 5) is that the C, N and O are consistent with the initial abundances for a star with [Fe/H] of about -1.6 or Z ∼ 0.0004, but such a consistency surely sits uneasily with the messy conversion of a main sequence star to a highly-evolved H-poor star. Moreover, the star's Ne abundance is about 1 dex greater than anticipated for a normal metal-poor star. Thus, we have sought possible solutions in Zhang & Jeffery's paper. It is worthy of note that, with respect to the (T_eff, log g) plane, HD 144941's non-LTE parameters provide an excellent fit to Zhang & Jeffery's evolutionary tracks for merging white dwarfs from stars with Z=0.001. Observed and predicted compositions are more difficult to reconcile. For slow mergers at Z = 0.001, models predict products with a much lower C/He ratio than observed and a very high N/C ratio (see Figure 20 from Zhang & Jeffery), say N/C ∼ 200, but the observed ratio is N/C ∼ 0.3. Fast mergers invert the N/C ratio but are likely to provide a N abundance declining steeply with increasing total mass, such that a match to HD 144941 will be found only for a narrow range of masses. A composite merger may match the observed He, C and N abundances with adjustments to Zhang & Jeffery's recipe for this merger process. Introduction of a period of fast merging is expected to increase the Ne (as ^22Ne) abundance and so possibly match the observed Ne abundance. (An episode of fast merging may also provide abundant ^18O.)
Expansion of the parameter space considered by Zhang & Jeffery is highly desirable.

Our abundance analysis shows that the use of non-LTE effects in the construction of model atmospheres and in the analyses of absorption lines for He-rich warm stars leads to significant changes in the defining atmospheric parameters and in certain elemental abundances, relative to the assumption of LTE for the model atmospheres and the abundance analysis. Judged by our determinations of composition and location in the (T_eff, log g) plane, V652 Her is likely to have resulted from the merger of two helium white dwarfs. HD 144941's location in the (T_eff, log g) plane is similarly consistent with the merger hypothesis. Its composition may also be consistent with formation through a merger, but additional theoretical predictions seem required. These conclusions are based on Zhang & Jeffery's bold and exploratory calculations of the merging process. Perhaps our analyses will not only encourage refinements to the study of white dwarf mergers but also a search for additional EHe stars with the very low C/He ratio that is a characteristic feature of V652 Her and HD 144941.

Quite fortuitously, after submission of the paper, <cit.> reported the discovery of a third EHe star with a low C/He (= 0.0023%) ratio. The star GALEX J184559.8-413827, according to the LTE abundance analysis by <cit.>, has a subsolar (-0.4) metallicity with C, N and O abundances similar to those of V652 Her, i.e., the atmosphere of the new discovery is rich in CNO-cycled material. Our abundance analyses of V652 Her show that a non-LTE reanalysis of J184559.8-413827 will not alter this conclusion.

We thank Simon Jeffery and Mike Montgomery for helpful email exchanges. We also thank Ivan Hubeny for helping us in using the TLUSTY and SYNSPEC codes. We would like to thank the anonymous referee for the constructive comments. DLL acknowledges the support of the Robert A. Welch Foundation of Houston, Texas through grant F-634.

Table 1: Summary of atmospheric parameters

Star       Analysis  T_eff (K)  log g (cgs)  ξ (km s^-1)  v sin i (km s^-1)
V652 Her   non-LTE   25000±300  3.10±0.12    13±2         10-12
V652 Her   LTE       25300±300  3.25±0.12    13±2         10-12
HD 144941  non-LTE   22000±600  3.45±0.15    10           10
HD 144941  LTE       21000±600  3.35±0.15    10           10

Table 2: Measured equivalent widths (W_λ) and NLTE/LTE photospheric line abundances for V652 Her.

Line  χ (eV)  log gf  W_λ (mÅ)  log ϵ(X) NLTE^a / LTE^b
H i λ4101.734  10.199  -0.753  Synth  9.42 / 9.65
H i λ4340.462  10.199  -0.447  Synth  9.47 / 9.64
H i λ4861.323  10.199  -0.020  Synth  9.52 / 9.69
Mean  9.47±0.05 / 9.66±0.03
C ii λ3920.681*  16.333  -0.232  35  7.21 / 6.94
C ii λ4267.001*  18.046  +0.563
C ii λ4267.183*  18.046  +0.716
C ii λ4267.261*  18.046  -0.584  125  7.29 / 6.94
C ii λ6578.050*  14.449  -0.026  69  7.04 / 6.86
C ii λ6582.880*  14.449  -0.327  45  7.08 / 6.90
Mean  7.16±0.12 / 6.91±0.04
C iii λ4647.418*  29.535  +0.070  25  6.86 / 6.88
C iii λ4650.246*  29.535  -0.151  22  6.92 / 7.02
Mean  6.90±0.04 / 6.95±0.10
N ii λ3955.851*  18.466  -0.849  139  8.56
N ii λ3994.997*  18.497  +0.163  308  8.75 / 8.99
N ii λ4227.736*  21.600  -0.061  153  8.64
N ii λ4459.937*  20.646  -1.476  28  8.77 / 8.63
N ii λ4507.560*  20.666  -0.817  74  8.54
N ii λ4564.760*  20.409  -1.589  30  8.73
N ii λ4601.478*  18.466  -0.452  220  8.73 / 8.87
N ii λ4607.153*  18.462  -0.522  206  8.73 / 8.83
N ii λ4613.868*  18.466  -0.665  194  8.79 / 8.86
N ii λ4621.393*  18.466  -0.538  215  8.79 / 8.91
N ii λ4630.539*  18.483  +0.080  316  8.78
N ii λ4643.086*  18.483  -0.371  222  8.55 / 8.82
N ii λ4654.531*  18.497  -1.506  85  8.83
N ii λ4667.208*  18.497  -1.646  65  8.79
N ii λ4674.908*  18.497  -1.553  80  8.84
N ii λ4718.377  27.746  -0.042  33  8.91
N ii λ4774.244*  20.646  -1.280  51  8.78
N ii λ4779.722*  20.646  -0.587  106  8.60
N ii λ4781.190*  20.654  -1.337  40  8.70
N ii λ4788.138*  20.654  -0.363  120  8.49
N ii λ4803.287*  20.666  -0.113  160  8.57
N ii λ4810.299*  20.666  -1.084  54  8.63
N ii λ4987.376*  20.940  -0.584  87  8.53
N ii λ4991.243*  25.491  -0.180  50  8.81
N ii λ4997.224*  25.491  -0.657  28  8.96
N ii λ5002.703*  18.462  -1.022  126  8.65
N ii λ5023.053*  25.507  -0.165  60  8.91
N ii λ5045.099*  18.483  -0.407  245  8.66 / 8.83
N ii λ5073.592*  18.497  -1.550  85  8.91
N ii λ5179.344  27.980  +0.497
N ii λ5179.521  27.746  +0.675  95  8.71
N ii λ5183.200  27.980  -0.090  26  8.94
N ii λ5184.961  27.739  -0.044  26  8.84
N ii λ5452.070*  21.148  -0.881  60  8.74 / 8.66
N ii λ5454.215*  21.153  -0.782  97  8.88 / 8.91
N ii λ5478.086*  21.153  -0.930  40  8.47
N ii λ5480.050*  21.160  -0.711  56  8.45
N ii λ5495.655*  21.160  -0.220  106  8.43
N ii λ5526.234*  25.491  -0.312  45  8.58 / 8.93
N ii λ5530.242*  25.498  +0.113  81  8.48 / 8.90
N ii λ5535.347*  25.507  +0.398
N ii λ5535.383*  25.491  -0.204  135  8.43 / 8.98
N ii λ5540.061*  25.491  -0.557  32  8.65 / 8.98
N ii λ5543.471*  25.498  -0.092  68  8.59 / 8.98
N ii λ5551.922*  25.507  -0.189  63  8.64 / 9.03
N ii λ5676.020*  18.462  -0.367  236  8.77 / 9.06
N ii λ5710.770*  18.483  -0.518  199  8.87 / 8.88
N ii λ5730.660*  18.483  -1.703  51  8.77
N ii λ5747.300*  18.497  -1.091  116  8.77
N ii λ5767.450*  18.497  -1.447  95  8.95
N ii λ6136.890*  23.132  -1.124  23  8.89
N ii λ6150.750*  23.125  -1.086  24  8.87
N ii λ6167.750*  23.142  +0.025  122  8.86
N ii λ6170.160*  23.125  -0.311  73  8.75
N ii λ6173.310*  23.132  -0.126  100  8.82
N ii λ6340.580*  23.246  -0.192  79  8.73
N ii λ6346.860*  23.239  -0.901  37  8.96
N ii λ6379.620*  18.466  -1.191  103  8.60 / 8.81
N ii λ6482.050*  18.497  -0.311  229  8.82 / 9.04
N ii λ6504.610*  23.246  -0.626  55  8.93
Mean  8.70±0.12 / 8.80±0.16
N iii λ4514.850*  35.671  +0.221  46  9.13
N iii λ4634.130*  30.459  -0.086  78  8.52 / 8.81
N iii λ4640.640*  30.463  +0.168  100  8.41 / 8.81
Mean  8.50±0.10 / 8.92±0.18
O ii λ4345.560*  22.979  -0.346  62  7.64 / 7.67
O ii λ4366.895*  22.999  -0.348  60  7.60 / 7.64
O ii λ4414.899*  23.441  +0.172  104  7.53 / 7.64
O ii λ4416.975*  23.419  -0.077  107  7.78 / 7.91
O ii λ4452.378*  23.442  -0.788  23  7.56 / 7.65
O ii λ4590.974*  25.661  +0.350  70  7.79 / 7.68
O ii λ4596.177*  25.661  +0.200  52  7.72 / 7.63
O ii λ4638.856*  22.966  -0.332  64  7.64 / 7.69
O ii λ4649.135*  22.999  +0.308  140  7.64 / 7.74
O ii λ4650.838*  22.966  -0.362  38  7.35 / 7.40
O ii λ4661.632*  22.979  -0.278  62  7.57 / 7.62
O ii λ4676.235*  22.999  -0.394  42  7.44 / 7.50
O ii λ4699.011*  28.510  +0.418
O ii λ4699.218*  26.225  +0.270  36  7.37 / 7.30
O ii λ4705.346*  26.249  +0.477  44  7.43 / 7.40
O ii λ5206.651*  26.561  -0.266  15  7.73 / 7.69
Mean  7.59±0.14 / 7.61±0.15
Ne i λ5852.488*  16.848  -0.490  20  8.51
Ne i λ6143.063*  16.619  -0.100  53  8.13 / 8.62
Ne i λ6163.594*  16.715  -0.620  24  7.98 / 8.72
Ne i λ6266.495*  16.715  -0.370  25  7.75 / 8.50
Ne i λ6334.428*  16.619  -0.320  40  8.23 / 8.69
Ne i λ6382.991*  16.671  -0.240  40  8.13 / 8.62
Ne i λ6402.246*  16.619  +0.330  104  8.09 / 8.72
Ne i λ6506.528*  16.671  -0.030  52  8.07 / 8.58
Ne i λ6598.953*  16.848  -0.360  22  7.97 / 8.48
Ne i λ7032.413*  16.619  -0.260  40  8.16 / 8.67
Mean  8.06±0.14 / 8.61±0.09
Mg ii λ4481.126*  8.864  +0.749
Mg ii λ4481.150*  8.864  -0.553
Mg ii λ4481.325*  8.864  +0.594  211  7.09 / 7.48
Al iii λ4149.913  20.555  +0.620
Al iii λ4149.968  20.555  -0.680
Al iii λ4150.173  20.555  +0.470  103  6.40
Al iii λ4479.885  20.781  +0.900^c
Al iii λ4479.971  20.781  +1.020^c
Al iii λ4480.009  20.781  -0.530^c  75  5.93
Al iii λ4512.565  17.808  +0.410  89  6.24
Al iii λ4528.945  17.818  -0.290
Al iii λ4529.189  17.818  +0.660  146  6.34
Al iii λ5696.604  15.642  +0.230  169  6.67
Al iii λ5722.730  15.642  -0.070  126  6.60
Mean  6.36±0.27
Si ii λ4128.054  9.837  +0.359  30  7.19
Si ii λ4130.872  9.839  -0.783
Si ii λ4130.894  9.839  +0.552  34  7.04
Si ii λ5041.024  10.066  +0.029  35  7.67
Si ii λ5055.984  10.074  +0.523  44  7.30
Mean  7.30±0.27
Si iii λ3796.124*  21.730  +0.407
Si iii λ3796.203*  21.730  -0.703  167  7.18 / 7.04
Si iii λ3806.526*  21.739  +0.679
Si iii λ3806.700*  21.739  -0.071  307  7.60 / 7.69
Si iii λ4567.840*  19.016  +0.068  285  7.40 / 7.85
Si iii λ4574.757*  19.016  -0.409  204  7.34 / 7.57
Si iii λ4716.654*  25.334  +0.491  83  7.46 / 7.15
Si iii λ4813.333*  25.979  +0.708  85  7.04 / 7.11
Si iii λ4819.712*  25.982  +0.937
Si iii λ4819.814*  25.982  -0.354  116  7.01 / 7.24
Si iii λ4828.951*  25.987  +0.937
Si iii λ4829.111*  25.980  -0.354  120  7.03 / 7.16
Si iii λ5739.734*  19.722  -0.096  226  7.36 / 7.76
Mean  7.27±0.21 / 7.40±0.32
Si iv λ4088.862*  24.050  +0.194  185  7.63 / 7.75
Si iv λ4116.104*  24.050  -0.110  120  7.48 / 7.44
Mean  7.56±0.11 / 7.60±0.22
P iii λ4222.198  14.610  +0.210  80  5.42
P iii λ4246.720  14.610  -0.120  65  5.61
Mean  5.52±0.13
S ii λ5032.434  13.672  +0.188  40  7.35
S ii λ5103.332  13.672  -0.457  25  7.76
S ii λ5212.620  15.068  +0.316  25  7.21 / 7.33
S ii λ5320.723  15.068  +0.431  27  7.12 / 7.26
S ii λ5428.655*  13.584  -0.177  20  7.57 / 7.37
S ii λ5432.797*  13.617  +0.205  35  7.46 / 7.27
S ii λ5509.705*  13.617  -0.175  20  7.56 / 7.37
S ii λ5564.958  13.672  -0.336  33  7.80
S ii λ5606.151  13.733  +0.124  25  7.22
S ii λ5639.977  14.067  +0.258
S ii λ5640.346  13.701  -0.036  45  7.23
S ii λ5660.001  13.677  -0.222  17  7.37
Mean  7.38±0.21 / 7.39±0.20
S iii λ4253.589*  18.244  +0.107  174  7.46 / 7.18
S iii λ4284.979*  18.193  -0.233  103  7.28 / 6.90
S iii λ4332.692*  18.188  -0.564  66  7.26 / 6.88
S iii λ4354.566*  18.311  -0.959  53  7.49 / 7.15
S iii λ4361.527*  18.244  -0.606  68  7.32 / 6.95
S iii λ4364.747*  18.318  -0.805  34  7.07 / 6.74
S iii λ4499.245*  18.294  -1.640  16  7.52 / 7.20
Mean  7.34±0.16 / 7.00±0.18
Ar ii λ4806.021  16.644  +0.210  32  6.91
Fe iii λ4137.764  20.613  +0.630^c  42  6.74 / 7.05
Fe iii λ4164.731  20.634  +0.923^c  83  6.91 / 7.20
Fe iii λ4296.851  22.860  +0.418^c
Fe iii λ4296.851  22.860  +0.879^c  30  6.71 / 7.02
Fe iii λ4310.355  22.869  +0.189^c
Fe iii λ4310.355  22.869  +1.156^c  45  6.76 / 7.05
Fe iii λ4395.755  8.256  -2.595^c  39  7.04 / 7.34
Fe iii λ4419.596  8.241  -2.218^c  81  7.15 / 7.38
Fe iii λ4431.019  8.248  -2.572^c  38  7.02 / 7.30
Fe iii λ5063.421  8.648  -2.950^c  18  7.14 / 7.42
Fe iii λ5086.701  8.659  -2.590^c  40  7.17 / 7.46
Fe iii λ5127.387  8.659  -2.218^c  100  7.66
Fe iii λ5156.111  8.641  -2.018^c  100  7.31 / 7.46
Fe iii λ5235.658  18.266  -0.107^c  33  7.18
Fe iii λ5243.306  18.270  +0.405^c  71  7.13
Fe iii λ5276.476  18.264  -0.001^c  36  7.13
Fe iii λ5282.297  18.266  +0.108^c  43  7.12
Fe iii λ5299.926  18.261  -0.166^c  31  7.21
Fe iii λ5302.602  18.262  -0.120^c  31  7.17
Fe iii λ5460.799  14.178  -1.519^c  30  6.88 / 7.59
Fe iii λ5485.517  14.176  -1.469^c  30  6.83 / 7.54
Fe iii λ5573.424  14.175  -1.390^c  42  6.92 / 7.64
Fe iii λ5833.938  18.509  +0.616^c  66  6.96
Fe iii λ5891.904  18.509  +0.498^c  54  6.96
Fe iii λ5929.685  18.509  +0.351^c  48  7.04
Mean  7.05±0.16 / 7.30±0.22

a (T_eff, log g, ξ)=(25000, 3.10, 13.0)
b (T_eff, log g, ξ)=(25300, 3.25, 13.0)
c Kurucz gf-value
* Lines covered by the adopted model atom, including the extended model atom with more levels, for the ions providing the ionization balance and some other ions

Table 3: Measured equivalent widths (W_λ) and NLTE/LTE photospheric line abundances for HD 144941.

Line  χ (eV)  log gf  W_λ (mÅ)  log ϵ(X) NLTE^a / LTE^b
H i λ4101.734  10.199  -0.753  Synth  10.47 / 10.41
H i λ4340.462  10.199  -0.447  Synth  10.32 / 10.32
H i λ4861.323  10.199  -0.020  Synth  10.24 / 10.19
Mean  10.37±0.09 / 10.35±0.05
C ii λ6578.050*  14.449  -0.026  88  6.88 / 6.64
C ii λ6582.880*  14.449  -0.327  52  6.84 / 6.58
Mean  6.86±0.03 / 6.61±0.04
N ii λ3994.997*  18.497  +0.163  74  6.68 / 6.98
N ii λ4447.030*  20.409  +0.221  22  6.50 / 6.72
N ii λ4601.478*  18.466  -0.452  16  6.49 / 6.75
N ii λ4607.153*  18.462  -0.522  15  6.54 / 6.80
N ii λ4621.393*  18.466  -0.538  15  6.54 / 6.81
N ii λ4630.539*  18.483  +0.080  35  6.36 / 6.64
N ii λ4994.360*  25.498  -0.164
N ii λ4994.370*  20.940  -0.098  8  6.55 / 6.75
N ii λ5001.474*  20.654  +0.435  24  6.47 / 6.69
N ii λ5025.659*  20.666  -0.558  8  6.94 / 7.15
N ii λ5045.099*  18.483  -0.407  16  6.49 / 6.76
N ii λ5666.630*  18.466  -0.080  12  6.10 / 6.37
N ii λ5676.020*  18.462  -0.367  8  6.19 / 6.47
N ii λ5679.560*  18.483  +0.250  24  6.11 / 6.39
N ii λ6482.050*  18.497  -0.311  7  6.17 / 6.45
Mean  6.44±0.23 / 6.70±0.22
O ii λ4414.899*  23.441  +0.172  19  6.94 / 6.87
O ii λ4416.975*  23.419  -0.077  16  7.10 / 7.02
O ii λ4590.974*  25.661  +0.350  14  7.24 / 7.10
O ii λ4596.177*  25.661  +0.200  11  7.26 / 7.12
O ii λ4638.856*  22.966  -0.332  8  6.94 / 6.86
O ii λ4649.135*  22.999  +0.308  37  7.15 / 7.08
O ii λ4650.838*  22.966  -0.362  7  6.91 / 6.83
Mean  7.08±0.15 / 6.98±0.13
Ne i λ6143.063*  16.619  -0.100  12  7.26 / 7.49
Ne i λ6402.246*  16.619  +0.330  28  7.26 / 7.50
Ne i λ6506.528*  16.671  -0.030  10  7.14 / 7.38
Mean  7.22±0.07 / 7.46±0.07
Mg ii λ4481.126*  8.864  +0.749
Mg ii λ4481.150*  8.864  -0.553
Mg ii λ4481.325*  8.864  +0.594  51  5.77 / 5.88
Al iii λ4479.885  20.781  +0.900^c
Al iii λ4479.971  20.781  +1.020^c
Al iii λ4480.009  20.781  -0.530^c  20  5.11
Al iii λ4512.565  17.808  +0.410  21  5.22
Al iii λ4528.945  17.818  -0.290
Al iii λ4529.189  17.818  +0.660  33  5.17
Al iii λ5696.604  15.642  +0.230  21  4.96
Al iii λ5722.730  15.642  -0.070  11  4.95
Mean  5.08±0.12
Si ii λ5041.024  10.066  +0.029  8  6.11
Si ii λ5055.984  10.074  +0.523  15  5.90
Mean  6.01±0.15
Si II ... 6.01±0.15
Si III ... 5.99±0.15 / 5.95±0.11
S II (λ5432) ... 6.08 / 6.09
S III ... 6.31±0.01 / 5.91±0.28
Fe III ... ≤6.60 / ≤6.21]
^a (T_eff, log g, ξ) = (22000, 3.45, 10.0); ^b (T_eff, log g, ξ) = (21000, 3.35, 10.0); ^c Kurucz gf-value; * lines covered by the adopted model atom, including the extended model atom with more levels, for the ions providing the ionization balance and some other ions.

Table: Summary of V652 Her's photospheric abundances (log ε; a dash marks a value not computed).
Element | non-LTE | LTE | Sun^a
H       | 9.5     | 9.7  | 12.0
He      | 11.5    | 11.5 | 10.9
C       | 7.0     | 6.9  | 8.4
N       | 8.7     | 8.9  | 7.8
O       | 7.6     | 7.6  | 8.7
Ne      | 8.1     | 8.6  | 7.9
Mg      | 7.1     | 7.5  | 7.6
Al      | —       | 6.4  | 6.5
Si      | 7.4     | 7.4  | 7.5
P       | —       | 5.5  | 5.4
S       | 7.4     | 7.2  | 7.1
Ar      | —       | 6.9  | 6.4
Fe      | 7.1     | 7.3  | 7.5
^a Solar abundances from <cit.>.

Table: Summary of HD 144941's photospheric abundances (log ε).
Element | non-LTE | LTE  | Sun^a
H       | 10.4    | 10.4 | 12.0
He      | 11.5    | 11.5 | 10.9
C       | 6.9     | 6.6  | 8.4
N       | 6.4     | 6.7  | 7.8
O       | 7.1     | 7.0  | 8.7
Ne      | 7.2     | 7.5  | 7.9
Mg      | 5.8     | 5.9  | 7.6
Al      | —       | 5.1  | 6.5
Si      | 6.0     | 6.0  | 7.5
S       | 6.2     | 6.0  | 7.1
Fe      | ≤6.6    | ≤6.2 | 7.5
^a Solar abundances from <cit.>.

References

Asplund, M., Grevesse, N., Sauval, A. J., & Scott, P. 2009, ARA&A, 47, 481
Barnard, A. J., Cooper, J., & Smith, E. W. 1974, JQSRT, 14, 1025
Harrison, P. M., & Jeffery, C. S. 1997, A&A, 323, 177
Hill, P. W., Kilkenny, D., Schoenberner, D., & Walker, H. J. 1981, MNRAS, 197, 81
Hubeny, I. 1988, Computer Physics Communications, 52, 103
Hubeny, I., & Lanz, T. 1995, ApJ, 439, 875
Hubeny, I., & Lanz, T. 2011, Synspec: General Spectrum Synthesis Program, Astrophysics Source Code Library
Hubeny, I., & Lanz, T. 2017, arXiv:1706.01859
Hubeny, I., Lanz, T., & Jeffery, C. S. 1994, CCP7 Newsletter on Analysis of Astronomical Spectra, 30
Jeffery, C. S. 2008, in ASP Conf. Ser. 391, Hydrogen-Deficient Stars, ed. A. Werner & T. Rauch, 53
Jeffery, C. S. 2017, MNRAS, 470, 3557
Jeffery, C. S., & Harrison, P. M. 1997, A&A, 323, 393
Jeffery, C. S., Hill, P. W., & Heber, U. 1999, A&A, 346, 491
Jeffery, C. S., Woolf, V. M., & Pollacco, D. L. 2001, A&A, 376, 497
Landolt, A. U. 1975, ApJ, 196, 789
Moore, C. E. 1972, A Multiplet Table of Astrophysical Interest, Pt. 1: Table of Multiplets; Pt. 2: Finding List of All Lines in the Table of Multiplets
Moore, C. E. 1993, Tables of Spectra of Hydrogen, Carbon, Nitrogen, and Oxygen Atoms and Ions, ed. J. W. Gallagher, CRC Series in Evaluated Data in Atomic Physics (CRC Press)
Pandey, G., Kameswara Rao, N., Jeffery, C. S., & Lambert, D. L. 2014, ApJ, 793, 76
Pandey, G., Kameswara Rao, N., Lambert, D. L., Jeffery, C. S., & Asplund, M. 2001, MNRAS, 324, 937
Pandey, G., & Lambert, D. L. 2011, ApJ, 727, 122
Przybilla, N., Butler, K., Heber, U., & Jeffery, C. S. 2005, A&A, 443, L25
Przybilla, N., Nieva, M. F., Heber, U., & Jeffery, C. S. 2006, Baltic Astronomy, 15, 163
Saio, H., & Jeffery, C. S. 2000, MNRAS, 313, 671
Shamey, L. J. 1969, PhD thesis, University of Colorado at Boulder
Tull, R. G., MacQueen, P. J., Sneden, C., & Lambert, D. L. 1995, PASP, 107, 251
Vidal, C. R., Cooper, J., & Smith, E. W. 1973, ApJS, 25, 37
Zhang, X., & Jeffery, C. S. 2012, MNRAS, 419, 452
Properties of the redback millisecond pulsar binary 3FGL J0212.1+5320
T. Shahbaz, M. Linares, R. P. Breton
================================================================================

Linares et al. (2016) obtained quasi-simultaneous g', r' and i'-band light curves and an absorption-line radial velocity curve of the secondary star in the redback system 3FGL J0212.1+5320. The light curves showed two maxima and minima, primarily due to the secondary star's ellipsoidal modulation, but with unequal maxima and minima. We fit these light curves and radial velocities with our X-ray binary model, including either a dark solar-type star spot or a hot spot due to off-centre heating from an intrabinary shock, to account for the unequal maxima. Both models give a radial velocity semi-amplitude and rotational broadening that agree with the observations. The observed secondary star's effective temperature is best matched by the value obtained using the hot spot model, which gives a neutron star and secondary star mass of M_1 = 1.85^+0.32_-0.26 M_⊙ and M_2 = 0.50^+0.22_-0.19 M_⊙, respectively.

binaries: close – stars: fundamental parameters – stars: individual: 3FGL J0212.1+5320 – stars: neutron – X-rays: binaries

§ INTRODUCTION

The progenitors of binary millisecond pulsars (MSPs) are believed to be recycled dead pulsars in low-mass X-ray binaries (LMXBs). According to this scenario, the neutron star in the LMXB accretes material and angular momentum from its late-type companion star and is thereby spun up over billions of years to ultimately evolve into an MSP <cit.>. Compact binary MSPs with orbital periods less than 1 d are commonly classified as either "black widows" or "redbacks", depending on the mass of the companion star, M_2. Black widows have relatively low-mass, degenerate companion stars (0.02 M_⊙ ≤ M_2 ≤ 0.05 M_⊙), whereas redbacks have relatively more massive, non-degenerate companions (0.2 M_⊙ ≤ M_2 ≤ 0.4 M_⊙) <cit.>.

The evolutionary link between redback and black widow MSPs is still uncertain. According to <cit.>, black widows and redbacks are two distinct populations of MSPs with different evaporation efficiencies. On the other hand, <cit.> argue that redbacks with compact orbits evolve to black widows, while those with longer orbital periods evolve to MSP-He white dwarf systems, and that black widows descend from redbacks but not all redbacks become black widows.

The Large Area Telescope (LAT) on the Fermi Gamma-Ray Space Telescope has been successful in uncovering binary MSPs because they are γ-ray emitters similar to young pulsars <cit.>. Subsequently, thanks to targeted radio and X-ray surveys where Fermi-LAT has localized sources, more than 30 black widow and 14 redback systems have been discovered <cit.>. To date there are three redback systems (PSR J1023+0038, IGR J18245-2452 and XSS J12270-4859) that transition between accretion-powered LMXB states and rotation-powered radio pulsar states; this clearly indicates the close relationship between LMXBs and radio MSPs and provides further support for the recycling scenario <cit.>.

The optical light curves of black widow and redback binaries show large-amplitude variability due to the irradiated ellipsoidal modulation of the near-Roche-filling secondary star.
Compared to the black widows, the redbacks, with their larger secondary stars and closer distances, are relatively bright in the optical, allowing for detailed photometric and spectroscopic observations <cit.>.

In the last few years, dynamical photometric and spectroscopic studies of binary MSPs have largely proven that the neutron star masses in these systems are generally heavier than the 1.4 M_⊙ canonical value, as expected theoretically for the binary evolution of MSPs <cit.>.

In this paper we present the results of modelling the optical light and radial velocity curves of the redback candidate 3FGL J0212.1+5320 discovered by <cit.> and <cit.>. First we briefly describe the X-ray binary model used, the fitting procedure and the results, where we determine the binary system masses. Finally we discuss the impact of these results.

§ 3FGL J0212.1+5320

Recently, both <cit.> and <cit.> presented the discovery of a variable optical counterpart to the unidentified γ-ray source 3FGL J0212.1+5320, and argued that it is a binary "redback" MSP candidate. <cit.> obtained quasi-simultaneous g', r' and i'-band light curves as well as low- and high-resolution spectroscopy. The optical light curves obtained by <cit.> and <cit.> both show two maxima and minima, primarily due to the secondary star's ellipsoidal modulation. However, the light curve obtained by <cit.> shows unequal maxima, which is not seen in the light curve obtained by <cit.> owing to its very poor orbital phase coverage. From the combined photometry and radial velocities, <cit.> determined an orbital period of 0.86955(15) d; the same orbital period was obtained by <cit.> from their R and g'-band light curves.

From high-resolution spectra taken between orbital phases 0.17 and 0.22, <cit.> found the secondary star to have a spectral type of F6±2. They also found no notable changes in spectral type or colour across the binary orbit, which is not surprising given the long orbital period and hence weak X-ray heating effects. They determined a radial velocity curve using the Hα absorption line and, from a sinusoidal fit to the curve, obtained a radial velocity semi-amplitude of K_2 = 214.1±5.0 km s^-1. Finally, using high-resolution spectroscopy they estimated the projected rotational velocity of the secondary star to be v_rot sin i = 73.2±1.6 km s^-1.

§ THE X-RAY BINARY MODEL

We use the X-ray binary light curve model xrbcurve described in <cit.>, which has successfully been used to model the light curves and radial velocity curves of neutron star and black hole X-ray binaries <cit.>, to fit and interpret the photometric light curves and the Hα absorption-line radial velocity curve of 3FGL J0212.1+5320 presented in <cit.>. Briefly, the model consists of a binary system in which the primary is a point-like compact object with mass M_1 and the secondary is a Roche-lobe filling star with mass M_2, assumed to be in a circular orbit and in synchronous rotation. The binary mass ratio q is defined as the ratio M_2/M_1. The binary geometry is determined by the binary masses, the orbital inclination i, and the Roche-lobe filling factor f of the secondary star, defined as the ratio of the distance from the star's centre of mass to its surface towards the inner Lagrangian point, to the distance from the centre of mass to the inner Lagrangian point.

§.§ The equivalent volume radius

When the secondary stars in interacting binaries, such as cataclysmic variables or X-ray binaries, are tidally locked and in synchronous rotation, then for a given orbital period the width of the rotationally broadened absorption-line profile arising from the star scales with the size of its Roche lobe.
One can show that the star's rotational broadening v_rot sin i and equivalent volume radius <cit.> r_eq/a (a is the binary separation), defined as the radius of a sphere whose volume is the same as the volume of the secondary star, are related through the expression v_rot sin i / K_2 = (1+q) r_eq(q)/a. For a star that fully fills its Roche lobe, <cit.> numerically calculated the Roche-lobe volume for different mass ratios and determined an analytical expression for r_eq/a as a function of q (normally referred to as Eggleton's formula). However, for stars that do not fill their Roche lobe one cannot use Eggleton's formula to determine r_eq/a; instead one has to calculate r_eq/a numerically for a given binary (q, f) configuration. The star's rotational broadening is then given by v_rot sin i / K_2 = (1+q) r_eq(f,q)/a.

Using our X-ray binary model we numerically determine r_eq/a as a function of f and q, assuming synchronous rotation. For a given (q, f) configuration we determine the binary Roche potential and thus the star's Roche lobe. We sample the star's surface with 18340 quadrilaterals of approximately equal area and then perform a numerical integration to calculate its volume and hence equivalent radius. Fig. <ref> shows a contour plot of the r_eq/a values for different q and f values; in Fig. <ref> we show the results for f in the range 0.5 to 1.0 and q in the range 0.1 to 1.0. The model with f = 1.0 is equivalent to Eggleton's relation, and we find our model and Eggleton's relation are consistent to within 0.1 per cent.

§.§ Light curve

The secondary star's effective temperature, gravity-darkening exponent and reddening, given by T_2, β and E(B-V), respectively, determine the observed light arising from the secondary star. The model includes the effects of heating of the secondary star by a point source at the compact object. The irradiating flux F_X is thermalised in the secondary star's photosphere and re-radiated locally at a higher effective temperature. For each point on the star's surface, the effective temperature is calculated by combining the intrinsic and incident fluxes. We assume the "deep heating" approximation, in which the irradiation does not affect the temperature structure of the atmosphere; each element then radiates as predicted by a model atmosphere for a single star. The scale of the system is set by the distance to the source in parsecs, the orbital period and the radial velocity semi-amplitude of the secondary star (D_pc, P_orb and K_2, respectively). We use NextGen model-atmosphere fluxes <cit.> to determine the intensity distribution on the secondary star, and a quadratic limb-darkening law with coefficients taken from <cit.> to correct the intensity for limb darkening.

§.§ Radial velocity curve

The optical light curves of binary MSPs show the secondary star's ellipsoidal modulation combined with the effects of heating. The spectrum shows intrinsic absorption lines arising from the secondary star as well as irradiation-induced absorption Balmer lines. Both radial velocity curves are distorted, because the effective centres of the regions forming these lines are shifted away from the star's centre of mass. The model light curve is determined by integrating the observed flux from each element of area on the star in the observer's line of sight.
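Returning to the equivalent-volume-radius relation of the previous subsection, the following is a minimal numerical sketch (ours, not part of xrbcurve; the function names are our own) of the f = 1 limiting case, where Eggleton's formula applies, and of the corresponding rotational broadening v_rot sin i = K_2 (1+q) r_eq/a.

```python
import math

def eggleton_radius(q: float) -> float:
    """Eggleton (1983) volume-equivalent Roche-lobe radius r_L/a
    for mass ratio q = M_2/M_1; accurate to ~1% for all q."""
    q23 = q ** (2.0 / 3.0)
    return 0.49 * q23 / (0.6 * q23 + math.log(1.0 + q ** (1.0 / 3.0)))

def vrot_sini(k2: float, q: float, req_over_a: float) -> float:
    """Rotational broadening (km/s) of a tidally locked secondary:
    v_rot sin i = K_2 (1 + q) r_eq/a."""
    return k2 * (1.0 + q) * req_over_a

# Fully Roche-lobe filling star (f = 1) with the observed K_2:
q, k2 = 0.26, 214.1  # K_2 in km/s
print(vrot_sini(k2, q, eggleton_radius(q)))  # ~73 km/s
```

For f < 1 the model replaces the Eggleton step with the numerical Roche-lobe integration over 18340 surface elements described above; note that for q = 0.26 and the observed K_2, even the f = 1 sketch returns ≈73 km s^-1, close to the measured 73.2±1.6 km s^-1.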
For the radial velocity curves we specify the strength of the absorption or emission lines over the secondary star's surface, and integrate to obtain the corresponding line-of-sight radial velocity. We set the absorption-line strength, given by its equivalent width (EW), according to the effective temperature of each element on the star. However, as mentioned in <cit.>, we must also consider the consequences of external heating. The vertical temperature gradient in an internally and externally heated atmosphere produces weaker absorption lines than expected from the effective temperature alone. Since there are no adequate models for treating the effects of external heating in atmospheres, we introduce the factor F_AV, which represents the fraction by which the external radiation flux may exceed the unperturbed flux. A value of F_AV = 1.10 means that if the external radiation flux is greater than 10 per cent of the unperturbed flux, we set the EW of that element to zero; otherwise, the absorption-line strength takes the EW corresponding to the effective temperature of the element.

The secondary stars in binary MSPs and X-ray binaries are typically late-type stars, later than F, whose spectra contain metal absorption lines, such as those of magnesium, calcium and iron, as well as neutral hydrogen. The strongest metal absorption lines in the blue part of the optical spectrum are Ca I 4226 Å, Fe I 4383 Å and the Mg I b triplet 5167, 5172, 5183 Å. The EW versus temperature relation we use in our model is determined from observed stars in the O4 to K2 spectral-type range, obtained from the VLT-UVES Paranal Observatory Project database <cit.>. In Fig. <ref> we show the EW versus T_eff relation for the Hα absorption line and, for comparison, the Mg I b triplet. Given that we do not really understand irradiation, the EW relation combined with F_AV allows the model to fit the observed line-strength distribution <cit.>.

§.§ Additional sources of light

Some of the optical light curves of binary MSPs are asymmetric, with unequal maxima <cit.>. Various models have been proposed to explain this, such as off-centre heating from an intrabinary shock <cit.>, a hot spot <cit.> or solar-type star spots <cit.>.

The collision between the pulsar wind and the mass outflow from the secondary star can produce an intrabinary shock, so the high-energy radiation can be mediated by the intrabinary shock, producing off-centre heating <cit.>. Indeed, observed changes in colour and spectral type across the face of the secondary star that do not match direct heating patterns suggest that the heating is not direct. X-ray observations of PSR J2215+5135 show an X-ray minimum near orbital phase zero (inferior conjunction of the secondary star), which has been interpreted as variable obscuration by an intrabinary shock around the secondary star <cit.>. In PSR J2215+5135, an intrabinary shock has also been suggested to explain the significant phase shift of the optical maximum with respect to the radio-pulse ephemeris, as well as the asymmetric optical light curves <cit.>.

In some MSPs the transition from an accretion-powered to a rotation-powered pulsar has been observed, with the disappearance of features associated with an accretion disc, for example in PSR J1023+0038 <cit.>. However, as accretion onto the pulsar ceases, it is possible that a quiescent disc could remain between the companion and the light cylinder of the pulsar <cit.>, which would contribute to the overall spectral energy distribution.
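Before turning to the spot models, the line-strength prescription described at the start of this section can be summarized in a few lines of code. This is a sketch under our own naming (ew_of_teff stands in for the VLT-UVES-derived EW(T_eff) relation), not the actual xrbcurve implementation.

```python
def element_ew(t_eff: float, f_ext: float, f_unpert: float,
               f_av: float, ew_of_teff) -> float:
    """Equivalent width assigned to one surface element.

    f_av = 1.10 means: if the external (irradiating) flux exceeds
    10 per cent of the unperturbed flux, the line is assumed to be
    filled in (EW = 0); otherwise the EW follows the element's
    effective temperature via the supplied EW(T_eff) relation.
    """
    if f_ext > (f_av - 1.0) * f_unpert:
        return 0.0
    return ew_of_teff(t_eff)
```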
In PSR J2215+5135, the difference between the observed colour temperature and the model temperature has been interpreted as an additional source of light from a hot accretion disc <cit.>. A large hot star spot could also produce an asymmetric temperature distribution: it is possible that the intrinsic magnetic field associated with the secondary star channels the pulsar wind and causes enhanced local heating, giving an apparent hot spot on the star <cit.>. Dark star spots due to strong magnetic activity could also be present on the surface of the secondary star <cit.>; indeed, Roche tomography has revealed observational evidence of star spots in binaries <cit.>.

Therefore, to account for the possible sources of extra light we include an additional flux component in each wave band, and to simulate the effects of a dark star spot or off-centre heating (a hot spot), which produce an asymmetric temperature distribution on the star, we add a Gaussian temperature perturbation to the elements of area on the star. The normalization is positive for a hot spot and negative for a dark spot. The Gaussian position and width are constrained in latitude, and the perturbation is extended uniformly in longitude across the star, roughly simulating a spot.

§ XRBCURVE FITTING

Here we simultaneously fit the g', r', i'-band light curves and the absorption-line radial velocity curve of 3FGL J0212.1+5320 presented in <cit.> with xrbcurve to determine the binary masses. The individual g' (380 data points), r' (629 data points) and i' (382 data points) band data points were phase-folded according to the orbital ephemeris and averaged into 39, 32 and 39 orbital phase bins, respectively. Similarly, the 131 radial velocity points were averaged into 30 phase bins.

The optical light curves of 3FGL J0212.1+5320 clearly show unequal maxima. The fact that one sees an increase or decrease in light at phase 0.25 or 0.75, respectively, suggests that the extra source of light arises from the surface of the secondary star, in the form of a dark star spot or a hot spot. To model these light curves with xrbcurve we assume either a hot or a dark spot model. To reduce the number of free parameters in the model, we fix various model parameters. The value of the gravity-darkening exponent depends on the mean temperature and gravity of the secondary star <cit.>; the observed F6 late-type spectral type of the secondary star allows us to fix the gravity-darkening exponent to 0.072 <cit.>. From the observed hydrogen column density we find the reddening to be E(B-V) = 0.251±0.054 <cit.>.

The binary model parameters are f, q, cos i, T_2, log F_X, D_pc, K_2, F_AV and E(B-V). There are also the additional sources of light in each filter, E_g', E_r' and E_i'; the light-curve phase shifts δ_g', δ_r' and δ_i'; the radial-velocity phase shift and systemic velocity, δ_AV and γ_AV; and finally the Gaussian spot parameters: position C_spot, width W_spot, normalization T_spot and extent E_spot. Given q, cos i and K_2 we calculate M_1 and M_2. Given q and f we determine R_eq, which when combined with q and K_2 gives v_rot sin i (see Section <ref>). Note, however, that the observed value of v_rot sin i depends on orbital phase, because it reflects the size of the tidally distorted star; the value of R_eq we determine represents the mean radius of the star, and so the predicted value of v_rot sin i is a mean value. To optimize the fitting procedure we first fit the photometric light curves and the absorption-line radial velocity curve with xrbcurve assuming a dark spot on the secondary star.
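A minimal sketch of the Gaussian spot prescription described above, in our own parametrization (the actual xrbcurve parametrization may differ in detail): the temperature perturbation is Gaussian in latitude, uniform in longitude, with a positive amplitude for a hot spot and a negative one for a dark spot.

```python
import numpy as np

def spot_temperature(t_surface, latitude_deg, c_spot, w_spot, t_spot):
    """Perturb surface-element temperatures with a band-like spot.

    t_surface    : array of unperturbed element temperatures (K)
    latitude_deg : array of element latitudes (deg)
    c_spot       : spot central latitude (deg)
    w_spot       : Gaussian width in latitude (deg)
    t_spot       : amplitude (K); > 0 for a hot spot, < 0 for a dark spot
    """
    gauss = np.exp(-0.5 * ((latitude_deg - c_spot) / w_spot) ** 2)
    return t_surface + t_spot * gauss
```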
We use the differential evolution algorithm described in <cit.> with a crossover probability of 0.9, a mutation scaling factor of 0.9, and a maximum of 2000 generations, which corresponds to ∼100,000 evaluations of the model. Given that there are several different types of data with different numbers of data points, to optimize the fitting procedure we assigned relative weights to the different data sets, determined after an initial fit: we rescaled the error bars on each data set so that the total reduced χ² of the fit was ∼1 for each data set separately.

The simultaneous light and radial velocity curve fitting is a multi-dimensional nonlinear optimization problem. In order to obtain a robust error analysis we used a Markov chain Monte Carlo (MCMC) method combined with a differential evolution fitting algorithm, differential evolution adaptive metropolis (dream) <cit.>, which works well in problems with a high number of dimensions. dream simultaneously runs multiple chains and uses differential evolution <cit.> as the genetic algorithm for population evolution, with a Metropolis selection rule to decide whether parents are replaced by candidate points or not. In order to ensure that the MCMC has converged we performed the Gelman-Rubin test, which analyzes the difference between multiple Markov chains, and we also visually inspected the traces of the parameters. Convergence is assessed by comparing the estimated between-chain and within-chain variances for each model parameter <cit.>.

We use a Bayesian framework to make statistical inferences about our binary model parameters. Our fitting makes use of flat prior probability distributions for all the model parameters except E(B-V), where we use a Gaussian prior [see <cit.> for an introduction to Bayesian analysis]. We used 20 individual chains to explore the parameter space and followed each for 40,000 iterations. We rejected the first 500 iterations ("burn-in") and only included every 10th point ("thinning"), based on the observed autocorrelation lengths of the individual parameters.

§ RESULTS

We simultaneously fit the g', r' and i'-band light curves and the absorption-line radial velocity curve of 3FGL J0212.1+5320 with xrbcurve, with a constant source of light in each band to account for a possible disc and/or intrabinary shock, and two model scenarios, (a) a dark star spot or (b) a hot spot due to off-centre heating, to account for the unequal maxima. Our preliminary fits show that F_AV is not constrained in either the dark or hot spot model, which means that the model requires all of the inner face of the star to contribute to the radial velocity curve, so we fix it to a correspondingly large value. Furthermore, we find that the phase shifts for each filter are the same, so in our final fits we set δ_i' = δ_r' = δ_g'. Also, for the hot spot model we find that no heating is required, so we fix F_X to zero. The dark and hot spot models then have 18 and 17 free parameters, respectively.

For our final fits, the dark and hot spot model best fits have χ² values of 182 and 193, with 122 and 123 degrees of freedom, respectively. Using the F-test to test the null hypothesis that the χ² values of the two models are the same, we find that we can reject the null hypothesis at the 1.1 per cent significance level; hence, statistically, the dark spot model is better than the hot spot model only at the 98.9 per cent (∼2.29σ) confidence level. The best-fit models for the hot and dark spot are shown in Figures <ref> and <ref>, respectively.
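For reference, a minimal implementation of the Gelman-Rubin diagnostic used in the convergence check (a sketch; the dream package supplies its own diagnostics): values of the statistic close to 1 indicate that the between-chain and within-chain variances agree.

```python
import numpy as np

def gelman_rubin(chains: np.ndarray) -> float:
    """Potential scale reduction factor R-hat for a single parameter.

    chains: array of shape (m, n) holding m chains of n post burn-in
    samples each.
    """
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    w = chains.var(axis=1, ddof=1).mean()   # mean within-chain variance
    b = n * chain_means.var(ddof=1)         # between-chain variance
    var_hat = (n - 1) / n * w + b / n       # pooled variance estimate
    return np.sqrt(var_hat / w)

# e.g. gelman_rubin(samples[:, 500::10]) for 20 chains after
# removing burn-in and thinning by a factor of 10
```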
The plots show the best fit to the light and radial velocity curves as well as the light curve of the extra source of light in each band. We also show the maps of the continuum flux and Hα absorption-line strength on the secondary star at different orbital phases. In Table <ref> we give the mean and 1σ limits of the posterior probability distribution function for each parameter of both models. The results of the MCMC for the hot and dark spot models are shown in Figs. <ref> and <ref>, respectively. As one can see, most of the two-dimensional probability distribution functions are relatively well determined. Changing the gravity-darkening coefficient only changes the χ² by 3, resulting in a change in M_1 of 10 per cent.

The fits with the dark and hot spot models give similar values for f, i and K_2 (see Table <ref>); however, the values for q and T_2 differ. The dark spot model best fit suggests a system at an inclination angle of 65° and mass ratio q = 0.18 with some effects of X-ray heating: 24 per cent of the secondary star has a change in temperature more than 10 degrees above the non-irradiated level. The dark spot has a minimum temperature of T_spot = 332 K and covers 2 per cent of the star (see Fig. <ref>). In contrast, the best fit with a hot spot suggests a system at a slightly higher inclination angle of 69° and a less extreme mass ratio q = 0.28 with no heating. The hot spot has a temperature of T_spot = 538 K and covers 3 per cent of the star.

For the dark spot model we find that the extra source of light contributes 13, 11 and 8 per cent of the observed flux in the g', r' and i' bands, respectively. Similarly, for the hot spot model we find that the extra source of light contributes 31, 25 and 23 per cent of the observed flux in the g', r' and i' bands, respectively. In Fig. <ref> we show the spectrum of the extra flux component for the dark and hot spot models, which can be represented by a spectral index (F_ν ∝ ν^α) of 1.1±0.3 and 0.8±0.2, respectively.

In principle the shape of the ellipsoidal light curve can provide important clues to the binary inclination angle, because the peak-to-peak amplitude and the difference between the two minima depend primarily on the binary inclination angle and mass ratio. Irradiation not only fills in the minimum at phase 0.5, it also produces a larger-amplitude modulation. However, for a Roche-lobe underfilling star this dependence on mass ratio weakens, because the star is not as tidally distorted as a fully Roche-lobe filling star; for our values of f, the dependence of the light-curve amplitude on mass ratio still exists <cit.>. For a similar inclination angle, a system with a less extreme mass ratio and no heating will produce a light curve similar to that of a system with a more extreme mass ratio and with heating. Our hot and dark spot model fits thus suggest either a non-irradiated system at a less extreme mass ratio or a weakly irradiated system at a more extreme mass ratio, respectively. The lack of heating of the secondary star by the pulsar wind and/or radiation required by the hot spot model is not surprising, given the wide orbit. The only other MSP with a similarly wide orbit and a similar lack of irradiation is PSR J1740-5340 <cit.>. Both the hot and dark spot models give similar values for f, K_2, i and v_rot sin i, but different values for q and T_2.
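The spectral indices quoted above can be recovered from the per-band extra-flux contributions by a straight-line fit in log space. The sketch below assumes F_ν ∝ ν^α; the effective wavelengths and relative fluxes in the example are illustrative values of ours, not the fitted ones.

```python
import numpy as np

def spectral_index(wavelengths_nm, f_nu):
    """Least-squares slope alpha of log F_nu vs log nu (F_nu ∝ nu^alpha)."""
    c = 2.998e17  # speed of light in nm/s
    nu = c / np.asarray(wavelengths_nm, dtype=float)
    alpha, _ = np.polyfit(np.log10(nu), np.log10(np.asarray(f_nu, dtype=float)), 1)
    return alpha

# Illustrative effective wavelengths of g', r', i' and relative extra fluxes:
print(spectral_index([477.0, 623.0, 763.0], [1.00, 0.80, 0.67]))  # ~0.85
```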
The value of T_2 determined from the hot spot model is consistent with the observed spectral type of F6±2 <cit.>, corresponding to a temperature range of 6170–6640 K <cit.>.

§ DISCUSSION

§.§ The extra light source

<cit.> determined the spectral energy distribution of PSR J1023+0038 from the near-IR to X-rays, when the system was in an accretion-powered phase. They found that the spectral energy distribution is well modelled by contributions from the secondary star, the accretion disc and an intrabinary shock. The neutron star's spin-down luminosity irradiates the accretion disc and the secondary star, accounting for the UV and optical emission. X-rays and γ-rays are produced in an intrabinary shock; the shock emission is powered by the neutron star spin-down luminosity and extends to lower energies with a photon index of Γ ∼ 1.5, corresponding to a spectral index of α ∼ 2.5 (α = 1+Γ; E N(E) ∝ f_ν; N(E) ∝ E^Γ; F_ν ∝ ν^α, where N(E) is the X-ray (0.5–10 keV) photon spectrum and f_ν is the source spectrum). Indeed, the photon index measured for redbacks in the accretion or pulsar state lies in the range 0.9–1.8 <cit.>, and 3FGL J0212.1+5320 is no exception, with a photon index of Γ ∼ 1.3, corresponding to a spectral index of α ∼ 2.3.

When determining the secondary star's properties, <cit.> found that their best-matching template star required a non-stellar light veiling of 10–30 per cent in the ∼g' band. For the dark spot model we find that the extra source of light contributes 13, 11 and 8 per cent of the observed flux in the g', r' and i' bands, respectively; for the hot spot model the corresponding contributions are 31, 25 and 23 per cent. Both models agree well with the veiling measured from the spectra <cit.>. The extra light contribution required to fit our light curves can be represented by a spectral index, F_ν ∝ ν^α, of 1.1 and 0.8 for the dark and hot spot models, respectively, which does not agree with the X-ray spectral index, suggesting that the extra optical emission may not be produced by an intrabinary shock.

§.§ Energy flux

<cit.> and <cit.> have determined the spectral energy distribution of 3FGL J0212.1+5320, which shows the optical band dominated by the secondary star and the γ-rays by the MSP, with the shock between the pulsar wind and the secondary star's wind (the intrabinary shock) most likely powering the X-ray emission. The unabsorbed 0.1–100 GeV energy flux is 1.71×10^-11 erg cm^-2 s^-1, whereas the unabsorbed 0.5–10 keV flux is 1.8×10^-12 erg cm^-2 s^-1 <cit.>. We can compare these values with the bolometric heating flux determined from our dark spot model (no X-ray heating is required in the hot spot model). For the dark spot model we find a flux of 6.6×10^-11 erg cm^-2 s^-1, which is a factor of ∼4 more than what is observed in the γ-rays. The extra source of heating may arise from the pulsar, which emits prompt particles that heat the secondary star <cit.>.

§.§ The secondary star's rotational broadening

Synchronization via tidal forces will suppress differential rotation and make the angular velocity of the secondary star constant. However, since the star is distorted, the linear rotational velocity will vary with longitude around the star, and so v_rot sin i will vary across the orbit. The variations show two maxima and two minima per orbital cycle and have an amplitude of ∼10 km s^-1, which depends on q and i <cit.>.
The procedure normally used to measure the secondary star's rotational broadening is to compare its spectrum with that of a slowly rotating template star convolved with a limb-darkened standard rotation profile <cit.>. The width of the limb-darkened standard rotation profile (adopting the continuum value for the limb-darkening coefficient) is varied until it matches the width of the target spectrum <cit.>. However, there are assumptions inherent in this method, because of the assumed spherical shape underlying the rotation profile. The secondary stars in binary MSPs substantially fill their Roche lobes, and so they will have distorted line profiles that depend on the exact Roche binary geometry <cit.>. Also, because both temperature and gravity vary over the star's photosphere due to the Roche-lobe shape, the spectrum of the secondary cannot be described by a single-star spectrum. Finally, there is the use of the continuum value for the limb-darkening coefficient: because the line flux arises from higher regions in the atmosphere than the continuum flux, absorption lines have core limb-darkening coefficients much smaller than the continuum value <cit.>. Hence using the standard rotation profile with zero and with the continuum value for the line limb-darkening coefficient gives values for q that bracket the value found using the full geometrical treatment <cit.>; using these two extreme cases for the limb-darkening coefficient introduces a systematic uncertainty of about 14 per cent in the v_rot sin i determination <cit.>.

<cit.> used the standard method described above to estimate the projected rotational velocity of the secondary star, which they found to be v_rot sin i = 73.2±1.6 km s^-1 from spectra around orbital phase 0.2. They then used this with their value of K_2, determined from a sinusoidal fit to their radial velocity curve, to obtain q = 0.26 (see equation <ref>), assuming a Roche-lobe filling, tidally locked and spherically symmetric companion star (see Section <ref>). However, the variation of v_rot sin i with orbital phase (10–15 per cent), coupled with the uncertainties in the line limb-darkening coefficient (∼14 per cent) and the fact that the star does not fill its Roche lobe, contributes significantly to the accuracy with which one can determine q using this method.

§.§ Favoured model

3FGL J0212.1+5320 is thought to be a redback based on its optical, X-ray and γ-ray properties, which are similar to those of other redbacks <cit.>. Most redbacks have secondary stars with minimum masses of <0.8 M_⊙ <cit.>, so the secondary star masses determined from the dark and hot spot models are consistent with the system being a redback. The reduced χ² values of the best fits for the dark and hot spot models are not significantly different, with the dark spot model being statistically better only at the 98.9 per cent (=2.29σ) confidence level (see Section <ref>). The value determined for v_rot sin i in both models agrees (within ∼3σ) with the observations (73.2±1.6 km s^-1); however, the hot spot model, with T_2 = 6416 K, is more consistent with the observed temperature range (6170–6640 K; <cit.>). We therefore favour the hot spot model, which gives a neutron star mass of M_1 = 1.85^+0.32_-0.26 M_⊙ and a secondary star mass of M_2 = 0.50^+0.22_-0.19 M_⊙.

§.§ Binary masses

<cit.> obtained R and g'-band light curves and two absorption-line radial velocity points. They used the ELC code <cit.> to fit the data, assuming T_2 = 5750 K and M_2 ∼ 0.4 M_⊙ with no irradiation effects.
They performed a grid search over i = 60–90 degrees and found β_V = 0.70 to 0.64 (where β_V is the ratio of the equivalent volume radius of the companion star to the Roche-lobe radius), corresponding to M_1 = 1.5 to 2.2 M_⊙. Note that their definition of the Roche-lobe filling factor differs from that in our model: their estimated range of β_V corresponds to f = 0.50 to 0.54 (determined using xrbcurve), which is very different from the value we obtain (see Table <ref>). However, as noted by those authors, given their poor fits and large systematics, the binary parameters they obtain are merely indicative. In contrast, we simultaneously fit the absorption-line radial velocity curve and the g', r', i' light curves, which allows us to constrain model parameters such as the filling factor of the companion and its surface temperature. It is evident that this redback companion does not fill its Roche lobe, confirming the previous finding by <cit.> that redbacks and black widows may have smaller radii than originally expected (inferred from the presence of radio eclipses in some systems).

The presence of a temperature asymmetry also indicates that a hot or cold spot is present on the companion's surface. The physical reason for this remains unclear (see Section <ref> for possible models), but long-term monitoring could provide further insights to shed light on this mystery. A cold or a hot spot due to an atmospheric phenomenon might move on a time-scale related to the magnetic activity of the star, for instance. Star spots are believed to be common features on many late-type stars and typically vary on time-scales of months to a few years <cit.>. Star spots have been observed in the surface map of the secondary star in the LMXB Cen X-4 <cit.>; indeed, in cataclysmic variables, studies of the surface maps of the secondary stars show star spots moving on time-scales of a few days <cit.>.

There are about 32 reliable neutron star mass measurements in X-ray binaries, double neutron star systems and millisecond pulsars, most of which are at least partly based on radio timing techniques <cit.>. Such studies statistically distinguish between different types of neutron stars: between those believed to be close to their birth masses, with a mean mass of 1.35±0.05 M_⊙, which reflects a highly tuned formation channel <cit.>, and those that have undergone long-term accretion episodes. Most recycled pulsars are accompanied by mid-F to late-K type low-mass (<0.5 M_⊙) companions, but for these systems to form within the age of the Universe, the companion must originally have been a star of ∼1 M_⊙. Thus the companion star must have lost a large fraction of its mass, which was accreted onto the neutron star, leading to an increase in the neutron star's mass. The most precise massive neutron star measurements have been obtained through the measurement of the Shapiro delay: 1.94±0.04 M_⊙ and 1.667±0.021 M_⊙ for PSR J1614-2230 and PSR J1903+0327, respectively <cit.>. These masses have allowed us to place fundamental constraints on the equation of state of nuclear matter at high densities, excluding many of the soft equations of state <cit.>.
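As an order-of-magnitude cross-check of such mass determinations, the sketch below evaluates the standard mass-function relation M_1 = P_orb K_2^3 (1+q)^2 / (2πG sin^3 i) with rough hot-spot-model values; it is only a sanity check of ours, not a substitute for the full MCMC posterior.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg

def primary_mass(p_orb_days, k2_kms, q, incl_deg):
    """M_1 in solar masses from the binary mass function:
    M_1 sin^3 i / (1 + q)^2 = P K_2^3 / (2 pi G)."""
    p = p_orb_days * 86400.0
    k2 = k2_kms * 1.0e3
    f_m = p * k2 ** 3 / (2.0 * math.pi * G)   # mass function in kg
    return f_m * (1.0 + q) ** 2 / math.sin(math.radians(incl_deg)) ** 3 / M_SUN

# Approximate hot spot model values: P = 0.86955 d, K_2 ~ 214 km/s,
# q ~ 0.28, i ~ 69 deg  ->  M_1 ~ 1.8 M_sun and M_2 = q M_1 ~ 0.5 M_sun
m1 = primary_mass(0.86955, 214.1, 0.28, 69.0)
print(m1, 0.28 * m1)
```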
Dynamical photometric and spectroscopic studies of binary MSPs have largely revealed that neutron star masses in these systems are generally heavier than the 1.4 M_⊙ canonical value, with some particularly heavy ones in PSR B1957+20 (M_1 = 2.4 M_⊙) and PSR J1816+4510 (M_1 ≥ 1.84 M_⊙). Our mass measurement for the neutron star in 3FGL J0212.1+5320 using the hot spot model suggests a massive 1.85^+0.32_-0.26 M_⊙ neutron star.

§ ACKNOWLEDGEMENTS

This research has been supported by the Spanish Ministry of Economy and Competitiveness (MINECO) under the grant AYA2013-42627. M.L. is supported by the EU's Horizon 2020 programme through a Marie Sklodowska-Curie Fellowship (grant no. 702638). R.P.B. received funding from the European Union Seventh Framework Programme under grant agreement PIIF-GA-2012-332393. This paper makes use of the IAC's supercomputing facility condor.

§ REFERENCES

Abdo, A. A., et al. 2009, Science, 325, 848
Acero, F., et al. 2015, ApJS, 218, 23
Alpar, M. A., Cheng, A. F., Ruderman, M. A., & Shaham, J. 1982, Nature, 300, 728
Archibald, A. M., et al. 2009, Science, 324, 1411
Bagnulo, S., Jehin, E., Ledoux, C., Cabanac, R., Melo, C., Gilmozzi, R., & ESO Paranal Science Operations Team 2003, The Messenger, 114, 10
Bassa, C. G., et al. 2014, MNRAS, 441, 1825
Benvenuto, O. G., De Vito, M. A., & Horvath, J. E. 2014, ApJ, 786, L7
Bhattacharya, D., & van den Heuvel, E. P. J. 1991, Phys. Rep., 203, 1
Bouvier, J., & Bertout, C. 1989, A&A
Breton, R. P., et al. 2013, ApJ, 769, 108
Chen, H.-L., Chen, X., Tauris, T. M., & Han, Z. 2013, ApJ, 775, 27
Claret, A., & Bloemen, S. 2011, A&A, 529, A75
Collins, G. W., II, & Truax, R. J. 1995, ApJ, 439, 860
Coti Zelati, F., et al. 2014, MNRAS, 444, 1783
Crawford, F., et al. 2013, ApJ, 776, 20
Demorest, P. B., Pennucci, T., Ransom, S. M., Roberts, M. S. E., & Hessels, J. W. T. 2010, Nature, 467, 1081
Deneva, J. S., et al. 2016, ApJ, 823, 105
Eggleton, P. P. 1983, ApJ, 268, 368
Ekşi, K. Y., & Alpar, M. A. 2005, ApJ, 620, 390
Freire, P. C. C., et al. 2011, MNRAS, 412, 2763
Gelman, A., & Rubin, D. 1992, Statistical Science, 7, 457
Gentile, P. A., et al. 2014, ApJ, 783, 69
Gray, D. F. 2005, The Observation and Analysis of Stellar Photospheres, 3rd edn. (Cambridge: Cambridge Univ. Press)
Gregory, P. C. 2005, Bayesian Logical Data Analysis for the Physical Sciences (Cambridge: Cambridge Univ. Press)
Hauschildt, P. H., Allard, F., & Baron, E. 1999, ApJ, 512, 377
Hessels, J. W. T., et al. 2011, in AIP Conf. Ser. 1357, 40 (arXiv:1101.1742)
Hill, C. A., Watson, C. A., Shahbaz, T., Steeghs, D., & Dhillon, V. S. 2014, MNRAS, 444, 192
Kaplan, D. L., Bhalerao, V. B., van Kerkwijk, M. H., Koester, D., Kulkarni, S. R., & Stovall, K. 2013, ApJ, 765, 158
Kiziltan, B., Kottas, A., De Yoreo, M., & Thorsett, S. E. 2013, ApJ, 778, 66
Lattimer, J. M., & Prakash, M. 2004, Science, 304, 536
Li, M., Halpern, J. P., & Thorstensen, J. R. 2014, ApJ, 795, 115
Li, K.-L., Kong, A. K. H., Hou, X., Mao, J., Strader, J., Chomiuk, L., & Tremou, E. 2016, ApJ, 833, 143
Linares, M. 2014, ApJ, 795, 72
Linares, M., Miles-Páez, P., Rodríguez-Gil, P., Shahbaz, T., Casares, J., Fariña, C., & Karjalainen, R. 2016, preprint (arXiv:1609.02232)
Lucy, L. B. 1967, Z. Astrophys., 65, 89
Marsh, T. R., Robinson, E. L., & Wood, J. H. 1994, MNRAS, 266, 137
Orosz, J. A., & Hauschildt, P. H. 2000, A&A, 364, 265
Orosz, J. A., & van Kerkwijk, M. H. 2003, A&A
Özel, F., & Freire, P. 2016, ARA&A, 54, 401
Özel, F., Psaltis, D., Narayan, R., & Santos Villarreal, A. 2012, ApJ, 757, 55
Papitto, A., et al. 2013, Nature, 501, 517
Pecaut, M. J., & Mamajek, E. E. 2013, ApJS, 208, 9
Phillips, S. N., Shahbaz, T., & Podsiadlowski, P. 1999, MNRAS, 304, 839
Roberts, M. S. E. 2013, in IAU Symp. 291, Neutron Stars and Pulsars: Challenges and Opportunities after 80 Years, ed. J. van Leeuwen, 127 (arXiv:1210.6903)
Romani, R. W., & Sanchez, N. 2016, ApJ, 828, 7
Romani, R. W., Filippenko, A. V., & Cenko, S. B. 2015, ApJ, 804, 115
Schroeder, J., & Halpern, J. 2014, ApJ, 793, 78
Shahbaz, T. 1998, MNRAS, 298, 153
Shahbaz, T. 2003, MNRAS, 339, 1031
Shahbaz, T., Groot, P., Phillips, S. N., Casares, J., Charles, P. A., & van Paradijs, J. 2000, MNRAS, 314, 747
Shahbaz, T., Zurita, C., Casares, J., Dubus, G., Charles, P. A., Wagner, R. M., & Ryan, E. 2003, ApJ, 585, 443
Shahbaz, T., Casares, J., Watson, C. A., Charles, P. A., Hynes, R. I., Shih, S. C., & Steeghs, D. 2004, ApJ, 616, L123
Shahbaz, T., Watson, C. A., & Dhillon, V. S. 2014, MNRAS, 440, 504
Stappers, B. W., van Kerkwijk, M. H., Bell, J. F., & Kulkarni, S. R. 2001, ApJ, 548, L183
Storn, R., & Price, K. 1995, Differential Evolution: A Simple and Efficient Adaptive Scheme for Global Optimization over Continuous Spaces (technical report)
Strader, J., Chomiuk, L., Sonbas, E., Sokolovsky, K., Sand, D. J., Moskvitin, A. S., & Cheung, C. C. 2014, ApJ, 788, L27
Tang, S., et al. 2014, ApJ, 791, L5
Vrugt, J. A. 2016, Environmental Modelling & Software, 75, 273
Wang, Z., Archibald, A. M., Thorstensen, J. R., Kaspi, V. M., Lorimer, D. R., Stairs, I., & Ransom, S. M. 2009, ApJ, 703, 2017
Welsh, W. F., Horne, K., & Gomer, R. 1995, MNRAS, 275, 649
van Kerkwijk, M. H., Breton, R. P., & Kulkarni, S. R. 2011, ApJ, 728, 95
van Staden, A. D., & Antoniadis, J. 2016, ApJ, 833, L12
J. Tanaka^1,2,3 (corresponding author: [email protected]), R. Kanungo^4, M. Alcorta^5, N. Aoi^1, H. Bidaman^6, C. Burbadge^6, G. Christian^5, S. Cruz^5, B. Davids^5, A. Diaz Varela^6, J. Even^5,7, G. Hackman^5, M. N. Harakeh^7, J. Henderson^5, S. Ishimoto^8, S. Kaur^4, M. Keefe^4, R. Krücken^5,9, K. G. Leach^5, J. Lighthall^5, E. Padilla Rodal^5,10, J. S. Randhawa^4, P. Ruotsalainen^5, A. Sanetullaev^4,5, J. K. Smith^5, O. Workman^4, I. Tanihata^11,1

^1 RCNP, Osaka University, 10-1 Mihogaoka, Ibaraki, Osaka 567-0047, Japan; ^2 Department of Physics, Konan University, Higashinada, Kobe, Hyogo 658-8501, Japan; ^3 Institut für Kernphysik, Technische Universität Darmstadt, 64289 Darmstadt, Germany; ^4 Astronomy and Physics Department, Saint Mary's University, Halifax, Nova Scotia B3H 3C3, Canada; ^5 TRIUMF, Vancouver, V6T 2A3, Canada; ^6 Department of Physics, University of Guelph, Guelph N1G 2W1, Canada; ^7 KVI-CART, University of Groningen, 9747 AA Groningen, The Netherlands; ^8 High Energy Accelerator Research Organization (KEK), Ibaraki 305-0801, Japan; ^9 Department of Physics and Astronomy, University of British Columbia, Vancouver, BC V6T 1Z1, Canada; ^10 Instituto de Ciencias Nucleares, UNAM, Mexico City 04510, Mexico; ^11 School of Physics and Nuclear Energy Engineering and IRCNPC, Beihang University, Beijing 100191, China

Proton inelastic scattering off a neutron halo nucleus, ^11Li, has been studied in inverse kinematics at the IRIS facility at TRIUMF. The aim was to establish a soft dipole resonance and to obtain its dipole strength. Using a high-quality 66 MeV ^11Li beam, a strongly populated excited state in ^11Li was observed at E_x = 0.80±0.02 MeV with a width of Γ = 1.15±0.06 MeV. A DWBA (distorted-wave Born approximation) analysis of the measured differential cross section with isoscalar macroscopic form factors leads to the conclusion that this observed state is excited by an electric dipole (E1) transition. Under the assumption of an isoscalar E1 transition, the strength is evaluated to be extremely large, amounting to 600-2000 Weisskopf units and exhausting 4%-14% of the isoscalar E1 energy-weighted sum rule (EWSR) value. The large observed strength originates from the halo and is consistent with a simple di-neutron model of the ^11Li halo.

^11Li; neutron halo; soft-dipole resonance; proton inelastic scattering; isoscalar E1

Understanding the ^11Li structure is a landmark in studies of halo nuclei <cit.>. The two valence neutrons in ^11Li have a very low separation energy, forming a low-density halo. As a collective excitation of the two-neutron halo, soft-dipole resonances in ^11Li are expected to appear at low excitation energies <cit.>, and by the nature of the excitation they should have both isovector and isoscalar components. The soft-dipole resonances in ^11Li have been predicted to appear at around 0.7 MeV and 2.7 MeV by the cluster-orbital shell model (COSM), which is constructed in a microscopic framework as the three-body system ^9Li + n + n. These low-lying states are predicted to exhaust 8% of the isovector E1 EWSR <cit.>.
Several Coulomb-dissociation experiments have been performed at relatively high bombarding energies on Pb targets to reveal the E1 strength at low excitation energy. In the early 1990s, experiments at MSU <cit.> at 24 MeV/u and RIKEN <cit.> at 64 MeV/u reported experimental results showing a strong dipole strength at low excitation energy. Later, measurements at GSI <cit.> at 280 MeV/u reported that there were dipole states centred at 1.0±0.1 MeV and 2.4±0.1 MeV in ^11Li, and that these two states exhausted 8% of the isovector E1 EWSR, in good agreement with the COSM prediction. The experiment at RIKEN <cit.>, on the other hand, indicated a peak at lower energy, ∼0.6 MeV, that was interpreted as a soft-dipole excitation. The isovector B(E1) value integrated over E_rel < 3 MeV was obtained to be 1.42(18) e²fm², which was the largest E1 strength observed so far for a low-lying dipole state. However, since the direct breakup mechanism is dominant in such Coulomb dissociation measurements, as discussed for ^11Be <cit.>, it was considered that this low-energy E1 peak reported in Ref. <cit.> corresponds to direct breakup into the continuum. Though the COSM prediction is of a resonant excited state, the enhancement observed in Coulomb dissociation arises from the small separation energy in ^11Li, as predicted in Ref. <cit.>.

On the other hand, in the missing-mass method nuclear excitations are observed at backward scattering angles, where the effects of the Coulomb dissociation process are negligible. Therefore a resonant state, if it exists, should be observed more clearly in the missing-mass spectrum, owing to the absence of a large E1 breakup effect. The excitation energies for ^11Li obtained from several reactions are summarized in Fig. <ref>. A pion-induced double-charge-exchange ^11B(π⁻,π⁺) reaction <cit.>, a pion-capture reaction ^14C(π⁻,pd) <cit.> and ^11Li(p,p′) experiments <cit.> reported an excited state at around 1 MeV. However, the ^10Be(^14C,^13N) and ^14C(^14C,^17F) experiments showed that there is a state at an excitation energy of 2.47 MeV <cit.>. Some of these experiments were performed with very poor resolution, and some had low statistics; reliable information on the width and the transition strength has not been obtained so far. In order to study the resonant structure and its strength, high-statistics data with good resolution are therefore required. A recent ^11Li(d,d′) experiment showed a clear peak structure at around 1 MeV excitation energy <cit.>. The angular distribution indicated that the excitation is due to an isoscalar E1 transition.

The leading-order operator for the isoscalar dipole excitation is the operator e/2 r³Y₁ <cit.>. In stable nuclei, the strength connected with this operator is relatively small compared with those of other types of multipole excitation, partly because the e/2 r³Y₁ matrix element is suppressed by the rapid fall-off of the density at large radial distances. However, since a halo nucleus has a long density-distribution tail, a strong isoscalar E1 transition strength is expected from the combined effect of the large radius and the r³ factor in the operator. This strong transition strength would be a good indication that the low-lying state results from the excitation of the halo. Since a similar low-lying dipole resonance is not observed in ^9Li, the two-neutron halo in ^11Li must be the origin of this resonance.
Proton inelastic scattering at low incident energy is the simplest reaction for extracting the dipole transition strength. In addition, (p,p^') is a complementary reaction to (d,d^') and is necessary to establish the dipole resonance. To resolve the long-standing controversy regarding the low-lying dipole resonances in ^11Li, we performed a high-statistics and high-resolution ^11Li(p,p^') measurement to determine the strength of the soft-dipole resonance. The experiment was performed at the IRIS facility at TRIUMF in Canada. A high-quality ^11Li beam at 6 MeV/u from the ISAC II facility was incident on a solid hydrogen target with a thickness of ∼150 μm. The target is formed on a 5 μm Ag foil backing with the foil facing the incoming beam; therefore, the scattered protons from the H_2 target reach the detectors unhindered. Using a ΔE-E detector system consisting of a Si-strip detector array and CsI(Tl) detectors, an excitation-energy resolution of 170 keV (σ) was achieved under low-background conditions. Figure <ref> shows a schematic drawing of the experimental setup and the measured spectra for particle identification of hydrogen and Li isotopes. The recoil protons were detected by Telescope A, consisting of 100 μm thick annular Si-strip detectors <cit.> and annular CsI(Tl) detectors. Inelastically scattered ^11Li* (the excited state of ^11Li) decays into ^9Li and two neutrons. The ^9Li ions are detected by Telescope B, consisting of two layers of 60 μm thick and 500 μm thick annular Si-strip detectors. Using the energies and polar-angle information of the recoil protons, the ^11Li missing-mass spectrum was obtained. The coincidence measurement with the ^9Li emitted after the ^11Li decay improves the selection of the ^11Li(p,p^') reaction channel. Moreover, a coplanarity gate on the relative azimuthal angle between the proton and ^9Li, defined by ϕ_^11Li-p = 180^∘ ± 22.5^∘, decreased the background from non-resonant decay events. After a two-body inelastic-scattering reaction, the ^9Li decay residue from the excited state of ^11Li is emitted in almost the same direction as the excited ^11Li nucleus, because the decay energy is much smaller than the mass of ^9Li. On the other hand, the ^9Li from the direct breakup of ^11Li due to the interaction with the proton target will in general not be emitted in the same direction as ^11Li; its direction is instead determined by the four-body final-state phase space. The obtained excitation-energy spectra, with their energy-dependent detection efficiencies, are shown in Fig. <ref>. These efficiencies were calculated by taking into account both the coincidence-gate efficiency and the coplanarity-gate efficiency resulting from the detector geometry and the angular spread from the decay of ^11Li to ^9Li. The ^11Li(p,p^') spectra at the different angles were fitted to obtain the resonance energy, the width, and the differential cross sections. A Breit-Wigner function F(E_r) with an energy-dependent width Γ(E_r) was employed to fit the spectra, assuming a resonant state near the particle-decay threshold. The function F(E_r) is expressed as

F(E_r) = Γ(E_r) / [ (E_x - E_0)^2 + Γ^2(E_r)/4 ],

where E_r is the relative energy of the decay particles and E_0 is the excitation energy of the resonant peak observed in ^11Li. The relationship between these variables is E_x = E_s + E_r, where E_s is the 2n separation energy. The width Γ(E_r) is a function of energy, defined as Γ(E_r) ≡ g√(E_r), where g is a fitting parameter.
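A minimal Python sketch of this line shape, including the Gaussian resolution folding described in the next paragraph; the value of g, the 2n separation energy E_s (taken here as 0.37 MeV), and the overall normalisation are placeholders, not the fitted values:

import numpy as np

E_S = 0.37     # 2n separation energy of 11Li in MeV (assumed value; E_x = E_s + E_r)
E_0 = 0.80     # resonance excitation energy in MeV, as quoted in the text
SIGMA = 0.170  # Gaussian experimental resolution (sigma) in MeV

def breit_wigner(E_x, g=1.3):
    """Breit-Wigner with energy-dependent width Gamma(E_r) = g*sqrt(E_r);
    E_r = E_x - E_s is the relative energy (zero below threshold)."""
    E_r = np.clip(E_x - E_S, 0.0, None)
    gamma = g * np.sqrt(E_r)
    return gamma / ((E_x - E_0) ** 2 + gamma ** 2 / 4.0)

def folded(E_x, g=1.3, resolution=SIGMA):
    """Fold the Breit-Wigner with a Gaussian resolution by direct convolution."""
    grid = np.linspace(0.0, 5.0, 2001)         # MeV, integration grid
    bw = breit_wigner(grid, g=g)
    out = []
    for e in np.atleast_1d(E_x):
        kernel = np.exp(-0.5 * ((e - grid) / resolution) ** 2)
        kernel /= kernel.sum()                 # normalized smoothing kernel
        out.append(np.sum(bw * kernel))
    return np.array(out)

E = np.linspace(0.4, 3.0, 14)
print(np.round(folded(E), 3))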
The experimental energy resolution was taken into account by folding the Breit-Wigner function with a Gaussian of σ = 170 keV, obtained from fitting the elastic-scattering peak in the ^11Li excitation-energy spectrum. The peak position and the resonance width were determined consistently by fitting all the spectra at the different scattering angles, yielding E_0 = 0.80 ± 0.02 MeV and Γ = 1.15 ± 0.06 MeV. Differential cross sections of the elastic scattering, obtained from the detection of either the proton or ^11Li, are plotted in Fig. <ref>. In addition to the statistical uncertainties of the data, the total systematic uncertainties were estimated to be ±7%; the contributions consist of 4.8% coming from the target thickness and 5.0% coming from the absolute counting of the incident beam. The optical potentials were obtained from the proton elastic-scattering data assuming the following form:

U(r) = -V_v f(r,r_v,a_v) + 4(ħ/m_π c)^2 (1/r) (d/dr){V_so f(r,r_so,a_so)} l·s + V_C(r_C) + i 4a_s (d/dr){W_s f(r,r_s,a_s)} - i W_w f(r,r_w,a_w),

where the Woods-Saxon potential shape f(r,r_i,a_i) = {1 + exp[(r - r_i A^1/3)/a_i]}^-1 was used. The optical-potential parameter sets obtained with the imaginary part having only the volume term (Set V) and with only the surface imaginary term (Set S) are listed in Table <ref>.
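As an illustration of the geometry entering these potentials, the sketch below evaluates the Woods-Saxon shape and the real-plus-imaginary volume combination of the "Set V" form; the depths, radii, and diffusenesses are placeholders, not the parameters of Table <ref>, and the spin-orbit, Coulomb, and surface terms are omitted for brevity:

import numpy as np

A = 11  # mass number entering r_i * A^(1/3)

def woods_saxon(r, r_i, a_i):
    """Woods-Saxon shape f(r, r_i, a_i) = 1 / (1 + exp((r - r_i*A^(1/3)) / a_i))."""
    return 1.0 / (1.0 + np.exp((r - r_i * A ** (1.0 / 3.0)) / a_i))

def central_potential(r, V_v, r_v, a_v, W_w, r_w, a_w):
    """Real volume plus imaginary volume term; returns a complex array in MeV."""
    real = -V_v * woods_saxon(r, r_v, a_v)
    imag = -W_w * woods_saxon(r, r_w, a_w)
    return real + 1j * imag

r = np.linspace(0.1, 10.0, 5)   # fm
print(central_potential(r, V_v=45.0, r_v=1.2, a_v=0.65, W_w=10.0, r_w=1.3, a_w=0.6))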
The inelastic-scattering differential cross sections (empty red squares with error bars) were compared to DWBA predictions obtained with the code CHUCK3 <cit.>, as shown in Fig. <ref>. Different form factors were used for the different multipolarities ΔL of the transition between the ground state and the observed excited state. The optical potential for the exit channel was assumed to be the same as for the entrance channel. Given the very low bombarding energy and the low Z of the hydrogen target, Coulomb excitation of the isovector dipole strength, in particular, is expected to be negligible at the backward centre-of-mass angles at which measurements were made in this experiment. Furthermore, the isoscalar excitation was expected to be dominant in the case of the present low-energy ^11Li(p,p^') experiment <cit.>. For ΔL=2 (quadrupole) and ΔL=3 (octupole), the form factors obtained in the surface vibrational model <cit.> were used; in such models, the nuclear shape vibrates according to quadrupole or octupole deformations without changing the density. For ΔL=0, the breathing-mode form factor was used <cit.>; it changes the nuclear size, and hence the density, while conserving the number of nucleons. For ΔL=1, the Harakeh-Dieperink form factor <cit.> and the Orlandini form factor <cit.> were used. These form factors for ΔL=1 were introduced to describe the isoscalar E1 excitations with the (e/2) r^3 Y_1 operator. The Harakeh-Dieperink form factor is obtained from a sum-rule approach (doorway dominance model) and is most appropriate for the collective 3ħω compression mode exhausting the largest fraction of the isoscalar dipole EWSR. The Orlandini form factor is instead determined to describe 1ħω excitations with a very small fraction of the isoscalar E1 EWSR. The DWBA calculations were performed for both optical parameter sets V and S; the results are summarized in Table <ref>. The absolute value was normalized to fit the experimental data. The ΔL = 0, 2, 3 angular distributions show negative slopes at θ_CM ∼ 90^∘, at variance with the data; only the ΔL=1 calculations yield distributions in close agreement with the data. The calculation using the volume imaginary potential provides the best fit to the data with both the Harakeh-Dieperink (H-D) and the Orlandini (O) form factors. The transition strength was evaluated from the best-fit amplitude of the DWBA calculations. The transition strength to the 0.80 MeV state is extremely large, amounting to 600∼2000 Weisskopf units (W.u.) and 4%∼14% of the isoscalar E1 EWSR. This large strength can be qualitatively understood as a feature of the isoscalar E1 operator (e/2) r^3 Y_1 combined with the spatially extended neutron-halo structure. The strength enhanced by this effect is well estimated by introducing a di-neutron weakly bound in a square-well potential. Using the extended halo distribution, the transition strength was found to be ∼2.8 W.u., which is of a similar order of magnitude to the present experimental result. In the actual calculation, the operator (e/2){r^3 - (5/3)⟨r^2⟩r}Y_1, corrected for the centre-of-mass motion, was used to calculate the transition rate. This simple model estimate shows that the strength comes from the low-density halo far outside the square-well potential. The observed peak in the ^11Li(p,p^') spectrum is slightly lower in energy and somewhat wider than the one found in the ^11Li(d,d^') experiment. Dipole transitions from the 3/2^- ground state of ^11Li can lead to states with spins of 1/2^+, 3/2^+, and 5/2^+. If two or more of these states are populated through this soft dipole excitation in the (p,p^') reaction, and if they are relatively closely spaced, the observed state in the (p,p^') experiment could comprise two unresolved dipole states. This may account for the small difference in peak position with respect to (d,d^'). In summary, a low-energy dipole excited state at 0.80 MeV in ^11Li has been identified in low-energy proton inelastic scattering off ^11Li. The measured angular distribution of the differential cross sections is consistent with predictions using the form factor for an isoscalar E1 excitation mode. A very large isoscalar E1 transition probability of 1.1∼3.8 × 10^3 e^2fm^6 is deduced, exhausting 4∼14% of the isoscalar E1 EWSR. This large dipole strength is found to originate from the halo and is consistent with a simple di-neutron model for ^11Li. The results bring new information on the soft dipole excitation. The current derivation of the E1 strength assumes that the observed peak is described only by an isoscalar excitation. While it is true that the (p,p') reaction can excite both isoscalar and isovector modes, as described above, the isoscalar excitation may be expected to be dominant here. However, future theoretical developments considering both isoscalar and isovector descriptions of the soft-dipole excitation should lead to a deeper understanding, which is beyond the scope of this article. The experiment was partly supported by the grant-in-aid program of the Japanese government under contract numbers 23224008 and 14J03935. The work is supported by NSERC, the Canada Foundation for Innovation, and the Nova Scotia Research and Innovation Trust. TRIUMF receives funding via a contribution through the National Research Council Canada. The support of the PR China government and Beihang University under the Thousand Talent program is gratefully acknowledged. J.T. gratefully acknowledges the support of the Hirao Taro Foundation of the Konan University Association for Academic Research. Discussions with Dr. K. Ogata and Dr. T. Matsumoto are gratefully acknowledged.

§ REFERENCES
I. Tanihata et al., Phys. Rev. Lett. 55, 2676 (1985).
P. G. Hansen and B. Jonson, Europhys. Lett. 4, 409 (1987).
K. Ikeda, Nucl. Phys. A538, 355c (1992).
Y. Suzuki et al., Nucl. Phys. A517, 599 (1990).
K. Ieki et al., Phys. Rev. Lett. 70, 730 (1993).
S. Shimoura et al., Phys. Lett. B 348, 29 (1995).
M. Zinser et al., Nucl. Phys. A619, 151 (1997).
T. Nakamura et al., Phys. Rev. Lett. 96, 252502 (2006).
T. Nakamura et al., Phys. Lett. B 331, 296 (1994).
M. Smith et al., Phys. Rev. Lett. 101, 202501 (2008).
M. Cavallaro et al., Phys. Rev. Lett. 118, 012701 (2017).
T. Kobayashi, Nucl. Phys. A538, 343c (1992).
M. G. Gornov et al., Phys. Rev. Lett. 81, 4325 (1998).
J. H. Kelley et al., Nucl. Phys. A880, 88 (2012).
A. A. Korsheninnikov et al., Phys. Rev. C 53, R537 (1996).
A. A. Korsheninnikov et al., Phys. Rev. Lett. 78, 2317 (1997).
H. G. Bohlen et al., Z. Phys. A 351, 7 (1995).
R. Kanungo et al., Phys. Rev. Lett. 114, 192502 (2015).
M. N. Harakeh and A. van der Woude, Giant Resonances, Oxford University Press (2001).
T. Davinson et al., Nucl. Instr. Meth. Phys. Res. A 454, 350 (2000).
P. D. Kunz, CHUCK - A Coupled-Channel Code, University of Colorado (1977), unpublished; modified by J. R. Comfort and M. N. Harakeh.
W. G. Love et al., Phys. Rev. C 24, 1073 (1981).
A. Bohr and B. R. Mottelson, Nuclear Structure II, World Scientific Publishing Co. (1975).
G. R. Satchler, Nucl. Phys. A472, 215 (1987).
M. N. Harakeh and A. E. L. Dieperink, Phys. Rev. C 23, 2329 (1981).
G. Orlandini et al., Phys. Lett. B 119, 21 (1982).
http://arxiv.org/abs/1708.07719v1
{ "authors": [ "Junki Tanaka" ], "categories": [ "nucl-ex" ], "primary_category": "nucl-ex", "published": "20170825130120", "title": "Halo-induced large enhancement of soft dipole excitation of 11Li observed via proton inelastic scattering" }
Magnetic and dielectric investigations of γ-Fe_2WO_6

Department of Physics, Indian Institute of Science Education and Research, Dr. Homi Bhabha Road, Pune, Maharashtra-411008, India
Department of Condensed Matter Physics and Material Science, Tata Institute of Fundamental Research, Dr. Homi Bhabha Road, Mumbai 400 005, India
Centre for Energy Science, Indian Institute of Science Education and Research, Dr. Homi Bhabha Road, Pune, Maharashtra-411008, India
Author to whom correspondence should be addressed (Sunil Nair). Electronic mail: [email protected]

The magnetic, thermodynamic, and dielectric properties of the γ-Fe_2WO_6 system are reported. Crystallizing in the centrosymmetric Pbcn space group, this particular polymorph exhibits a number of different magnetic transitions, all of which show a finite magneto-dielectric coupling. At the lowest measured temperatures, the magnetic ground state appears to be glass-like, as evidenced by the waiting-time dependence of the magnetic relaxation. Also reflected in the frequency-dependent dielectric measurements, these signatures possibly arise as a consequence of the oxygen non-stoichiometry, which promotes an inhomogeneous magnetic and electronic ground state.

The area of magneto-dielectrics, which pertains to the coupling between the magnetic and dielectric properties, has seen a renaissance in the recent past. This is partly due to the emergence of the area of magnetoelectric multiferroics, where magnetic and polar orders co-exist. In these systems, the onset of ferroelectric order typically results in a pronounced dielectric anomaly, which can then be tuned by the application of an external magnetic field <cit.>. However, the phenomenon of magneto-dielectricity is more generic, since it is not constrained by the stringent symmetry considerations that are a prerequisite for the observation of either magnetoelectricity or multiferroicity. A number of potential applications varying from spin-charge transducers to magnetic sensors can be envisaged using magneto-dielectric materials <cit.>, and this area of research is continuously driven by the investigation of different material and structural classes that could exhibit these properties, especially near room temperature. Strongly correlated magnetic oxides offer a natural playground for the investigation of such phenomena, since many of them exhibit an insulating (or at least semiconducting) antiferromagnetic ground state. Moreover, the large coupling between the spin, charge, and lattice degrees of freedom observed in many of these systems is an added advantage, and typically contributes towards a larger magneto-dielectric effect <cit.>. With the dielectric constant being susceptible to changes in the magnetic structure, it is not surprising that a number of magnetic oxides exhibit magneto-dielectricity in the vicinity of their magnetic transitions. Here we report on the magnetic, thermodynamic, and dielectric investigation of a relatively unexplored iron-tungsten-oxygen system, Fe_2WO_6.
In addition to a complex set of magnetic transitions, including a low-temperature glass-like magnetic state, we also observe a finite magneto-dielectric coupling persisting right up to room temperature. The chemical phase diagram of the Fe-W-O system is characterized by the presence of a number of polymorphic modifications, which makes the selective synthesis of Fe_2WO_6 non-trivial <cit.>. Prior structural investigations have revealed that Fe_2WO_6 can exist in three distinct structures, depending on the synthesis conditions <cit.>. Labelled α, β, and γ-Fe_2WO_6, these polymorphs are typically stabilized as a function of increasing reaction temperature, with ill-defined phase boundaries. For instance, α-Fe_2WO_6, crystallizing in the orthorhombic columbite (Pbcn) symmetry, is stabilized at reaction temperatures lower than 800 °C. At reaction temperatures between 750 and 900 °C the monoclinically distorted β-Fe_2WO_6 is favoured, whereas at reaction temperatures in excess of 900 °C the γ phase is reported to be stabilized. This high-temperature γ phase is known to crystallize in the tri-α-PbO_2 structure, in which the orthorhombic Pbcn symmetry of the α-Fe_2WO_6 phase is preserved, but with a tripling of the unit cell along one of the crystallographic directions (a'=a, b'=3b, c'=c). Polycrystalline specimens of γ-Fe_2WO_6 were synthesized using the standard solid-state ceramic method. An equimolar mixture of previously preheated Fe_2O_3 (Sigma Aldrich, ≥99%) and WO_3 (Alfa Aesar, ≥99.8%) precursors was thoroughly ground for several hours using a dry ball mill. The fine and homogeneous mixture was pressed into pellets and loaded into a preheated alumina boat. The charge was slowly heated to 800 °C, kept there for 24 hours, and then gradually cooled down to room temperature. These pellets were repeatedly reground, pelletized, and sintered several times at 950 °C in air. After about 100 hours at this temperature we obtained a well-crystallized single phase of γ-Fe_2WO_6. Phase purity was confirmed using X-ray powder diffraction measured with a Bruker D8 Advance diffractometer with a Cu K_α source, and Rietveld refinement was carried out using the FullProf program <cit.>. The γ-Fe_2WO_6 specimens were observed to degrade slightly with time, though no appreciable changes were observed in their XRD patterns taken after a few months. Specific-heat and magnetization measurements were performed using a Quantum Design PPMS and a MPMS-XL SQUID magnetometer, respectively. Temperature-dependent dielectric measurements were performed in the standard parallel-plate geometry, using a NOVOCONTROL (Alpha-A) High Performance Frequency Analyzer. Measurements were typically done using an excitation AC signal of 1 V at frequencies varying from 100 Hz to 100 kHz. Magneto-dielectric measurements were performed using the Manual Insertion Utility Probe of the MPMS-XL magnetometer. The Rietveld refinement of the room-temperature X-ray diffraction pattern of our specimen is shown in Fig. (<ref>). A good fit, corresponding to a goodness-of-fit value (R_wp/R_e) of 1.85, could be obtained, confirming a single-phase γ-Fe_2WO_6 crystallizing in the Pbcn space group, with lattice parameters a = 4.575(1) Å, b = 16.747(4) Å, c = 4.965(1) Å, and α = β = γ = 90°. A schematic of this tri-α-PbO_2 columbite structure is depicted in the inset of Fig. (<ref>), and it comprises layered corner-sharing FeO_6 octahedra, with Fe occupying two distinct crystallographic sites.
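Two quick consistency checks that follow from the refined cell: the orthorhombic cell volume V = abc together with the b' = 3b tripling, plus an X-ray density estimate. The number of formula units per tripled cell, Z = 4, is an assumption made here for illustration, not a refined quantity:

import numpy as np

# Refined orthorhombic lattice parameters (Angstrom) from the Rietveld fit.
a, b, c = 4.575, 16.747, 4.965
V = a * b * c                          # orthorhombic cell volume, A^3
print(f"cell volume = {V:.1f} A^3; b/3 = {b/3:.3f} A (the subcell repeat of b' = 3b)")

# X-ray density; Z = 4 formula units per tripled cell is an assumption.
N_A = 6.02214e23
M = 2 * 55.845 + 183.84 + 6 * 15.999   # molar mass of Fe2WO6, g/mol
Z = 4
rho = Z * M / (N_A * V * 1e-24)        # g/cm^3
print(f"X-ray density = {rho:.2f} g/cm^3 (assuming Z = {Z})")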
Preliminary magnetic characterizations of the Fe_2WO_6 polymorphs have been reported earlier, and the behaviour appears to depend acutely on the synthesis conditions <cit.>. For instance, both α- and γ-Fe_2WO_6 are reported to exhibit two magnetic transitions, with a high-temperature transition at T_1 ≈ 240-260 K and a lower-temperature transition at T_2 ≈ 200-220 K. In addition, the presence of an additional low-temperature feature at ≈20 K has been reported in both these polymorphs. However, β-Fe_2WO_6 is reported to exhibit a solitary magnetic transition at 260 K. The DC magnetic susceptibility of our γ-Fe_2WO_6 specimen, measured in the zero-field-cooled (ZFC) and field-cooled (FC) protocols, is shown in the inset of Fig. <ref>(a). As is evident from the main panel of Fig. <ref>(a), two distinct transitions, at 280 K and 228 K, can be discerned from the magnetization measurement performed at a low magnetic field of the order of 10 Oe. An increase in the applied magnetic field appears to broaden the high-temperature transitions (Fig. <ref>(b)), without a pronounced change in the transition temperatures. An early neutron-diffraction investigation of γ-Fe_2WO_6 suggested that the magnetic structure comprises ferromagnetic (100) planes coupled antiferromagnetically, with the spins lying along the (001) direction <cit.>. It was also speculated that the magnetic space group (Pbc'n') could allow for a finite ferromagnetic component along one of the crystallographic axes. Magnetic-field isotherms measured in our specimen at different temperatures indicate that, though the magnetization does not saturate up to the highest measured field of 7 Tesla, a finite opening of the loop exists all the way down to the lowest measured temperatures (inset of Fig. <ref>(a)). This suggests that a weak ferromagnetic component co-exists with the antiferromagnetic order that sets in at 280 K. On further reducing the temperature, the magnetization increases continuously and exhibits a cusp-like feature at ≈22 K, which possibly corresponds to the low-temperature transition that some earlier reports have alluded to. Interestingly, even at magnetic fields of up to 1 Tesla (Fig. <ref>(b)), a bifurcation between the FC and ZFC magnetization is observed at very low temperatures, indicating that this low-temperature feature could be associated with a magnetic glass-like state. A Curie-Weiss fit to the high-temperature DC magnetization data (inset of Fig. <ref>(b)), at temperatures in excess of 450 K, yields an effective magnetic moment per Fe ion of 1.22 μ_B. The spin-only moment of the (low-spin) Fe^3+ ion is 1.73 μ_B, implying that a finite fraction of the Fe ions is in the low-spin (S=0) Fe^2+ state. An early electron paramagnetic resonance (EPR) measurement also speculated on the possibility of the existence of Fe^2+ ions in the Fe_2WO_6 system <cit.>, indicating that oxygen non-stoichiometry could be endemic to this class of materials.
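A minimal sketch of this kind of Curie-Weiss analysis, assuming the standard conversion μ_eff = √(8C) μ_B with the molar Curie constant C in emu K per mole of Fe; the synthetic susceptibility below is a placeholder, not the measured data:

import numpy as np

# Synthetic high-temperature susceptibility (emu per mol Fe); placeholder data
# generated from chi = C/(T - theta), with C chosen so that mu_eff ~ 1.2 mu_B.
rng = np.random.default_rng(0)
T = np.linspace(450.0, 650.0, 30)                  # K, the fit range quoted in the text
C_true, theta_true = (1.22 / 2.828) ** 2, -50.0
chi = C_true / (T - theta_true) * (1 + 0.01 * rng.standard_normal(T.size))

# Curie-Weiss: 1/chi = (T - theta)/C is linear in T, so a linear fit suffices.
slope, intercept = np.polyfit(T, 1.0 / chi, 1)
C_fit = 1.0 / slope
theta_fit = -intercept * C_fit
mu_eff = 2.828 * np.sqrt(C_fit)                    # sqrt(8C), in Bohr magnetons per Fe

print(f"C = {C_fit:.4f} emu K/mol, theta = {theta_fit:.1f} K, mu_eff = {mu_eff:.2f} mu_B")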
This low-temperature state was further evaluated by means of magnetic relaxation measurements. These were performed by cooling the system from room temperature (in the zero-field-cooled protocol) down to the lowest temperature of 2 K, followed by soaking the system for different waiting times (100, 1000, and 5000 seconds). The variation of the DC magnetization as a function of time was then recorded upon the application of a magnetic field of 100 Oe. Fig. <ref>(a) shows the evolution of the magnetization as a function of time in γ-Fe_2WO_6. We observe that the data fit well to a stretched exponential of the form M(t) = M_0 - M_g exp[-(t/τ)^β], which has been used earlier to fit the magnetic relaxation in a number of spin/cluster glasses <cit.>. Here, β is a temperature-dependent exponent, τ is the time constant, and M_0 and M_g refer to the contributions from the long-range-ordered and glassy components of the system, respectively. These fits are depicted as solid lines in Fig. <ref>(a), and the values of M_0 and β deduced from our fits range from 0.00167 to 0.00175 emu/g and from 0.35 to 0.41, respectively, for the different waiting times. The small value of M_0 could be a consequence of the fact that the long-range-ordered component is antiferromagnetic <cit.>. The magnetic viscosity S(t) = (1/H) ∂M/∂(ln t) <cit.>, as deduced for measurements with different waiting times, is depicted in Fig. <ref>(b). As expected for glassy systems, a peak in S(t) that shifts as a function of the waiting time is observed, reinforcing the glass-like nature of this low-temperature state.
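The relaxation analysis can be sketched as follows: fit the stretched exponential to an M(t) curve, then differentiate the fitted curve against ln t to obtain the viscosity S(t). The synthetic data and starting values below are placeholders chosen in the range quoted in the text:

import numpy as np
from scipy.optimize import curve_fit

def m_relax(t, M0, Mg, tau, beta):
    """Stretched exponential: M(t) = M0 - Mg * exp(-(t/tau)**beta)."""
    return M0 - Mg * np.exp(-(t / tau) ** beta)

# Synthetic relaxation data (emu/g); the parameter values are placeholders.
rng = np.random.default_rng(1)
t = np.logspace(1, 4, 120)                         # 10 s to ~3 h
truth = dict(M0=0.0017, Mg=0.0006, tau=800.0, beta=0.38)
M = m_relax(t, **truth) + 2e-6 * rng.standard_normal(t.size)

popt, _ = curve_fit(m_relax, t, M, p0=[0.002, 0.0005, 500.0, 0.5])
M0, Mg, tau, beta = popt
print(f"M0 = {M0:.5f} emu/g, beta = {beta:.2f}, tau = {tau:.0f} s")

# Magnetic viscosity S(t) = (1/H) dM/d(ln t); H = 100 Oe as in the measurement.
H = 100.0
S = np.gradient(m_relax(t, *popt), np.log(t)) / H
print(f"peak of S(t) at t ~ {t[np.argmax(S)]:.0f} s")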
As mentioned earlier, the γ-Fe_2WO_6 specimens appear to change slightly with time. Our measurements indicate that, even a few months after synthesis, the temperatures of the high-temperature magnetic transitions remain relatively invariant; the temperature of the low-temperature glass-like transition, however, appears to shift to lower temperatures as a function of the elapsed time. Specific-heat measurements on γ-Fe_2WO_6 reveal a solitary feature in the vicinity of the high-temperature transition, as shown in the main panel of Fig. <ref>. The fact that the other two transitions are not picked up by the heat-capacity measurements indicates that the change in entropy associated with these transitions is quite small. The inset of Fig. <ref> shows the linear fit to the low-temperature specific heat using the equation C/T = γ + βT^2, giving values of 12.7±0.6 mJ mol^-1 K^-2 and 2.52±0.07 mJ mol^-1 K^-4 for γ and β, respectively. The Debye temperature, obtained from the relation θ_D^3 = 234R/β (with R the universal gas constant), is θ_D = 91.7 K. To the best of our knowledge, there have been no reports of dielectric measurements on any of the Fe_2WO_6 polymorphs. Fig. <ref> depicts the real part of the dielectric constant as a function of temperature, measured at frequencies varying from 0.1 to 10 kHz. A pronounced dielectric anomaly is observed in the vicinity of the high-temperature magnetic transition at 280 K, indicating a substantial coupling between the spin and electronic degrees of freedom in this system. Though the observed dielectric anomaly is reminiscent of that seen in many multiferroic systems, the overall resistivity values are quite low, especially in the high-temperature regime where these features are observed; thus, the possibility of a magnetically induced ferroelectric state can be ruled out. In this context, it is interesting to note that systems of the form RFeWO_6 have recently been reported to be multiferroic <cit.>. However, these RFeWO_6 systems crystallize in the polar Pna2_1 structure and exhibit low-temperature multiferroic transitions between 15 and 18 K, which involve a complex interplay between the Fe and the magnetic R ions. Fig. <ref>(a) depicts the magnetic-field dependence of the real part of the dielectric susceptibility ϵ'(T), measured in γ-Fe_2WO_6 at 0 and 5 Tesla; a suppression of ϵ'(T) is observed on the application of the magnetic field. These data are replotted in the form of the magneto-dielectric effect (|Δϵ/ϵ_0| %) in Fig. <ref>(b), showing that a finite magneto-dielectric effect is observed right up to room temperature. Though the extent of magneto-dielectricity measured at 10 kHz is of the order of 5%, the fact that |Δϵ/ϵ_0| decreases with the probing frequency suggests that the contribution of a magnetoresistive component could be appreciable. The inset shows the variation of the same quantity in the region of the low-temperature glass transition; this transition is also associated with a discontinuity in the magneto-dielectric effect, in spite of the fact that the values of |Δϵ/ϵ_0| are very small there. The temperature dependence of the imaginary part of the dielectric susceptibility exhibits a pronounced frequency-dependent peak, the position of which shifts to higher temperatures with increasing probing frequency, as shown in Fig. <ref>. This is typical of a charge-relaxation process, and similar signatures have been observed in a number of other strongly correlated oxides, especially within the magnetic state of phase-separated manganites <cit.>. In the mixed-valent manganites, electronic phase separation typically gives rise to regions with ferromagnetic and antiferromagnetic correlations, the competition between which is known to result in glassiness in both the magnetic and electronic sectors. This electronic phase separation, which results in phase co-existence over a wide range of length scales, then manifests itself in the form of frequency-dependent dielectric properties. In the case of the γ-Fe_2WO_6 system investigated here, the dispersion observed in the dielectric susceptibility could arise as a consequence of the oxygen non-stoichiometry. This could also be intimately tied to the low-temperature magnetic glass state that sets in at lower temperatures. For instance, early EPR measurements speculated on the possibility of magnetic clusters (presumably arising from anti-site disorder) in the Fe_2WO_6 polymorphs <cit.>. More recent measurements <cit.> have also indicated the presence of short-range correlations extending well above the magnetic ordering temperatures in the β-Fe_2WO_6 polymorph, suggesting that this cluster formation could be generic to the Fe-W-O systems. In summary, we report on the synthesis and characterization of the γ-Fe_2WO_6 system. Crystallizing in the non-polar Pbcn space group, this system exhibits a number of magnetic transitions with decreasing temperature, with long-range antiferromagnetic order setting in at a temperature as high as 280 K. These transitions, especially the high-temperature ones, are also observed to be associated with a finite magneto-dielectric effect. The presence of a low-temperature magnetic glass-like state is also inferred from aging and magnetic-viscosity measurements. This state presumably arises as a consequence of the freezing of magnetic clusters, the existence of which is also suggested by the frequency-dependent peak in the imaginary part of the measured dielectric susceptibility.

§ ACKNOWLEDGMENTS

The authors thank D. Buddhikot for help with the heat-capacity measurements. J.K. acknowledges DST India for support through SERB-NPDF file no. PDF/2016/000911. S.N. acknowledges DST India for support through grant no. SB/S2/CMP-048/2013.
http://arxiv.org/abs/1708.07703v2
{ "authors": [ "Soumendra Nath Panja", "Jitender Kumar", "Luminita Harnagea", "A. K. Nigam", "Sunil Nair" ], "categories": [ "cond-mat.str-el" ], "primary_category": "cond-mat.str-el", "published": "20170825115653", "title": "Magnetic and dielectric investigations of $γ$ - Fe${_2}$WO${_6}$" }
NGC 6535: the lowest mass Milky Way globular cluster with a Na-O anti-correlation?

Based on observations collected at ESO telescopes under programme 093.B-0583. Table 2 is only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/???/???

A. Bragaglia^1, E. Carretta^1, V. D'Orazi^2,3,4, A. Sollima^1, P. Donati^1, R.G. Gratton^2, S. Lucatello^2
Corresponding author: A. Bragaglia, [email protected]

1 INAF-Osservatorio Astronomico di Bologna, via Gobetti 93/3, I-40129 Bologna, Italy
2 INAF-Osservatorio Astronomico di Padova, vicolo dell'Osservatorio 5, I-35122 Padova, Italy
3 Monash Centre for Astrophysics, School of Physics and Astronomy, Monash University, Melbourne, VIC 3800, Australia
4 Department of Physics and Astronomy, Macquarie University, Sydney, NSW 2109, Australia

To understand globular clusters (GCs) we need to comprehend how their formation process was able to produce their abundance distribution of light elements. In particular, we seek to figure out which stars imprinted the peculiar chemical signature of GCs. One of the best ways is to study the light-element anti-correlations in a large sample of GCs that are analysed homogeneously. As part of our spectroscopic survey of GCs with FLAMES, we present here the results of our study of about 30 red giant member stars in the low-mass, low-metallicity Milky Way cluster NGC 6535. We measured the metallicity (finding [Fe/H]=-1.95, rms=0.04 dex on our homogeneous scale) and other elements of the cluster and, in particular, we concentrate here on the O and Na abundances. These elements define the normal Na-O anti-correlation of classical GCs, making NGC 6535 perhaps the lowest mass cluster with a confirmed presence of multiple populations. We updated the census of Galactic and extragalactic GCs for which a statement on the presence or absence of multiple populations can be made, on the basis of high-resolution spectroscopy preferentially, or of photometry and low-resolution spectroscopy otherwise; we also discuss the importance of the mass and age of the clusters as factors for multiple populations.

§ INTRODUCTION

Once considered a good example of simple stellar populations (SSPs), Galactic globular clusters (GCs) are currently thought to have formed in a complex chain of events, which left a fossil record in their chemical composition, in particular in the light elements He, C, N, O, Mg, Al, and Na <cit.>. Spectroscopically, almost all Milky Way (MW) GCs studied host multiple stellar populations that can be traced by the anti-correlated variations of the light elements C and N <cit.>, O and Na, and Mg and Al (see e.g. <cit.> and references therein for our FLAMES survey of more than 25 MW GCs). Photometrically, light-element variations manifest in colour-magnitude diagrams (CMDs) as sequence broadening and splitting <cit.>, which are particularly evident when appropriate combinations of UV filters (tracing the CN, OH, and NH bands) are used. In normal, massive GCs at least two populations coexist.
One has a composition indistinguishable from that of field stars of similar metallicity and is believed to be the long-lived remnant of the first generation (FG) formed in the cluster. The other, with a modified composition, is the second generation (SG), polluted by the most massive stars of the FG with ejecta from H burning at high temperature <cit.>. Unfortunately, the question of which FG stars produced the gas of modified composition is still unsettled (see e.g. <cit.> and <cit.>). Hence we still do not fully understand how GCs, and their multiple populations (MPs), formed. To attack the problem, we need to combine spectroscopic, photometric, and astrometric observations (yielding abundances, kinematics, and the spatial distribution of the populations in GCs) with theoretical modelling (stellar evolution, and the formation and chemo-dynamical evolution of clusters). On the observational front, it is crucial to study clusters covering the widest range of properties, that is, mass, metallicity, age, structural parameters, and environment. While almost all MW GCs studied thus far show MPs, there are a few exceptions, such as Ruprecht 106 <cit.>, Palomar 12 <cit.>, and Terzan 7 <cit.>. Interestingly, the last two are also associated with the disrupting Sagittarius dwarf galaxy (Sgr dSph), the closest extragalactic environment. As of today, MPs have been detected in several extragalactic GCs. High-resolution spectroscopy has been used for old, massive clusters of the Magellanic Clouds (MCs); see <cit.> for the Large Magellanic Cloud (LMC), <cit.> for the Small Magellanic Cloud (SMC), and <cit.> for the Fornax dwarf spheroidal (Fnx dSph). Low-resolution spectroscopy and photometry have been employed to study younger clusters in the MCs or the Fnx dSph; see e.g. <cit.>, discussed in Sect. <ref>. While these clusters can be used to explore a possible dependence of the MPs on age and environment, they are comparable in mass to the bulk of the MW GCs and do not allow us to evaluate the impact of the environment on the low-mass end of the GC mass distribution. We found that, apparently, there is an observed minimum cluster mass for the appearance of the Na-O anti-correlation, i.e. of the SG (see <cit.>, Fig. 1 in <cit.>, and Sect. <ref> for further discussion). This is an important constraint for GC formation mechanisms, because it indicates the mass at which we expect a GC to be able to retain part of the ejecta of the FG. It is important to understand whether this limit is real or due to small statistics, since only a handful of low-mass clusters have been studied, and only a few stars in each were spectroscopically observed. The problem of statistics can be bypassed using low-resolution spectroscopy and photometry, which can reach fainter magnitudes and larger samples than high-resolution spectroscopy; however, this means that only C and N (plus O with Hubble Space Telescope, HST, far-UV filters, and possibly He) can be studied. To increase the sample studied with high-resolution spectroscopy, we obtained data for a few old and massive open clusters (OCs) <cit.> and for low-mass GCs, also connected with the Sgr dSph <cit.>. We concentrate here on the low-mass GC NGC 6535, which has M_V=-4.75 <cit.>. This would be the lowest present-day mass GC in which a Na-O anti-correlation is found, since the present record holder, Palomar 5, has M_V=-5.17 after shedding a good fraction of its mass, as witnessed by its tidal tails (e.g. <cit.>).
In Section 2 we present literature information on the cluster; in Section 3 we describe the photometric data, the spectroscopic observations, and the derivation of the atmospheric parameters. The abundance analysis is presented in Section 4, and a discussion of the light-element abundances is given in Section 5. Kinematics is discussed in Section 6, and the age and mass limits for the appearance of multiple populations in Section 7. A summary and conclusions are given in Section 8.

§ NGC 6535 IN THE LITERATURE

NGC 6535 is a low-concentration, low-mass GC; its absolute visual magnitude, a proxy for present-day mass, is M_V=-4.75, the King-model concentration is c=1.33, and the core radius and half-mass radius are r_c=0.36 arcmin and r_h=0.85 arcmin <cit.>. NGC 6535 is located towards the centre of the MW, at l=27.18, b=10.44, and suffers from severe field contamination (see Fig. <ref>). Notwithstanding its present small Galactocentric distance (R_GC=3.9 kpc), the cluster is metal-poor ([Fe/H]=-1.79, from <cit.>) and is considered a halo GC. While there are several photometric papers in the literature, it has never been studied with high-resolution spectroscopy before. After the first colour-magnitude diagram (CMD) obtained by <cit.>, the cluster was studied by <cit.> and <cit.>. <cit.> used ground-based V, V-I CCD data of this and more than 30 other GCs to determine its age and found it coeval with the bulk of GCs. <cit.>, using data obtained within the Hubble Space Telescope ACS Treasury Program on GCs <cit.>, placed NGC 6535 among the young clusters. An old absolute age was instead derived by <cit.>, once again using the ACS data and theoretical isochrones; these authors found a value of 12.75 Gyr which, combined with its low metallicity, placed NGC 6535 among the halo, accreted clusters in the age-metallicity plot. <cit.> presented the photometric data we used in our selection of targets. They observed NGC 6535 with HST (Cycle 6, proposal 6625), using the Wide Field Planetary Camera 2 (WFPC2) and the F555W and F814W filters. These authors centred the cluster on the PC chip. Since the small field of view of the WFPC2 did not cover the entire cluster, they supplemented it with ground-based data (0.9m Dutch telescope, La Silla, Chile, programme 59.E-0532), obtaining five 3.7 × 3.7 arcmin fields in the V and I filters. Using ACS Treasury Program data for GCs, <cit.> studied the HBs and their relations with cluster parameters; referring to their Fig. 7, we would expect an interquartile range IQR[O/Na] larger than about 0.5 dex (see Sect. <ref>). NGC 6535 is also included in the HST UV Legacy survey <cit.>; referring to their Fig. 16, the cluster displays a normally split RGB (i.e. the presence of MPs) and a general paucity of stars. This is confirmed by <cit.>, who found a fraction of first-generation stars of 54 ± 8% using a total sample of 62 RGB stars. <cit.> included NGC 6535 among the 20 Galactic GCs they investigated to find tidal tails; as for most of their sample, the cluster shows evidence of interactions and shocks with the Galactic plane/bulge in the form of tidal extensions aligned with the tidal-field gradient. <cit.> used the HST data described above and measured an unusually flat (i.e. bottom-light) present-day stellar mass function, which is at odds with the exceptionally high mass-to-light ratio based on dynamical mass calculations. This could be due to a very strong external influence leading to mass stripping, or to large amounts of dark remnants; however, they were unable to clearly pinpoint the cause.
Finally, <cit.> proposed that NGC 6535 has the kinematic and photometric characteristics of what they call a 'dark cluster', i.e. a cluster in which the majority of the mass is presently locked in an intermediate-mass black hole. The spectroscopic material is much less abundant. <cit.> used a low-resolution, integrated spectrum to measure a radial velocity (RV) of -126±14 km s^-1 and to determine [Fe/H]=-1.75 (σ=0.15). The same technique was used by <cit.> to obtain RV=-159±15 km s^-1. A very different value is reported by <cit.>, who give an average RV of -215.27±0.54 km s^-1, based on unpublished MMT and CTIO echelle spectra. <cit.> observed the near-IR calcium triplet region in giant stars in 52 GCs (seven stars in NGC 6535); they found RV=-204.8±14.0 km s^-1 and derived the cluster metallicity as [Fe/H]=-1.78±0.07 (on the <cit.> scale) and -1.51±0.10 (on the <cit.> scale). Finally, <cit.>, in an effort to study carbon depletion in giants of 11 GCs, took low-resolution spectra of two stars in NGC 6535, finding [C/Fe]=-0.58 and -0.29, respectively.

§ OBSERVATIONS AND ANALYSIS

To select our targets for FLAMES we used the photometry by <cit.>, downloading the catalogue from VizieR. The V, V-I CMD is shown in Fig. <ref>, lower panel. As expected from its Galactic position, the field-star contamination is conspicuous, but the cluster RGB and HB are visible, especially when restricting the sample to the very central region covered by HST. We converted the positions, given as offsets with respect to the cluster centre, to RA and Dec using stars in the HST Guide Star Catalogue II for the astrometric conversion.
routines on the 1-D, wavelength-calibrated individual spectra to subtract the (average) sky, correct for barycentric motion, combine all the exposures for each star, and shift to zero RV.The region near the [O i] line was corrected for telluric lines contamination before combining the exposures.The RV was measured via DOOp <cit.>, an automated wrapper for DAOSPEC <cit.> on the stacked spectra; the average heliocentric value for each star is given in Table <ref>, together with the rms. We show in Fig. <ref> the histogram of the RVs; the cluster signature is evident and we identified 29 of the 45 observed targets as cluster members on the basis of their RV. Their average RV is -214.97 km s^-1(with σ=2.22), which is in very good agreement with the value -215.1±0.5 reported by <cit.>.§.§ Atmospheric parameters Only 30 stars (the 29 members plus one of more uncertain status) were retained for further analysis.We retrieved their 2MASS magnitudes <cit.>, which were used to determine the temperature, K mag, 2MASS identification, and quality flag are given in Table <ref>.Following our well-tested procedure <cit.>, effective temperatures T_ eff for our targets were derived with an average relation between apparent K magnitudes and first-pass temperatures from V-K colours and the calibrations of <cit.>. This method permits us to decrease the star-to-star errors in abundances due to uncertainties in temperatures, since magnitudes are less affected byuncertainties than colours. The adopted reddening E(B-V)=0.34 and distance modulus (m-M)_V=15.22 are from the <cit.> catalogue; the input metallicity [Fe/H]=-1.79 is from <cit.> and <cit.>. Gravities were obtained from apparent magnitudes and distance modulus, assuming thebolometric corrections from <cit.>. We adopted a mass of 0.85 M_⊙for all stars and M_ bol,⊙ = 4.75 as the bolometric magnitude for the Sun, as in our previous studies.We measured the equivalent widths (EW) of iron and other elements via the code ROSA <cit.>, adopting a relationship between EW and FWHM (for details, see ).We eliminated trends in the relation between abundances from Fe i lines and expectedline strength <cit.> to obtain values of the microturbulent velocity v_t.Finally, using the above values we interpolatedwithin the <cit.> grid of model atmospheres (with the option for overshooting on) to derive the final abundances, adopting for each star the model with the appropriate atmospheric parameters and whose abundances matched those derived fromFe i lines. The adopted atmospheric parameters ( T_ eff, log g, [Fe/H], and v_t) are presented in Table <ref>.§ ABUNDANCESIn addition to iron, we present here results for the light elements O, Na, and Mg for the entire sample of stars.The abundance ratios for these elements are given in Table <ref>, together with number of lines used and rms scatter. For the seven stars observed with UVES, we also provide abundances for Si, Ca, Ti i,ii, Sc ii, Cr i,ii, Mn, and Ni; these abundances are presented in Table <ref>. Since the cluster presents some peculiarities, it is important to see whether these also extend to the whole pattern of abundances. Some of the peculiarities of the cluster include that it is metal poor but located in the inner Galaxy, it is perhaps the lowest mass GC to show MPs, and the Na-O anti-correlation seems more extended than expected from its present-day mass; see next Section.The cluster mean elemental ratios are plotted in Fig. <ref> in comparison to field stars and other GCs of our FLAMES survey of similar metallicity. 
We do not see any anomalous behaviour; NGC 6535 seems to be a normal (inner) halo GC; see also below. Hence, in the present paper we dwell only on the light element abundances and we defer any further discussion on heavier species and all neutron-capture elements to future papers.The abundances were derivedusing EWs. The atomic data for the linesand solar reference valuescome from <cit.>.The Na abundances were corrected for departure from local thermodynamical equilibrium according to <cit.>, as in all the other papers of our FLAMES survey.To estimate the error budget we closely followed the procedure described in <cit.>. Table <ref> (for UVES spectra) and Table <ref> (for GIRAFFE spectra) provide the sensitivities of abundance ratios to errors in atmospheric parameters and EWs and the internal and systematic errors. For systematic errors we meanthe errors that are different for the various GCs considered in our series and that produce scatter in relations involving differentGCs; however, they do not affectthe star-to-star scatter in any given GC.The sensitivities were obtained by repeating the abundance analysis for all stars,while changing one atmospheric parameter at the time, then taking the average; this was carried out separately for UVES and GIRAFFE spectra. The amount of change in the input parameters used in the sensitivity computations is given in the Table header. The derived Fe abundances do not show any trend with temperature and gravity and the neutral and ionized species give essentially the same value, as do results based on GIRAFFE or UVES spectra (see Table <ref>). In fact, we obtain the following mean values:[Fe/H]i=-1.952±0.006 (rms=0.036),[Fe/H]ii=-1.921±0.008 (rms=0.046) dex for the 7 UVES stars; and[Fe/H]i=-1.963±0.003 (rms=0.053),[Fe/H]ii=-1.953±0.005 (rms=0.069) dex for the 22 GIRAFFE stars. This the first high-resolution spectroscopic study of this cluster, and hence no real comparison with previous determinations is possible. However, the metallicity we find is in reasonable agreement with that based on CaT <cit.>. The metallicity is also consistent withthe value in <cit.>, which comes from <cit.>. In that paper we built a metallicity scale based on GCs observed at high resolution (NGC 6535 was not among these GCs) and recalibrated other scales. Several studies have shown that the age-metallicity relation of MW GCs is bifurcated with one sequence of old, essentially coeval GCs at all metallicities and a second sequence of metal-richer GCs; see e.g. <cit.> and references therein. These sequences are populated by clusters associated with the halo and the disk, respectively (see Fig. 2 in , in which Rup 106, Ter 7, Pal 12, and NGC 6791 are highlighted, which are discussed in Sect. <ref>). The old age and the confirmed low metallicity, place NGC 6535 in the sequence of halo, accreted GCs; see e.g. <cit.>.§NA-O ANTI-CORRELATION Unfortunately, we could only obtain upper limits in O abundance for many stars, but this did not compromise the main goal of our work, i.e. detecting (or not) MPs in this low-mass cluster. Indeed, NGC 6533 turned out to be a normal MW GC, showing the usual anti-correlation between O and Na; see Fig. <ref>.This would not have been evident using only the seven stars observed with UVES. We need to be cautious in relying on small number statistics toexclude the presence of MPs (e.g. 
Terzan 7, Pal 12).The extension of the Na-O anti-correlation in NGC 6535 can be measured with the interquartile ratio of [O/Na] <cit.>; from the 26 stars with both Na and O abundances, we find IQR[O/Na]=0.44.This valueis close to the expectation based on HST photometry of its horizontal branch <cit.>, but looks large for the present cluster mass, according to the relation between IQR[O/Na] and absolute V magnitude M_V we found (see Fig. <ref>, upper panel). However, this is true also in other cases in which a strong mass loss is suspected, such as NGC 288, M71, and NGC 6218. In <cit.> we suggested that cluster concentration (c) also plays a role and may explain part of the scatter in the IQR-M_V relation. If we plot the residuals around this relation against c, we see a neat anti-correlation (Fig. <ref>, lower panel). The outliers are three post-core collapse GCs, for which the concentration parameter is arbitrarily set at 2.5 since they do not fit a King luminosity profile, and NGC 6535, which again indicates some peculiarity for this cluster.Using the separation defined in <cit.> of FG and SG stars inthe P, I, and E populations (i.e.primordial, intermediate, and extreme, respectively), we see that NGC 6535 does not harbour any E star but has a large number of I stars. The lack of E stars is consistent with the low mass of the clusterand also suggests that its initial mass should not have been too large. The fraction of P (FG) and I (SG) stars turns out 31±10% and 69±16%, respectively, which is in line with the other GCs of our survey <cit.>. From our extensive FLAMES survey, we derived an almost constant <cit.> fraction of FG to SG stars with FG stars constituting about one-third of the present-day cluster population, according to Na and O abundances (confirmed also by ). This is valid at least for the more massive GCs (Terzan 8 is an exception, see ), even with some variation, from about 25% to 50%. Also, a different fraction could be obtained with aseparation based on photometry; see the case of NGC 288 discussed in<cit.> or M 13 <cit.>. <cit.> used the HST UV Legacy survey to characterize MPs using what they called a “chromosome map". They were able to measure the fraction of FG stars (indicated as 1G) and found it variable from cluster to cluster, from ∼ 8% to ∼ 67%. However, their median value is 36% (rms=0.09) and if we compare their values and what we obtain from our Na-O sample for the GCs in common, 1G and P fractions generally follow a one-to-one relation (Carretta et al., in preparation). For NGC 6535, their 54±8 is only marginally inconsistent with our 31±10% value for 1G/P stars.According to <cit.> there is a correlation of the MP phenomenon with cluster mass with higher mass GCs showing a lower frequency of 1G stars. While thisclosely matches what we found for the extension of the Na-O anti-correlation, larger in more massive GCs (, see also Fig. <ref>), we do not find a relation between the fraction of FG stars and cluster mass (or age), when the separation in populations is based on Na and O.Also because photometry and high-resolution spectroscopy do not seem to tell exactly the same story, it is interesting to compare theirresults for NGC 6535. We downloaded the HST UV Legacy early-release data from the Mikulski Archive for Space Telescopes (MAST) and cross-matched the catalogue with our stars, finding only 15 objects in common. 
Figure <ref> shows a diagram plotting the pseudo-colour C_F275W,F336W,F438W = F275W-2× F336W+F438Wagainst F336W magnitude with our stars indicated by large open circles.We did not apply any quality selection to the photometry or try differential reddening correction, and therefore theRGB separation in two branches is less evident than in <cit.>.We may see that P and I separatealong different sequences, although not perfectly. To better appreciate this, we used a line to straighten the RGB and compute the difference in pseudo-colour, i.e. Δ(C_F275W,F336W,F438W). The left-hand panel of Fig. <ref> shows the result; all P stars (with one exception) are confined to the right side and I stars tend to occupy the left side of the RGB, but are more spread out. If we consider Na abundance (right-hand panel of Fig. <ref>), we see that five of the P stars share the same value (mean [Na/Fe]=-0.02, rms=0.11). The exception is exactly at the border of our P, I separation and may also be a misclassification. Instead, at about the same sodium, I stars may have a wide range of Δ(C_F275W,F336W,F438W) (that is, a range in C, N abundance), another manifestation that Na and N are correlated, but do not tell exactly the same story, as previously stated, e.g. by <cit.>.§ INTERNAL KINEMATICS In spite of the relatively small number of cluster members the present datasetrepresents the most extensive set of radial velocities for NGC 6535 and is very useful to study the internal kinematics of this cluster.As a first step, we test the presence of systemic rotation. For this purpose,in Fig. <ref> the radial velocities of the 29 bona fide members are plotted against their position angles. The best-fit sinusoidal curve indicates a rotation amplitude of A_rot sin i=1.28 ± 2.71 km s^-1, compatible with no significant rotation.We then used our radial velocity dataset to estimate the dynamical mass of the system. For this purpose we fitted the distribution of radial velocities with both a single mass <cit.> model and a multi-mass King-Michie models <cit.>. In particular, for each model we tuned the model mass to maximize the log-likelihood L=-∑_i=1^N((v_i-⟨ v ⟩)^2/(σ_i^2+ϵ_i^2)+ln (σ_i^2+ϵ_i^2)),where N(=29) is the number of available radial velocities, v_i and ϵ_i arethe radial velocity of the i-th star and its associated uncertainty, and σ_i is the line-of-sight velocity dispersion predicted by the model at the distance from the cluster centre of the i-th star.The best-fit single mass model provides a total mass of 2.12×10^4 M_⊙. For the multi-mass model we chose a present-day mass function of single stars with a positive slope of α=+1 (for reference, a Salpeter mass function has α=-2.35). Such a peculiar mass function is indeed necessary to reproduce the paucity of low-mass stars observed in the HST CMD presented by <cit.>. We adopted the prescriptions for dark remnants and binaries of Sollima et al. (2012) assuming a binary fraction of 4% and a flat distribution of mass ratios. The derived mass turned out to be 2.21×10^4 M_⊙ with a typical uncertainty of ∼ 7.8×10^3 M_⊙. This value is larger than found by <cit.>, who have Log(mass)=3.53; the difference comes from the different methods used (kinematics versus photometric profiles). 
In any case, even our value places NGC 6535 among the presently less massive MW GCs. By assuming the absolute magnitude M_V=-4.75 listed for this cluster in the Harris catalogue (<cit.>, 2010 edition), the corresponding M/L_V ratios are 3.12 and 3.25 for the single-mass and multi-mass models, respectively, while larger M/L ratios (3.78 and 3.95) are instead obtained if the integrated magnitude M_V=-4.54 by <cit.> is adopted. Such a large M/L ratio is consistent with what was already measured in other low-mass clusters of our survey <cit.>. This evidence can be interpreted as an effect of the strong interaction of this cluster with the Galactic tidal field, affecting the M/L in a twofold way: i) tidal heating inflates the cluster velocity dispersion, spuriously increasing the derived dynamical mass, and ii) the efficient loss of stars leads to an increased relative fraction of remnants, which contribute to the mass without emitting any light. Dedicated N-body simulations are needed to understand the relative impact of the two above-mentioned effects.

§ AGE AND MASS LIMITS FOR MULTIPLE POPULATIONS

As already put forward in the Introduction, there seems to be an observational lower limit to the mass a cluster needs in order to display the Na-O anti-correlation <cit.>. We used the absolute magnitude in the V band, M_V, as a proxy for mass, since it is available for all MW GCs <cit.> while the mass is not. However, the majority of clusters for which information on the presence (or, very rarely, absence) of anti-correlations between light elements is available have high mass. We therefore decided to systematically observe lower mass GCs, along with old and massive OCs, to obtain and homogeneously analyse samples of stars as large as those for the other clusters of our survey. We obtained data on a few objects (see Introduction), but other groups were also active along the same lines, and the situation has therefore improved in recent years.

We revise here the census of clusters for which an indication on MPs has been obtained. Starting from the clusters in <cit.> (see their Fig. 3) and the updated table in <cit.>, we scanned the literature up to June 2017 for new results on the anti-correlation of light elements in GCs based on high-resolution spectroscopy (i.e. involving Na, O or Mg, Al). At variance with <cit.>, we also considered evidence of MPs based on photometry or low-resolution spectroscopy (i.e. involving C, N) whenever no high-resolution spectroscopy was available for the particular cluster. For the OCs, we only considered a subsample: the two old, massive OCs Berkeley 39 and NGC 6791[For this cluster, <cit.> proposed the presence of MPs, which we and <cit.> did not confirm. We consider it to be a single stellar population unless otherwise proved by new observations.] observed and analysed homogeneously by us <cit.>, five OCs in the Gaia-ESO Survey <cit.>, one in APOGEE <cit.>, and four more from different groups. While not exhaustive, the OC list comprises only clusters in which large samples of stars were analysed; for more data on OCs and MPs, see <cit.>. The result of this process is presented in Table <ref> and Fig. <ref> for the MW clusters. The table lists all the MW GCs (plus several OCs) for which a statement on MPs can be made.
The Table also presents the clusters' metallicity and M_V <cit.>, relative age <cit.>, mass <cit.>, a reference to the first paper(s) discussing MPs or to our homogeneous survey, if available, and a flag indicating whether MPs are actually present. We gathered information on 90 GCs out of the 157 in <cit.>, i.e. about 57% of the known MW GCs. Of these clusters, 77 definitely show the presence of MPs: 63 on the basis of Na,O and/or Mg,Al anti-correlations and 14 on the basis of C,N variations. For eight more clusters the answer is uncertain, but generally the positive option is favoured. Finally, only a handful of cases seem to host SSPs (but note Pal 12, for which two answers are provided by different methods); all of these have a low M_V, except Ruprecht 106. On the contrary, all the OCs are SSPs. Figure <ref> shows the same information graphically, using M_V and relative age (on a scale where 1 = 13.5 Gyr). The plot confirms that NGC 6535 is the smallest GC showing MPs and that the bulk of old, high-mass GCs host MPs. Figure <ref> is similar, but shows the mass, taken from <cit.> for homogeneity's sake; in fact, there are two different values for NGC 6535. Even if the information is available for fewer GCs, the picture is the same: cluster mass is surely one of the main drivers for the presence/absence of MPs. However, a second factor seems important, the cluster age: with the exception of E3, all other SSP clusters are young.

To constrain models of cluster formation and to test pollution of the SG by pristine stars, it is fundamental to study the presence of MPs as a function of both mass and age. Of course, we must take into account that age and mass seem to be related, at least in the MW, since massive clusters tend to be old (see Figs. <ref> and <ref>). This means that we need to resort to extragalactic clusters, where, at variance with the MW, we find young(er) and massive clusters. Limiting the study to clusters in which individual stars can be resolved and observed, some clusters in nearby dwarfs have been studied. We present the collected information on the extragalactic clusters in Table <ref>, where we took the metallicity from the individual papers and the mass and age from <cit.>. Figure <ref> plots age versus mass for these LMC, SMC, and Fnx dSph clusters. <cit.> obtained high-resolution spectra of three stars in each of three metal-poor GCs in the Fnx dSph, finding MPs. <cit.> confirmed the finding by means of HST near-UV and optical photometry, thereby adding a fourth cluster. <cit.> found MPs in three old GCs in the LMC on the basis of their Na, O distributions. In all these cases the SG does not seem to be as predominant as in MW GCs of similar mass and age. When, however, younger ages are considered, <cit.> did not find MPs in intermediate-age LMC clusters, seemingly reproducing what we see in the MW (but see below). <cit.> discovered the presence of MPs in NGC 121, the only SMC cluster as old as the classical MW GCs, using mostly near-UV HST photometry. This was the first SMC cluster to display the presence of MPs; the discovery was confirmed by <cit.> using a similar HST filter set. Interestingly, <cit.> found a dominant FG (about 65%), which could be due either to a smaller formation of SG stars or to a smaller loss of FG stars than in MW GCs of similar mass and metallicity. These authors also acquired high-resolution spectra of five giants, but these seem to belong only to the FG, according to their Na, O and Al, Mg abundances.
This may be due to small number statistics, especially in the presence of a dominant FG, or to the decoupling of the "C,N" and "Na,O" effects. The latter effect was touched upon in <cit.>, and we plan to discuss the issue in a forthcoming paper. Other MC clusters have indeed been found to host MPs on the basis of their (UV-visual) CMDs and/or CN, CH band strengths; both methods rely ultimately on the increased N and decreased C abundance in SG stars. Apart from the increasing number of studied clusters, the interesting part is that MPs are found in younger and younger clusters. For instance, <cit.> detected MPs in the SMC cluster Lindsay 1 (age about 8 Gyr) using low-resolution spectra and CN, CH bands; <cit.>, using photometry, confirmed the result and added two more not-so-old SMC clusters, NGC 339 and NGC 416 (ages between 6 and 7.5 Gyr). These authors also measured the fraction of SG stars in the three clusters, which always turned out to be lower than the average MW value, at least relative to the values based on the Na, O anti-correlation (36%, 25%, and 45% in Lindsay 1, NGC 339, and NGC 416, respectively). These clusters are massive, about 10^5 M_⊙, and fill the gap between the classical old MW clusters and the intermediate-age and young SMC and LMC clusters, where no indication of MPs had been found; see <cit.>, <cit.>, and Table <ref>. However, the situation has evolved recently, with the discovery of split sequences in the LMC cluster NGC 1978 (Lardo, priv. comm.; <cit.>), which is only 2 Gyr old (translating to a redshift z ∼ 0.15). This cluster had been considered an SSP by <cit.>, looking at Na and O. The presence of MPs also at young ages (i.e. in clusters formed at redshifts well past the epoch of GC formation) seems at odds with what we find in the MW (see Table <ref> and Figs. <ref>, <ref>) and needs to be explained. More observations of the same clusters, using both photometry/CN bands and high-resolution spectroscopy, are also required to clarify whether they are really tracing the same phenomenon.

§ SUMMARY AND CONCLUSIONS

As part of our large, homogeneous survey of MW GCs with FLAMES, we observed NGC 6535, which is possibly the lowest mass cluster in which MPs are present (see Fig. <ref>). We collected our data just before the first CMDs of the HST UV Legacy Survey were made public, and we can confirm spectroscopically that stars belonging to different stellar populations co-exist in this cluster. We measured abundances of Fe and other species, but concentrated here only on the light elements. The extension of the Na-O anti-correlation has been measured, with IQR[O/Na]=0.44, in line with the idea that the cluster has suffered strong mass loss. Using the Na, O distribution, we found that the FG and SG comprise about 30% and 70% of the stars, respectively, while the separation based on UV-optical filters is close to 50-50% <cit.>. This difference is not unusual when comparing fractions based essentially on O and Na or on C and N, respectively. Using the RVs and King models, we estimated the cluster mass; even if our result is larger than in <cit.>, we confirm that NGC 6535 has a low present-day mass. To put NGC 6535 in context, we collected all available literature information on the presence (or absence) of MPs in clusters. Using either M_V or mass, we confirm that this cluster lies near the lower envelope of the MW GCs hosting MPs (see Figs. <ref>, <ref>). In addition to mass, age also seems to play a role in deciding upon the appearance of MPs, and this puts important constraints on models of cluster formation.
While all young MW clusters (open clusters) studied so far seem to be SSPs, some younger extragalactic clusters have been found to host MPs. To really advance in our comprehension of the MP phenomenon, further observations are required and a strong synergy between theory/models and photometry, low-resolution, and high-resolution spectroscopy is to be encouraged. This research has made use of Vizier and SIMBAD, operated at CDS, Strasbourg, France, NASA's Astrophysical Data System, and TOPCAT (http://www.starlink.ac.uk/topcat/). The Guide Star Catalogue-II is a joint project of the Space Telescope Science Institute and the Osservatorio Astronomico di Torino. Space Telescope Science Institute is operated by the Association of Universities for Research in Astronomy, for the National Aeronautics and Space Administration under contract NAS5-26555. The participation of the Osservatorio Astronomico di Torino is supported by the Italian Council for Research in Astronomy. Additional support is provided by European Southern Observatory, Space Telescope European Coordinating Facility, the International GEMINI project, and the European Space Agency Astrophysics Division. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. Some of the data presented in this paper were obtained from the Mikulski Archive for Space Telescopes (MAST). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. Support for MAST for non-HST data is provided by the NASA Office of Space Science via grant NNX09AF08G and by other grants and contracts.
[Alonso et al.(1999)]alonsoAlonso, A., Arribas, S., &Martínez-Roger, C. 1999, , 140, 261 [Alonso et al.(2001)]alonso2Alonso, A., Arribas, S., & Martínez-Roger, C. 2001, , 376, 1039 [Anthony-Twarog & Twarog(1985)]attAnthony-Twarog, B. J., & Twarog, B. A. 1985, , 291, 595 [Barbuy et al.(2016)]barbuy16Barbuy, B., Cantelli, E., Vemado, A., et al. 2016, , 591, A53 [Askar et al.(2017)]askar17Askar, A., Bianchini, P., de Vita, R., et al. 2017, , 464, 3090 [Bastian et al.(2013)]bastianBastian, N., Lamers, H. J. G. L. M., de Mink, S. E., et al. 2013, , 436, 2398 [Bastian & Lardo(2015)]bastian_lardoBastian, N., & Lardo, C. 2015, , 453, 357 [Bastian et al.(2015)]bastian15 Bastian, N., Cabrera-Ziri, I., & Salaris, M. 2015, , 449, 3333[Beccari et al.(2013)]beccari13Beccari, G., Bellazzini, M., Lardo, C., et al. 2013, , 431, 1995 [Boberg et al.(2015)]boberg15Boberg, O. M., Friel, E. D., & Vesperini, E. 2015, , 804, 109 [Boberg et al.(2016)]boberg16Boberg, O. M., Friel, E. D., & Vesperini, E. 2016, , 824, 5 [Böcek Topcu et al.(2015)]bocektopcu15Böcek Topcu, G., Afşar, M., Schaeuble, M., & Sneden, C. 2015, , 446, 3562 [Bragaglia et al.(2001)]bragaglia01Bragaglia, A., Carretta, E., Gratton, R. G., et al. 2001, , 121, 327 [Bragaglia et al.(2012)]be39Bragaglia, A., Gratton, R. G., Carretta, E., et al. 2012, , 548, AA122 [Bragaglia et al.(2014)]bragaglia6791Bragaglia, A., Sneden, C., Carretta, E., et al. 2014, , 796, 68 [Bragaglia et al.(2015)]bragaglia6139Bragaglia, A.,Carretta, E., Sollima, A., et al. 2015, , 538, A69[Çalışkan et al.(2012)]caliskan12Çalışkan, Ş., Christlieb, N., & Grebel, E. K.
2012, , 537, A83[Cantat-Gaudin et al.(2014a)]doopCantat-Gaudin, T., Donati, P., Pancino, E., et al. 2014a, , 562, AA10 [Cantat-Gaudin et al.(2014b)]m11Cantat-Gaudin, T., Vallenari, A., Zaggia, S., et al. 2014b, , 569, A17 [Carretta(2006)]carretta06 Carretta, E.2006, , 131, 1766[Carretta(2016)]heidiCarretta, E. 2016, arXiv:1611.04728 [Carretta & Gratton(1997)]cg97Carretta, E., & Gratton, R. G. 1997, , 121, 95 [Carretta et al.(2007a)]n6752Carretta, E., Bragaglia, A., Gratton, R. G., Lucatello, S., & Momany, Y. 2007a, , 464, 927 [Carretta et al.(2007b)]n6218Carretta, E., Bragaglia, A., Gratton, R. G., et al. 2007b, , 464, 939 [Carretta et al.(2007c)]n6388Carretta, E., Bragaglia, A., Gratton, R. G., et al. 2007, , 464, 967 [Carretta et al.(2009a)]carretta09aCarretta, E., Bragaglia, A., Gratton, R. G., et al. 2009a, , 505, 117 [Carretta et al.(2009b)]carretta09b Carretta, E., Bragaglia, A., Gratton, R.G., Lucatello, S.2009b, , 505, 139 [Carretta et al.(2009c)]carretta09c Carretta, E., Bragaglia, A., Gratton, R.G., D'Orazi, V., Lucatello, S.2009c, , 508, 695 [Carretta et al.(2010a)]zcorriCarretta, E., Bragaglia, A., Gratton, R.G., Recio-Blanco, A.,Lucatello, S., D'Orazi, V., Cassisi, S. 2010a, , 516, 55 [Carretta et al.(2010b)]m54aCarretta, E., Bragaglia, A., Gratton, R. G., et al. 2010b, , 714, L7 [Carretta et al.(2010c)]m54bCarretta, E., Bragaglia, A., Gratton, R. G., et al. 2010c, , 520, A95 [Carretta et al.(2010d)]n1851aCarretta, E., Gratton, R. G., Lucatello, S., et al. 2010d, , 722, L1 [Carretta et al.(2011a)]n1851Carretta, E., Lucatello, S., Gratton, R. G., Bragaglia, A., & D'Orazi, V. 2011a, , 533, A69 [Carretta et al.(2011b)]stromgrenCarretta, E., Bragaglia, A., Gratton, R., D'Orazi, V., & Lucatello, S. 2011b, , 535, AA121 [Carretta et al.(2013b)]n362Carretta, E., Bragaglia, A., Gratton, R. G., et al. 2013b, , 557, A138 [Carretta et al.(2014a)]ter8Carretta, E., Bragaglia, A., Gratton, R. G., et al. 2014a, , 561, AA87 [Carretta et al.(2014b)]n4833Carretta, E., Bragaglia, A., Gratton, R. G., et al. 2014b, , 564, AA60 [Carretta et al.(2015)]m80Carretta, E., Bragaglia, A., Gratton, R. G., et al. 2015, , 478, AA116 [Carretta et al.(2017)]n5634Carretta, E., Bragaglia, A., Lucatello, S., et al. 2017, , 600, A118 [Cohen(1978)]cohen78Cohen, J. G. 1978, , 223, 487[Cohen(1981)]cohenm22Cohen, J. G. 1981, , 247, 869 [Cohen(2004)]cohenpal12Cohen, J. G. 2004, , 127, 1545 [Cohen & Melendez(2005a)]cohen_melendezCohen, J. G., & Melendez, J. 2005a, , 129, 1607 [Cohen & Meléndez(2005b)]m3_m13Cohen, J. G., & Meléndez, J. 2005b, , 129, 303 [Cohen & Kirby(2012)]cohen_kirbyCohen, J. G., & Kirby, E. N. 2012, , 760, 86[Cunha et al.(2015)]cunha15Cunha, K., Smith, V. V., Johnson, J. A., et al. 2015, , 798, L41 [Dalessandro et al.(2012)]2012AJ....144..126D Dalessandro, E.,Schiavon, R. P., Rood, R. T., et al. 2012, , 144, 126 [Dalessandro et al.(2016)]dalessandro16Dalessandro, E., Lapenna, E., Mucciarelli, A., et al. 2016, , 829, 77 [Decressin et al.(2007)]decressin07 Decressin, T., Charbonnel, C., & Meynet, G. 2007, , 475, 859 [De Mink et al.(2009)]demink09 de Mink, S. E., Pols, O. R., Langer, N., & Izzard, R. G. 2009, , 507, L1[Denisenkov & Denisenkova(1989)]dd89 Denisenkov, P. A., & Denisenkova, S. N. 1989, Astronomicheskij Tsirkulyar, 1538, 11 [Denissenkov & Hartwick(2014)]dh14 Denissenkov, P. A., & Hartwick, F. D. A. 2014, , 437, L21 [De Silva et al.(2007)]desilva07De Silva, G. M., Freeman, K. C., Asplund, M., et al. 
2007, , 133, 1161 [Donati et al.(2014)]donati14Donati, P., Cantat Gaudin, T., Bragaglia, A., et al. 2014, , 561, A94 [Feltzing et al.(2009)]feltzing09Feltzing, S., Primas, F., & Johnson, R. A. 2009, , 493, 913 [Frank et al.(2015)]frank15Frank, M. J., Koch, A., Feltzing, S., et al. 2015, , 581, A72 [Geisler(1988)]geisler88Geisler, D. 1988, , 100, 687 [Geisler et al.(2012)]geisler12Geisler, D., Villanova, S., Carraro, G., et al. 2012, , 756, L40 [Gilmore et al.(2012)]gilmore12Gilmore, G., Randich, S., Asplund, M., et al. 2012, The Messenger, 147, 25 [Gratton(1988)]rosa Gratton, R.G. 1988, Rome Obs. Preprint Ser. 29[Gratton et al.(1999)]gratton99Gratton, R. G., Carretta, E., Eriksson, K., &Gustafsson, B. 1999,, 350, 955[Gratton et al.(2003)]gratton03 Gratton, R. G., Carretta, E., Claudi, R., Lucatello, S., & Barbieri, M. 2003, , 404, 187[Gratton etal.(2004)]araaGratton, R., Sneden, C., &Carretta, E. 2004, , 42, 385 [Gratton et al.(2006)]gratton06Gratton, R. G., Lucatello, S., Bragaglia, A., et al. 2006, , 455, 271 [Gratton et al.(2007)]gratton07Gratton, R. G., Lucatello, S., Bragaglia, A., et al. 2007, , 464, 953 [Gratton et al.(2010)]grattonhbGratton, R. G., Carretta, E., Bragaglia, A., Lucatello, S., & D'Orazi, V. 2010, , 517, AA81 [Gratton et al.(2012)]gratton12Gratton, R.G., Carretta, E., Bragaglia, A.2012, , 20, 50[Gratton et al.(2015)]gratton15Gratton, R. G., Lucatello, S., Sollima, A., et al. 2015, , 573, A92 [Gunn & Griffin(1979)]gunn_griffinGunn, J. E., & Griffin, R. F. 1979, , 84, 752 [Halford & Zaritsky(2015)]halford_zaritsky Halford M., Zaritsky D., 2015, , 815, 86[Hanke et al.(2017)]hanke17Hanke, M., Koch, A., Hansen, C. J., & McWilliam, A. 2017, , 599, A97 [Harris(1996)]harrisHarris, W. E. 1996, , 112, 1487 [Hesser et al.(1986)]hsm86Hesser, J. E., Shawl,S. J., & Meyer, J. E. 1986, , 98, 403 [Hollyhead et al.(2017)]hollyhead17Hollyhead, K., Kacharov, N., Lardo, C., et al. 2017, , 465, L39 [Johnson et al.(2006)]johnson06 Johnson, J. A., Ivans, I. I., & Stetson, P. B. 2006, , 640, 801[Johnson & Pilachowski(2010)]jonsonomegaJohnson, C. I., & Pilachowski, C. A. 2010, , 722, 1373 [Johnson et al.(2013)]johnson13Johnson, C. I., Rich, R. M., Pilachowski, C. A., & Kunder, A. M. 2013, American Astronomical Society Meeting Abstracts #221, 221, 250.19 [Johnson et al.(2015)]johnson15Johnson, C. I., Rich, R. M., Pilachowski, C. A., et al. 2015, , 150, 63 [Johnson et al.(2016)]johnson16Johnson, C. I., Caldwell, N., Rich, R. M., Pilachowski, C. A., & Hsyu, T. 2016, , 152, 21 [Johnson et al.(2017)]johnson17Johnson, C. I., Caldwell, N., Rich, R. M., et al. 2017, , 842, 24 [Kacharov et al.(2014)]kacharov14Kacharov, N., Bianchini, P., Koch, A., et al. 2014, , 567, A69 [Kayser et al.(2008)]kayser08Kayser, A., Hilker, M., Grebel, E. K., & Willemsen, P. G. 2008, , 486, 437 [King(1966)]king66King, I. R. 1966, , 71, 64 [Koch & McWilliam(2014)]koch_mcwilliamKoch, A., & McWilliam, A. 2014, , 565, A23 [Koch et al.(2009)]koch09Koch, A., Côté, P., & McWilliam, A. 2009, , 506, 729 [Kraft et al.(1992)]kraft92Kraft, R. P., Sneden, C., Langer, G. E., & Prosser, C. F. 1992, , 104, 645 [Kraft et al.(1998)]kraft98Kraft, R. P., Sneden, C., Smith, G. H., Shetrone, M. D., & Fulbright, J. 1998, , 115, 1500 [Krause et al.(2016)]krause16Krause, M. G. H., Charbonnel, C., Bastian, N., & Diehl, R. 2016, , 587, A53 [Kurucz(1993)]kurKurucz, R. 1993, ATLAS9 Stellar Atmosphere Programs and 2 km/s grid. Kurucz CD-ROM No. 13. Cambridge, Mass.: Smithsonian Astrophysical Observatory, 1993., 13,[Langer et al.(1993)]langer Langer, G. 
E., Hoffman, R., & Sneden, C. 1993, , 105, 301[Lapenna et al.(2015)]lapenna15Lapenna, E., Mucciarelli, A., Ferraro, F. R., et al. 2015, , 813, 97 [Larsen et al.(2012)]larsen12Larsen, S. S., Brodie, J. P., & Strader, J. 2012, , 546, A53 [Larsen et al.(2014)]larsen14Larsen, S. S., Brodie, J. P., Grundahl, F., & Strader, J. 2014, , 797, 15[Lata et al.(2002)]lata02Lata, S., Pandey, A. K., Sagar, R., & Mohan, V. 2002, , 388, 158 [Leaman et al.(2013)]leaman13Leaman, R., VandenBerg, D. A., & Mendel, J. T. 2013, , 436, 122 [Lee(2007)]lee07Lee, J.-W. 2007, Revista Mexicana de Astronomia y Astrofisica Conference Series, 28, 120 [Lee(2015)]lee15 Lee, J.-W. 2015, , 219, 7[Leon et al.(2000)]leon00Leon, S., Meylan, G., & Combes, F. 2000, , 359, 907 [Letarte et al.(2006)]letarte06Letarte, B., Hill, V., Jablonka, P., et al. 2006, , 453, 547 [Liller(1980)]lillerLiller, M. H. 1980, , 85, 1480 [Maccarone & Zureck(2012)]mz12 Maccarone, T. J., & Zurek, D. R. 2012, , 423, 2[Mackey & Gilmore(2003a)]mackey_gilmore03aMackey, A. D., & Gilmore, G. F. 2003a, , 338, 85 [Mackey & Gilmore(2003b)]mackey_gilmore03bMackey, A. D., & Gilmore, G. F. 2003b, , 338, 120[Mackey & Gilmore(2003c)]mackey_gilmore03cMackey, A. D., & Gilmore, G. F. 2003c, , 340, 175 [MacLean et al.(2015)]maclean15MacLean, B. T., De Silva, G. M., & Lattanzio, J. 2015, , 446, 3556 [Magain(1984)]magain Magain, P. 1984, , 134, 189 [Magrini et al.(2015)]magrini15Magrini, L., Randich, S., Donati, P., et al. 2015, , 580, A85 [Majewski et al.(2016)]apogeeMajewski, S. R., APOGEE Team, & APOGEE-2 Team 2016, Astronomische Nachrichten, 337, 863 [Marín-Franch et al.(2009)]marin-franchMarín-Franch, A., Aparicio, A., Piotto, G., et al. 2009, , 694,1498 [Marino et al.(2009)]marinom22Marino, A. F., Milone, A. P., Piotto, G., et al. 2009, , 505, 1099 [Marino et al.(2011)]marinoomegaMarino, A. F., Milone, A. P., Piotto, G., et al. 2011, , 731, 64 [Marino et al.(2015)]marino15Marino, A. F., Milone, A. P., Karakas, A. I., et al. 2015, , 450, 815 [Martell et al.(2008)]martell08 Martell, S. L., Smith,G. H., & Briley, M. M. 2008, , 136, 2522 [Martocchia et al.(2017a)]martocchia17Martocchia, S., Bastian, N., Usher, C., et al. 2017a, , 468, 3150 [Martocchia et al.(2017b)]martocchiasubm Martocchia, S.,Cabrera-Ziri, I., Lardo, C., et al. 2017b, , submitted[Massari et al.(2016)]massari16Massari, D., Lapenna, E., Bragaglia, A., et al. 2016, , 458, 4162 [McLaughlin & van der Marel(2005)]mlvdm05McLaughlin, D. E., & van der Marel, R. P. 2005, , 161, 304 [Mészáros et al.(2015)]meszaros15Mészáros, S., Martell, S. L., Shetrone, M., et al. 2015, , 149, 153[Milone et al.(2012a)]milone12Milone, A. P., Piotto, G., Bedin, L. R., et al. 2012a, , 744, 58[Milone et al.(2014)]milonehb Milone, A. P., Marino,A. F., Dotter, A., et al. 2014, , 785, 21 [Milone et al.(2017)]milone17Milone, A. P., Piotto, G., Renzini, A., et al. 2017, , 464, 3636 [Monelli et al.(2013)]sumoMonelli, M., Milone, A. P., Stetson, P. B., et al. 2013, , 431, 2126 [Mucciarelli et al.(2008)]mucciarelli08Mucciarelli, A., Carretta, E., Origlia, L., & Ferraro, F. R. 2008, , 136, 375 [Mucciarelli et al.(2009)]mucciarelli09Mucciarelli, A., Origlia, L., Ferraro, F. R., & Pancino, E. 2009, , 695, L134 [Mucciarelli et al.(2011)]mucciarelli11Mucciarelli, A., Cristallo, S., Brocato, E., et al. 2011, , 413, 837 [Mucciarelli et al.(2012)]mucciarelli12Mucciarelli, A., Bellazzini, M., Ibata, R., et al. 2012, , 426, 2889 [Mucciarelli et al.(2013)]mucciarelli13Mucciarelli, A., Bellazzini, M., Catelan, M., et al. 
2013, , 435, 3667 [Mucciarelli et al.(2014)]mucciarelli1806Mucciarelli, A., Dalessandro, E., Ferraro, F. R., Origlia, L., & Lanzoni, B. 2014, , 793, L6 [Mucciarelli et al.(2016)]mucciarelli16Mucciarelli, A., Dalessandro, E., Massari, D., et al. 2016, , 824, 73 [Muñoz et al.(2017)]munoz17Muñoz, C., Villanova, S., Geisler, D., et al. 2017, arXiv:1705.02684 [Niederhofer et al.(2017a)]niederhofer17aNiederhofer, F., Bastian, N., Kozhurina-Platais, V., et al. 2017a, , 464, 94 [Niederhofer et al.(2017b)]niederhofer17bNiederhofer, F., Bastian, N., Kozhurina-Platais, V., et al. 2017b, , 465, 4159 [Odenkirchen et al.(2001)]odenkirchen01Odenkirchen, M., Grebel, E. K., Rockosi, C. M., et al. 2001, , 548, L165 [O'Malley et al.(2017)]omalley17 O'Malley, E. M., Kniazev, A.,McWilliam, A., Chaboyer, B.2017, arXiv:1706.06962[Overbeek et al.(2015)]overbeek15Overbeek, J. C., Friel, E. D., Jacobson, H. R., et al. 2015, , 149, 15 [Overbeek et al.(2017)]overbeek17Overbeek, J. C., Friel, E. D., Donati, P., et al. 2017, , 598, A68 [Pancino et al.(2010)]pancino10Pancino, E., Rejkuba, M., Zoccali, M., & Carrera, R. 2010, , 524, A44 [Pancino et al.(2017)]pancinogesPancino, E., Romano, D., Tang, B., et al. 2017, , 601, A112 [Pasquini et al.(2002)]pasquiniPasquini, L., Avila, G., Blecha, A., et al. 2002, The Messenger, 110, 1 [Piotto et al.(2015)]piottouv Piotto, G., Milone,A. P., Bedin, L. R., et al. 2015, , 149, 91 [Pryor & Meylan(1993)]pm93Pryor, C., & Meylan, G. 1993, Structure and Dynamics of Globular Clusters, 50, 357 [Randich et al.(2006)]randich06Randich, S., Sestito, P., Primas, F., Pallavicini, R., & Pasquini, L. 2006, , 450, 557 [Roederer et al.(2016)]roederer16Roederer, I. U., Mateo, M., Bailey, J. I., et al. 2016, , 455, 2417 [Rosenberg et al.(1999)]rosenberg99Rosenberg, A., Aparicio, A., Saviane, I., & Piotto, G. 2000, , 145, 451 [Rosenberg etal.(2000)]rosenberg00 Rosenberg, A., Saviane, I., Piotto, G., & Aparicio, A. 1999, , 118, 2306 [Rutledge et al.(1997a)]rutledge97aRutledge, G. A.,Hesser, J. E., Stetson, P. B., et al. 1997a, , 109, 883 [Rutledge et al.(1997b)] rutledge97b Rutledge, G. A., Hesser, J. E., & Stetson, P. B. 1997b, , 109, 907 [Sakari et al.(2011)]sakari11Sakari, C. M., Venn, K. A., Irwin, M., et al. 2011, , 740, 106 [Salinas & Strader(2015)]salinas_straderSalinas, R., & Strader, J. 2015, , 809, 169 [San Roman et al.(2015)]sanroman15San Roman, I., Muñoz, C., Geisler, D., et al. 2015, , 579, A6 [Sarajedini(1994)]sarajediniSarajedini, A. 1994, , 106, 404 [Sarajedini et al.(2007)]treasury Sarajedini, A.,Bedin, L. R., Chaboyer, B., et al. 2007, , 133, 1658 [Sbordone et al.(2005)]sbordoneter7Sbordone, L., Bonifacio, P., Marconi, G., Buonanno, R., & Zaggia, S. 2005, , 437, 905 [Schiavon et al.(2017)]schiavon17Schiavon, R. P., Johnson, J. A., Frinchaboy, P. M., et al. 2017, , 466, 1010 [Skrutskie et al.(2006)]2massSkrutskie, M. F., et al. 2006, , 131, 1163 [Smith & Bell(1986)]smith_bell86Smith, G. H., & Bell, R. A. 1986, , 91, 1121 [Smith et al.(2002)]smithpal15Smith, G. H., Sneden, C., & Kraft, R. P. 2002, , 123, 1502 [Smith et al.(2013)]smith13Smith, G. H., Modi, P. N., & Hamren, K. 2013, , 125, 1287 [Smith(2015)]smith15Smith, G. H. 2015, , 127, 1204[Smolinsky et al.(2011)]smolinsky11 Smolinski, J. P., Martell, S. L., Beers, T. C., & Lee, Y. S. 2011, , 142, 126[Sollima et al.(2012)]sollima12Sollima, A., Nipoti, C., Mastrobuono Battisti, A., Montuori, M., & Capuzzo-Dolcetta, R. 2012, , 744, 196 [Soto et al.(2017)]soto17Soto, M., Bellini, A., Anderson, J., et al. 
2017, , 153, 19 [Souto et al.(2016)]souto16Souto, D., Cunha, K., Smith, V., et al. 2016, , 830, 35 [Stetson & Pancino(2008)]daospecStetson, P. B., & Pancino, E. 2008, , 120, 1332 [Tang et al.(2017a)]tang17aTang, B., Geisler, D., Friel, E., et al. 2017a, , 601, A56 [Tang et al.(2017b)]tang17bTang, B., Cohen, R. E., Geisler, D., et al. 2017b, , 465, 19 [Tautvaišienė et al.(2004)]taut04Tautvaišienė,G., Wallerstein, G., Geisler, D., Gonzalez, G.,Charbonnel, C. 2004, , 127, 373[Testa et al.(2001)]testa01Testa, V., Corsi, C. E., Andreuzzi, G., et al. 2001, , 121, 916 [VandenBerg et al.(2013)]vandenberg13 VandenBerg, D. A.,Brogaard, K., Leaman, R., & Casagrande, L. 2013, , 775, 134 [Venn et al.(2004)]venn04Venn, K. A., Irwin, M., Shetrone, M. D., et al. 2004, , 128, 1177 [Ventura et al.(2001)]ventura01 Ventura, P., D'Antona, F., Mazzitelli, I., & Gratton, R. 2001, , 550, L65[Villanova et al.(2013)]villanovarup106Villanova, S., Geisler, D., Carraro, G., Moni Bidin, C., & Muñoz, C. 2013, , 778, 186 [Villanova et al.(2016)]villanova4147Villanova, S., Monaco, L., Moni Bidin, C., & Assmann, P. 2016, , 460, 2351 [Villanova et al.(2017)]villanova17Villanova, S., Moni Bidin, C., Mauro, F., Munoz, C., & Monaco, L. 2017, , 464, 2730 [Yong et al.(2008)]yong08Yong, D., Meléndez, J., Cunha, K., et al. 2008, , 689, 1020-1030 [Yong et al.(2014a)]yong14aYong, D., Alves Brito, A., Da Costa, G. S., et al. 2014a, , 439, 2638 [Yong et al.(2014b)]yong14bYong, D., Roederer, I. U., Grundahl, F., et al. 2014b, , 441, 3396 [Zinn & West(1984)]zw84 Zinn, R., & West, M.J. 1984, , 55, 45
http://arxiv.org/abs/1708.07705v1
{ "authors": [ "A. Bragaglia", "E. Carretta", "V. D'Orazi", "A. Sollima", "P. Donati", "R. G. Gratton", "S. Lucatello" ], "categories": [ "astro-ph.SR", "astro-ph.GA" ], "primary_category": "astro-ph.SR", "published": "20170825120002", "title": "NGC 6535: the lowest mass Milky Way globular cluster with a Na-O anti-correlation? Cluster mass and age in the multiple population context" }
Demet Kirmizibayrak [email protected] Faculty of Engineering and Natural Sciences, Sabancı University, Orhanlı Tuzla, Istanbul 34956, Turkey Faculty of Science and Letters, Istanbul Technical University, 34469, Maslak, Istanbul, Turkey Faculty of Engineering and Natural Sciences, Sabancı University, Orhanlı Tuzla, Istanbul 34956, Turkey Faculty of Engineering and Natural Sciences, Sabancı University, Orhanlı Tuzla, Istanbul 34956, Turkey Faculty of Engineering and Natural Sciences, Sabancı University, Orhanlı Tuzla, Istanbul 34956, Turkey

We present our broadband (2-250 keV) time-averaged spectral analysis of 388 bursts from SGR J1550-5418, SGR 1900+14 and SGR 1806-20 detected with the Rossi X-ray Timing Explorer (RXTE), both here and as a database in a companion web-catalog. We find that the sum of two blackbody functions (BB+BB), the sum of two modified blackbody functions (LB+LB), the sum of blackbody and power-law functions (BB+PO), and a power law with a high-energy exponential cut-off (COMPT) all provide acceptable fits at similar levels. We performed numerical simulations to constrain the best-fitting model for each burst spectrum and found that 67.6% of the burst spectra with well-constrained parameters are better described by the Comptonized model. We also found that 64.7% of these burst spectra are better described with the LB+LB model, which is employed in SGR spectral analysis for the first time here, than with the BB+BB and BB+PO models. We found a significant positive trend in the lower bound on the photon index, suggesting a decreasing upper bound on hardness, with respect to total flux and fluence. We compare this result with bursts observed from SGR and AXP sources and suggest that the relationship is a characteristic that distinguishes the two. We confirm a significant anti-correlation between burst emission area and blackbody temperature, and find that it varies between the hot and cool blackbody temperatures differently than previously discussed. We expand on the interpretation of our results in the framework of the strongly magnetized neutron star case.

§ INTRODUCTION

Magnetars are neutron stars whose variety of energetic radiation mechanisms are thought to be governed by the decay of their extremely strong magnetic fields (B ∼ 10^14 - 10^15 G; ). Emission of energetic hard X-ray bursts is the most characteristic signature of magnetar-like behavior. There are currently 29 known magnetars (23 confirmed + 6 magnetar candidates), 18 of which (whose spins have also been measured) emitted short-duration (lasting only a fraction of a second) but very luminous bursts (; ; ). Magnetar bursts occur sporadically, and the total number of bursts varies from a few to hundreds during any given burst-active episode of the underlying magnetar. Each burst has the potential of revealing new insights into the burst triggering and radiation emission mechanisms. The principal ingredient of magnetar bursts is some type of disturbance by the extremely strong magnetic fields. According to the magnetar model, the solid crust of a neutron star could fracture when extremely large magnetic pressure builds up on it (). In this view, the scale of burst energetics would be related to the size of the fractured crustal site. It has also been suggested that the magnetospheres of these objects are globally twisted (). As an alternative burst trigger mechanism, it has been proposed that magnetic reconnection might take place in the twisted magnetosphere of magnetars (); magnetic reconnection is also invoked to account for energetic flares emitted from the Sun.
It is important to note that whether the trigger for short magnetar bursts is crustal or magnetospheric is still unresolved. The observed bursts are the end products of their initial triggers. Therefore, the photons radiated away as a burst might not be the direct consequence of the ignition; a number of processes in between are likely involved. In the magnetar view, the trigger mechanism leads to the formation of a trapped fireball in the magnetosphere, composed of e^±-pairs as well as photons (see e.g. ; ). Bursts are due to radiation from these trapped pair-rich fireballs. Additionally, the emerging radiation is expected to be modified as it propagates through the strongly magnetized and highly twisted magnetosphere (). It is therefore not straightforward to unfold the underlying mechanism from the burst data. Spectral and temporal studies of magnetar bursts are the most important probes to help distinguish mechanisms that could modify the emerging burst radiation. In previous spectral investigations, both thermal and non-thermal scenarios were invoked (e.g., ; ; ; ). In the non-thermal viewpoint (often analytically expressed as a power law with an exponential cut-off), the photons emerging from the ignition region are repeatedly Compton up-scattered by the hot e^±-pairs present in the magnetosphere. The corona of hot electrons may emerge in the inner dynamic magnetosphere due to field-line twisting (; ; ; ). Such coronas are expected to be anisotropic due to the intense and likely multipolar magnetic field around the magnetar. The density and optical thickness of the corona, as well as the electron temperature, set the characteristics of the spectral cut-off energy. Consequently, the peak energy parameter of the Comptonized (often labeled COMPT) model is interpreted in relation to the electron temperature. Time-integrated spectral analysis of nearly 300 bursts from SGR J1550-5418 results in an average power-law photon index of -0.92, and the peak energy (E_peak) is typically around 40 keV (). An alternative approach to interpreting magnetar burst spectra is thermal emission due to a short-lived thermal equilibrium of electron-photon pairs, usually described with the sum of two blackbody functions (see e.g., ; ). This dual-blackbody scheme approximates a continuous temperature gradient due to the total energy dissipation of photons throughout the magnetosphere. The corona is expected to be hotter at low altitudes than in the outer layers. Therefore, the coronal structure suggests that the high-temperature blackbody component be associated with a smaller volume than the cold component. Previous studies of magnetar burst spectra with the two-blackbody model yield 3-4 keV and 10-15 keV for the temperatures of the cold and hot blackbodies, respectively (; ; ; ). In this paper, we present the results of our systematic time-averaged spectral analysis of a total of 388 single-peak bursts observed with the Rossi X-ray Timing Explorer (RXTE) between 1996-2009 from three magnetars: SGR J1550-5418, SGR 1900+14 and SGR 1806-20. We utilized data collected with both instruments on board RXTE, and therefore performed our investigations in the broad energy range of 2-250 keV, which is the widest energy coverage used for the analysis of SGR 1806-20 bursts. We modeled the time-integrated burst spectra with four different photon models, including the sum of two modified blackbody models (), which is employed in spectral analysis on these sources for the first time here.
It is important to note that this work focuses on the time-averaged spectral properties of typical short bursts, since most events analyzed are too weak and too short to perform time-resolved spectroscopy, and the signal-to-noise ratio of the HEXTE data would be too low to constrain the applied models in broadband time-resolved spectroscopy. For the longer events included in the analysis, the results may not completely represent the instantaneous burst properties, since SGR burst spectra are known to evolve (see e.g. ).

§ DATA AND BURST SELECTION

For our broadband spectral investigations, we used data collected with the RXTE mission, which was operational for ∼16 yr, from December 1995 until the end of 2011. Throughout its mission, magnetar bursts were observed with RXTE on many occasions, especially during burst-active phases. The SGR J1550-5418 bursts included in our study were sampled from 179 pointed RXTE observations performed between October 2008 and April 2010. SGR 1900+14 bursts were among 432 RXTE observations between June 1998 and December 2010. SGR 1806-20 bursts were observed between November 1996 and June 2011 in a total of 924 pointed RXTE observations. We used data collected with the PCA (Proportional Counter Array) and HEXTE (High Energy X-ray Timing Experiment) instruments on board RXTE. PCA and HEXTE are co-aligned with the same field of view but operate in different energy ranges, so that a broadband analysis using data collected with the two instruments (between 2-250 keV) is possible, with an overlap between 15-60 keV. The PCA instrument consisted of an array of five nearly identical proportional counters filled with xenon. Each unit had a collecting area of 1600 cm^2 and was optimally sensitive in the energy range of 2-30 keV (). Magnetar burst data collected with the PCA provide medium energy resolution (64 or 256 energy channels) and a superb time resolution of 1 μs. HEXTE consisted of two clusters, each containing four NaI/CsI scintillation counters. It was sensitive in the energy range 15-250 keV. The time resolution of HEXTE was 8 μs and the total collecting area of one cluster was 800 cm^2. For the identification of bursts from these three magnetars, we performed a two-step burst identification scheme using the RXTE/PCA observations. We first employed a signal-to-noise ratio analysis to roughly identify the times of events, and then applied a Bayesian blocks algorithm provided in , together with the procedure discussed in , for the final identification and morphological characterization of bursts (the details of the temporal investigation will be presented elsewhere; Sasmaz Mus et al., in preparation). In this manner, we identified 179, 432, and 924 bursts from SGR J1550-5418, SGR 1900+14, and SGR 1806-20, respectively. We note that some of these bursts were very weak, consisting of only ∼10 counts. Therefore, we first examined spectra of bursts at varying intensities and concluded that we would need at least 80 burst counts from the PCA instrument alone (after background subtraction) in order to constrain the crucial spectral parameters at a statistically acceptable level. Therefore, we only included single-peak bursts with more than 80 PCA counts in our analysis.

§.§ Generation of Burst Spectra

We determined the integration time intervals for burst and background spectra using PCA observations as follows.
For each burst, we first generated a light curve in the 2-30 keV band with 0.125 s resolution, spanning from 100 s before the burst peak time until 100 s after it, to extract background information. We defined two nominal background extraction intervals: from 80 to 5 s before the burst, and from 5 to 80 s in the post-burst episode. We excluded the time intervals of other short bursts from the background spectral integration (see the bottom panel of Figure 1). We then generated a finer light curve (2 ms resolution) in the same energy interval and selected the burst spectrum time interval. We excluded the time interval(s) during which the count rate exceeded 18000 counts/s/PCU in order to avoid any pulse pile-up related issues (Figure 1, top panel). We used the time intervals obtained from our PCA data to generate the HEXTE source and background spectra. At this point, we excluded the bursts that happened while one of the two HEXTE clusters was in "rocking mode", switching to a different direction to obtain background emission, or when no associated HEXTE data were available. For the remaining bursts, we combined the spectra obtained from both clusters. When only one of the clusters was operating during an observation, we extracted spectra using data collected with that particular cluster only. Finally, we grouped the extracted PCA and HEXTE spectra to ensure that each spectral bin contains a minimum of 20 burst counts. As a result of the aforementioned eliminations, the final numbers of bursts included in our spectral analysis are: 42 for SGR J1550-5418, 125 for SGR 1900+14 and 221 for SGR 1806-20. In Tables 1 through 3, we present the lists of these bursts from the three magnetars, together with the durations of the extracted spectra, with saturated parts excluded (T_Exp). The bursts included have an average duration of 0.46 s (σ = 0.28) for SGR J1550-5418, 0.46 s (σ = 0.3) for SGR 1900+14 and 0.72 s (σ = 0.72) for SGR 1806-20.

Table 1. Burst times and durations of SGR J1550-5418 bursts included in our analysis.

Burst ID^1 | Start time (MET) | Start time (UTC) | T_Exp^2 (s) | ObsID
1 | 475274776.845 | 2009-01-22T20:46:14 | 0.197 | 93017-10-17-00
2 | 475275167.175 | 2009-01-22T20:52:44 | 0.445 | 93017-10-17-00
3 | 475276110.459 | 2009-01-22T21:08:27 | 0.189 | 93017-10-17-00
4* | 475276179.299 | 2009-01-22T21:09:36 | 0.395 | 93017-10-17-00
5 | 475276430.580 | 2009-01-22T21:13:47 | 0.447 | 93017-10-17-00

Table 1 is available in the machine-readable format in full. The first 5 of 42 data lines are presented here to demonstrate its form. ^1 Burst IDs of saturated bursts are marked with asterisks. ^2 T_Exp refers to the duration of the spectral extraction interval.

Table 2. Burst times and durations of SGR 1900+14 bursts included in our analysis.

Burst ID^1 | Start time (MET) | Start time (UTC) | T_Exp^2 (s) | ObsID
1 | 139434279.341 | 1998-06-02T19:44:42 | 0.213 | 30197-02-01-00
2 | 139439198.455 | 1998-06-02T21:06:44 | 0.291 | 30197-02-01-00
3 | 139607213.183 | 1998-06-04T19:46:59 | 0.602 | 30197-02-01-03
4* | 146928739.386 | 1998-08-28T13:32:25 | 0.364 | 30197-02-03-00
5 | 146929244.714 | 1998-08-28T13:40:50 | 0.471 | 30197-02-03-00

Table 2 is available in the machine-readable format in full. The first 5 of 125 data lines are presented here to demonstrate its form. ^1 Burst IDs of saturated bursts are marked with asterisks. ^2 T_Exp refers to the duration of the spectral extraction interval.
Table 3. Burst times and durations of SGR 1806-20 bursts included in our analysis.

Burst ID^1 | Start time (MET) | Start time (UTC) | T_Exp^2 (s) | ObsID
1* | 168976265.693 | 1999-05-10T17:51:05 | 0.395 | 40130-04-13-00
2 | 173045819.275 | 1999-06-26T20:16:59 | 0.148 | 40130-04-20-00
3* | 208723242.666 | 2000-08-12T18:40:42 | 0.250 | 50142-01-33-00
4 | 212193703.626 | 2000-09-21T22:41:43 | 0.357 | 50142-01-43-00
5* | 212194516.810 | 2000-09-21T22:55:16 | 4.154 | 50142-01-43-00

Table 3 is available in the machine-readable format in full. The first 5 of 221 data lines are presented here to demonstrate its form. ^1 Burst IDs of saturated bursts are marked with asterisks. ^2 T_Exp refers to the duration of the spectral extraction interval.

§ SPECTRAL ANALYSIS

In our broadband spectral analysis, we used four models, three of which have been commonly used to describe short magnetar bursts in previous studies: the sum of two blackbody functions (BB+BB), the sum of blackbody and power-law models (BB+PO) and the Comptonized model (COMPT). Additionally, we employed the sum of two modified blackbody functions (LB+LB) as set forth by . Note that the COMPT model is simply a power law with a high-energy exponential cut-off, expressed as:

f = A E^-α exp(-E/E_cut),    (1)

where f is the photon flux, A is the amplitude in photons/cm^2/s/keV at 1 keV, E_cut is the cut-off energy (in keV) and α is the photon index. The LB function is a modified version of the blackbody function in which the spectrum is flattened at low energies. In terms of the photon flux, the function is expressed as:

f = 0.47 ϵ^2 [exp(ϵ^2 / (T_b √(ϵ^2 + (3π^2/5) T_b^2))) - 1]^-1,    (2)

where T_b is the bolometric blackbody temperature (the temperature of a blackbody that would have its mean frequency at the peak of the observed spectrum) in keV and ϵ is the photon energy (). To display the intrinsic differences between these models, we present in Figure 2 the best-fit model spectra generated with the fitted parameters for the event with Burst ID 79 observed from SGR 1806-20. In Figure 3, we present the broadband spectrum of the same burst along with the fit residuals of all four models. Before performing the joint analysis, we first investigated possible cross-calibration discrepancies between the PCA and HEXTE detector responses using a small sample of bursts (11) at various flux levels. For this task, we first introduced a multiplicative constant for the HEXTE parameters to account for possible discrepancies. We then repeated the same analysis with the same burst sample without this scaling term. We found that the spectral analysis results with and without the constant term agree with each other within the 1σ errors for all these bursts. Therefore, we concluded that the constant term was not needed in the analysis, and proceeded without it to limit the number of fit parameters. Finally, as mentioned above, we used a power law with an exponential cut-off (COMPT) to represent the non-thermal emission spectrum. However, a different parametrization of the same model has been used in previous studies (see e.g. ; ; ), expressed as:

f = A exp[-E(2+λ)/E_peak] (E/E_piv)^λ,    (3)

where f is the photon flux in photons/cm^2/s/keV, A is the amplitude in the same units as f, E_peak is the energy (in keV) at which the spectral distribution function peaks in the νF_ν representation, λ is the photon index (defined as -α, where α is the photon index of the COMPT model), and E_piv is the pivot energy, fixed at a certain value (20 keV, ).
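Since these photon models are simple closed-form expressions, they are easy to evaluate outside XSPEC, for instance to sanity-check fitted model curves such as those of Figure 2. Below is a minimal Python sketch of Equations 1 and 2; the overall scale A_lb added to the LB function is a fitting convenience assumed here (it is not part of the quoted formula), and the parameter values in the example are placeholders.

```python
import numpy as np

def compt(E, A, alpha, E_cut):
    # Eq. (1): power law with a high-energy exponential cut-off (COMPT);
    # photon flux in photons/cm^2/s/keV, with A the amplitude at 1 keV.
    return A * E**(-alpha) * np.exp(-E / E_cut)

def lb(E, T_b, A_lb=1.0):
    # Eq. (2): modified ("flattened") blackbody (LB); T_b is the bolometric
    # temperature in keV.  A_lb is an assumed overall normalization.
    arg = E**2 / (T_b * np.sqrt(E**2 + (3.0 * np.pi**2 / 5.0) * T_b**2))
    return A_lb * 0.47 * E**2 / np.expm1(arg)   # expm1(x) = exp(x) - 1

E = np.geomspace(2.0, 250.0, 500)                  # RXTE band, in keV
f_compt = compt(E, A=1.0, alpha=0.9, E_cut=20.0)   # placeholder parameters
f_lblb = lb(E, T_b=4.0) + lb(E, T_b=10.0)          # an LB+LB combination

peak = E[np.argmax(E**2 * f_compt)]   # ~ (2 - alpha) * E_cut = 22 keV
print(f"nuF_nu peak ~ {peak:.1f} keV")
```

In the νF_ν representation, E^2 f(E) of the COMPT curve peaks at (2 - α) E_cut, which is exactly the conversion used next.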
It is then possible to convert the cut-off energy of the COMPT model into the spectral peak energy E_peak as follows:

E_peak = (2 - α) × E_cut,    (4)

so that our results can be compared with those of previous studies. For all spectral analyses presented here, we used the sum of two blackbody models (BB+BB; ), the sum of blackbody and power-law models (BB+PO; ) and the cut-off power-law model (COMPT; ) in XSPEC version 12.9.1 with χ^2 minimization. We implemented the LB+LB model in XSPEC as given in Equation 2. There are 4 free parameters for each of the BB+BB (hot and cold blackbody temperatures and their normalizations), LB+LB (same as BB+BB) and BB+PO (photon index, photon-index normalization, temperature and temperature normalization) models. For the COMPT model, the number of free parameters is 3 (α, E_cut and normalization). For the joint PCA and HEXTE spectral analysis, we tied the PCA and HEXTE parameters (excluding normalizations) to be equal, which results in 6 free parameters for the BB+BB, BB+PO and LB+LB models and 4 free parameters for the COMPT model.

§ RESULTS

In general, we find that all four models can successfully describe most of the bursts from all three sources, based on the resulting χ^2 statistics. This can be seen in Table 4, in which we present the percentage of spectra that resulted in statistically acceptable fits for each model. Here, we define a fit to be "acceptable" when the probability of obtaining a χ^2 greater than the resulting χ^2 value, based on the χ^2 distribution for the corresponding degrees of freedom (DOF), is greater than 0.2. This means that the fits that do not meet this criterion have unacceptably large χ^2 values, with a low probability of occurring by chance. We see that all of these models can adequately represent the burst spectra at similar levels.

Table 4. Percentage of acceptable spectral fits based on the χ^2 probability for the given DOF.

Model | SGR J1550-5418 | SGR 1900+14 | SGR 1806-20
BB+PO | 73.8% | 72.8% | 66.0%
BB+BB | 61.9% | 69.6% | 69.2%
LB+LB | 71.4% | 83.2% | 78.7%
COMPT | 71.4% | 77.6% | 67.9%

In this section, we present the spectral fit results, with errors calculated at the 1σ level, for all bursts and all models. Note that the comparisons among these spectral models to determine the best-describing ones were performed using simulations, which are discussed in detail in the next section. In Tables 5-7 we present the spectral fit results of all four models for each burst, for the three sources separately. For our fits, we fixed the interstellar hydrogen column density to 3.4 × 10^22 cm^-2 for SGR J1550-5418 (), 2.36 × 10^22 cm^-2 for SGR 1900+14 () and 6.8 × 10^22 cm^-2 for SGR 1806-20 (; ). Detailed statistical investigations (i.e. generating reliable parameter and fluence distributions) were possible for SGR 1900+14 and SGR 1806-20 due to their large sample sizes. For SGR J1550-5418, however, the sample size was not sufficiently large to provide reliable distributions of the burst spectral parameters and fluences. It is important to note that our burst samples include partially saturated bursts, for which the reported burst flux should be taken as a lower bound.

§.§ SGR 1900+14

We list the resulting spectral model parameters and fit statistics for each SGR 1900+14 burst in Table 5. In order to generate distributions of the spectral fit parameters, we selected the events whose spectral fit parameters had uncertainties below 50%. We then modeled the distributions with a Gaussian to determine the mean values.
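As an illustration of this step, the following is a minimal Python sketch of fitting a Gaussian to a binned parameter distribution with scipy; the photon-index sample is synthetic and the bin number is an arbitrary choice, so only the procedure (histogram the well-constrained values, then a least-squares Gaussian fit for the mean and width) mirrors the analysis in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, mu, sigma):
    # Gaussian profile used to model the binned parameter distribution
    return amp * np.exp(-0.5 * ((x - mu) / sigma)**2)

rng = np.random.default_rng(2)
indices = rng.normal(0.86, 0.25, 120)   # synthetic photon-index sample

counts, edges = np.histogram(indices, bins=15)
centers = 0.5 * (edges[:-1] + edges[1:])
popt, pcov = curve_fit(gauss, centers, counts,
                       p0=[counts.max(), indices.mean(), indices.std()])
perr = np.sqrt(np.diag(pcov))
print(f"mean = {popt[1]:.2f} +/- {perr[1]:.2f}, "
      f"width = {popt[2]:.2f} +/- {perr[2]:.2f}")
```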
We find that the distribution of photon indices peaks at 0.86 ± 0.02 with σ = 0.25 ± 0.02 (see the top panel of Figure 4). The distribution of E_cut (Figure 4, top panel) yields a mean value of 14.38 ± 1.0 keV with a width of σ = 7.96 ± 1.1 keV. A normal curve fit to the E_peak distribution yields a mean of 17.23 ± 1.42 keV for the SGR 1900+14 bursts. This is in agreement with , in which the study was conducted using BeppoSAX data in the 1.5-100 keV band and a mean of 15.8 ± 2.3 keV was obtained. For the BB+BB model, the mean temperature of the cooler blackbody is 1.76 ± 0.02 keV (σ = 0.3 ± 0.02 keV), and the mean temperature of the hotter blackbody is 6.2 ± 0.2 keV (σ = 4.3 ± 0.2 keV) (see Figure 4, lower panels). We also computed the 2-250 keV flux for all of the bursts, and found values between 4.02×10^-9 and 6.9×10^-8 erg cm^-2 s^-1. Finally, we present the fluence distributions for the PCA and HEXTE detections in the right panel of Figure 5. We find that the majority of SGR 1900+14 bursts have fluences <10^-8 erg cm^-2, with only a few exceptions.

§.§ SGR 1806-20

We report all resulting parameters for the SGR 1806-20 bursts in Table 6. On average, the spectral model parameters of SGR 1806-20 bursts span narrower intervals than those of SGR 1900+14 bursts. We generated spectral parameter distributions for SGR 1806-20 with the same procedure as for SGR 1900+14. For the COMPT model, we find a photon index distribution mean of 0.62 ± 0.005 (σ = 0.22 ± 0.005). The exponential cut-off energy distribution peaks at 21.1 ± 1.3 keV with σ = 15.58 ± 1.5 keV, and the inferred E_peak mean is 32.02 ± 1.84 keV (see Figure 6, top panels). The BB+BB model yields a mean cooler blackbody temperature of 2.02 ± 0.02 keV with σ = 0.24 ± 0.02 keV. The mean hotter blackbody temperature is 9.6 ± 0.2 keV with σ = 2.7 ± 0.2 keV (Figure 6, bottom panels). On average, the combined unabsorbed 2-250 keV fluxes of SGR 1806-20 bursts are higher than those of SGR J1550-5418 and similar to those of SGR 1900+14, with a range of 4.91×10^-9 - 5.46×10^-8 erg/cm^2/s. Due to the longer average burst duration, the burst fluences of SGR 1806-20 events tend to be higher than those of both SGR 1900+14 and SGR J1550-5418. We present the fluence distributions for the PCA and HEXTE detections of SGR 1806-20 bursts in the left panel of Figure 5. Please note that for some SGR 1806-20 bursts, an additional normalization term was fixed to a determined value based on the spectral shape in order to constrain the spectral parameters of some models. For these bursts, the number of free parameters, and as a result the degrees of freedom, differs for some models (see Burst ID 2 in Table 6 for an example, where the BB+PO degrees of freedom differ from the BB+BB and LB+LB degrees of freedom by 1 due to a fixed HEXTE normalization term).

§.§ SGR J1550-5418

In Table 7, we present the spectral fit results of the SGR J1550-5418 bursts. For this source, we did not construct distribution plots due to the small sample size, especially after taking into account the parameter constraint limit. For the COMPT model, the photon indices range from -0.28 to 1.77 with a mean of 1.21, while the exponential cut-off energies range from 4.30 keV to 118.26 keV, with an average of 54.46 keV. The average E_peak of the SGR J1550-5418 bursts calculated using Equation 4 is 44.59 keV, with a minimum of 20.46 keV and a maximum of 77.04 keV, and is consistent with the values found by (39 ± 13 keV) and (45 ± 2.1 keV) using XRT and GBM data for the same source. The combined unabsorbed flux (in the 2-250 keV band) varies from 3.72×10^-9 to 2.62×10^-8 erg cm^-2 s^-1.
For the BB+BB model, the temperature of the cooler component (in keV) ranges from 1.02 to 2.6 with a mean of 1.76, and that of the hot blackbody component from 5.67 to 29.24 with a mean of 13.71. Note that the parameter ranges and averages presented here exclude fits in which either the upper or the lower bound error is not available, so that the parameter constraints are known and possible issues due to local χ^2 minima are excluded.

Table 5. Spectral properties of SGR 1900+14 bursts. Columns 2-4 refer to the BB+BB model, columns 5-7 to the BB+PO model, columns 8-10 to the LB+LB model, and columns 11-13 to the COMPT model.

ID | kT_1 (keV) | kT_2 (keV) | χ^2/DOF | kT (keV) | Γ | χ^2/DOF | kT_1 (keV) | kT_2 (keV) | χ^2/DOF | E_cut (keV) | α | χ^2/DOF | PCA Flux^1 (erg/cm^2/s) | HEXTE Flux^2 (erg/cm^2/s)
1 | 2.3 ± 0.2 | 24.2^+7.5_-5.8 | 22.0/22 | 2.1^+0.6_-0.4 | 1.1 ± 0.2 | 18.8/22 | 2.6^+0.3_-0.2 | 32.2^+13.3_-8.7 | 18.8/22 | 171.2^-9999_-111.8 | 1.2 ± 0.1 | 25.1/24 | (2.3 ± 0.1)×10^-8 | (9.6^+3.5_-3.4)×10^-8
2 | 1.5 ± 0.3 | 5.4^+1.9_-0.9 | 9.54/11 | 1.8^+10.0_-1.8 | 1.3^+0.7_-0.6 | 10.2/11 | 1.6^+0.5_-0.4 | 6.5^+4.9_-1.4 | 9.86/11 | 83.1^-9999_-68.8 | 1.3^+0.2_-0.5 | 10.2/13 | (6.4^+0.4_-0.5)×10^-9 | (2.3^+2.4_-1.4)×10^-8
3 | 1.6 ± 0.6 | 11.4^-9999_-6.4 | 9.11/6 | 1.5^+0.8_-1.5 | 0.2^+3.0_-0.2 | 9.10/6 | 1.7^+0.8_-0.7 | 32.4^-9999_-26.4 | 9.08/6 | 500.0^-9999_-500.0 | 1.1^+0.3_-0.6 | 9.95/8 | (1.2 ± 0.2)×10^-9 | (2.1 ± -9999)×10^-10
4 | 1.9 ± 0.1 | 6.4^+1.4_-0.9 | 25.6/29 | 2.6 ± 0.3 | 1.7 ± 0.1 | 25.0/29 | 2.3 ± 0.1 | 10.0^+2.4_-2.1 | 19.5/29 | 17.7^+6.1_-3.9 | 1.1^+0.1_-0.2 | 27.5/31 | (4.7 ± 0.1)×10^-8 | (3.8^+0.6_-0.5)×10^-8
5 | 2.1 ± 0.2 | 7.5 ± 1.3 | 26.2/25 | 3.7^+0.3_-0.2 | 2.7^+0.4_-0.6 | 30.7/25 | 2.5^+0.3_-0.2 | 8.7^+1.7_-1.5 | 23.8/25 | 14.9^+4.9_-3.5 | 0.8 ± 0.2 | 24.9/27 | (2.1 ± 0.1)×10^-8 | (1.9 ± 0.3)×10^-8

^1 PCA flux energy range: 2-30 keV. ^2 HEXTE flux energy range: 15-250 keV. All errors are reported at 1σ; '-9999' indicates that error information is not available for the given fit. The number of free parameters is 4 for each of the BB+BB, BB+PO and LB+LB models (6 in the joint PCA and HEXTE spectral analysis) and 3 for the COMPT model (4 in the joint analysis); see Section 3 for a list of free model parameters. Table 5 is available in the machine-readable format in full.
The first 5 of 125 data lines are presented here to provide an example of its form.

Table 6. Spectral properties of SGR 1806-20 bursts. Columns 2-4 refer to the BB+BB model, columns 5-7 to the BB+PO model, columns 8-10 to the LB+LB model, and columns 11-13 to the COMPT model.

ID | kT_1 (keV) | kT_2 (keV) | χ^2/DOF | kT (keV) | Γ | χ^2/DOF | kT_1 (keV) | kT_2 (keV) | χ^2/DOF | E_cut (keV) | α | χ^2/DOF | PCA Flux^1 (erg/cm^2/s) | HEXTE Flux^2 (erg/cm^2/s)
1 | 2.7 ± 0.2 | 11.6 ± 1.6 | 54.0/31 | 4.5 ± 0.3 | 2.1 ± 0.1 | 49.0/32 | 3.3 ± 0.3 | 14.1^+2.0_-1.9 | 47.5/31 | 30.7^+8.0_-6.3 | 0.9 ± 0.1 | 49.4/33 | (3.1 ± 0.1)×10^-8 | (4.1 ± 0.5)×10^-8
2 | 2.6^+0.5_-0.4 | 9.6^+1.5_-1.3 | 25.2/16 | 8.2^+1.5_-2.5 | 1.2^+0.9_-0.2 | 25.2/15 | 3.0^+0.7_-0.8 | 10.7^+2.8_-1.7 | 24.8/16 | 22.3^+9.0_-5.9 | 0.5 ± 0.2 | 24.6/17 | (2.1 ± 0.1)×10^-8 | (3.2 ± 0.6)×10^-8
3 | 2.6 ± 0.3 | 14.9^+4.3_-3.5 | 7.68/13 | 3.9^+1.1_-0.7 | 1.5 ± 0.3 | 12.4/14 | 3.0 ± 0.4 | 17.4^+5.4_-3.8 | 8.36/14 | 69.0^+97.5_-32.4 | 1.1 ± 0.2 | 12.3/15 | (1.2 ± 0.1)×10^-8 | (1.8^+0.7_-0.5)×10^-8
4 | 3.1^+0.7_-1.0 | 23.1^-9999_-17.7 | 0.35/3 | 3.9^+2.0_-0.5 | 1.6^+2.2_-0.8 | 0.36/3 | 4.1^-9999_-4.1 | 10.5^+43.3_-7.1 | 0.48/3 | 16.1 ± -9999 | 0.5 ± 0.3 | 0.88/5 | (1.7^+0.2_-0.4)×10^-9 | (2.7^+3.3_-1.4)×10^-9
5 | 2.7 ± 0.1 | 10.0 ± 0.6 | 55.7/63 | 8.8 ± 0.5 | 1.0^+0.2_-0.1 | 81.5/63 | 3.2 ± 0.3 | 11.5 ± 0.8 | 53.9/63 | 21.9^+2.1_-1.9 | 0.4 ± 0.1 | 57.9/65 | (9.1 ± 0.2)×10^-9 | (1.2 ± 0.1)×10^-8

^1 PCA flux energy range: 2-30 keV. ^2 HEXTE flux energy range: 15-250 keV. All errors are reported at 1σ; '-9999' indicates that error information is not available for the given fit. The number of free parameters is 4 for each of the BB+BB, BB+PO and LB+LB models (6 in the joint PCA and HEXTE spectral analysis) and 3 for the COMPT model (4 in the joint analysis); see Section 3 for a list of free model parameters. Note that for some bursts one normalization term was fixed to constrain the spectral parameters, and therefore the number of free parameters and the corresponding degrees of freedom may differ. Table 6 is available in the machine-readable format in full. The first 5 of 221 data lines are presented here to provide an example of its form.

Table 7. Spectral properties of SGR J1550-5418 bursts. Columns 2-4 refer to the BB+BB model, columns 5-7 to the BB+PO model, columns 8-10 to the LB+LB model, and columns 11-13 to the COMPT model.

ID | kT_1 (keV) | kT_2 (keV) | χ^2/DOF | kT (keV) | Γ | χ^2/DOF | kT_1 (keV) | kT_2 (keV) | χ^2/DOF | E_cut (keV) | α | χ^2/DOF | PCA Flux^1 (erg/cm^2/s) | HEXTE Flux^2 (erg/cm^2/s)
1 | 2.0^+0.4_-0.3 | 9.2^-9999_-3.1 | 3.21/3 | 2.1^+0.7_-0.3 | 0.6^+1.1_-2.0 | 3.23/3 | 2.5 ± 0.5 | 27.6^-9999_-20.6 | 3.31/3 | 10.1^+50.6_-5.5 | 0.6^+0.8_-1.0 | 5.64/5 | (1.3 ± 0.2)×10^-8 | (3.7^+4.5_-1.4)×10^-9
2 | 2.2 ± 0.2 | 29.2^+12.8_-8.9 | 12.1/8 | 2.1^+0.3_-0.2 | 1.1^+0.7_-0.6 | 13.9/8 | 2.6^+0.4_-0.3 | 36.2^+23.1_-12.9 | 12.3/8 | 4.3^+2.3_-1.3 | -0.3^+0.7_-0.8 | 17.9/10 | (6.2 ± 0.7)×10^-9 | (2.1^+0.6_-0.5)×10^-9
3 | 1.6 ± 0.2 | 7.6^+1.4_-1.2 | 6.22/8 | 7.6^+1.1_-1.5 | 1.5 ± 0.1 | 9.26/8 | 1.7 ± 0.3 | 8.6^+1.7_-1.4 | 6.57/8 | 25.6^+13.0_-7.9 | 1.2 ± 0.2 | 10.1/10 | (2.0 ± 0.2)×10^-8 | (2.2^+0.5_-0.4)×10^-8
4 | 2.1 ± 0.2 | 20.7^+4.5_-3.4 | 17.4/14 | 2.0 ± 0.3 | 1.2 ± 0.2 | 12.6/14 | 2.5 ± 0.3 | 27.7^+7.6_-5.5 | 13.8/14 | 500.0^-9999_-212.9 | 1.5 ± 0.1 | 21.5/16 | (1.4 ± 0.1)×10^-8 | (3.4 ± -9999)×10^-8
5 | 1.7 ± 0.2 | 20.1^+2.6_-2.3 | 20.6/17 | 0.6^+0.5_-0.4 | 1.2 ± 0.1 | 22.4/17 | 1.7 ± 0.3 | 25.1^+3.8_-3.3 | 16.8/17 | 500.0^-9999_-327.1 | 1.3^+0.1_-0.2 | 24.4/19 | (1.4 ± 0.1)×10^-8 | (4.6^+0.4_-0.8)×10^-8

^1 PCA flux energy range: 2-30 keV. ^2 HEXTE flux energy range: 15-250 keV. All errors are reported at 1σ; '-9999' indicates that error information is not available for the given fit. The number of free parameters is 4 for each of the BB+BB, BB+PO and LB+LB models (6 in the joint PCA and HEXTE spectral analysis) and 3 for the COMPT model (4 in the joint analysis).
See Section 3 for a list of free model parameters.

Table 7 is available in the machine-readable format in full. The first 5 of 42 data lines are presented here to provide an example of its form.

§.§ Companion Web-catalog of Magnetar Burst Spectral and Temporal Characteristics

We also made the results of our analyses available in a companion web-catalog. It includes general properties (burst time, total photon counts, peak counts, and 5-PCU plots, where the amount of time excluded due to saturation can also be found for saturated bursts), temporal analysis results (including burst durations, with start and end times obtained with the Bayesian Blocks algorithm for single-peak and multi-peak bursts), and spectral analysis results (BB+BB temperatures, BB+PO temperature and photon index, COMPT photon index and cut-off energy, and LB+LB temperatures for single-peak bursts). Note that our spectral analysis involves single-peak bursts only. Additionally, the web-catalog provides all temporal and spectral data in FITS format for interested researchers to download and perform their independent investigations. The address of the companion web-catalog is http://magnetars.sabanciuniv.edu.

§ SIMULATIONS FOR MODEL COMPARISONS

Even though one of the four models employed to fit the broadband X-ray spectra of magnetar bursts yields the minimum reduced χ^2 value, it is statistically not possible to disregard the alternatives simply by a Δχ^2 test. This issue becomes more complicated given the fact that the COMPT model involves one less free parameter than the thermal models, resulting in, on average, one more degree of freedom. The additional degree of freedom enhances the fitting power of COMPT in cases where the spectrum could be statistically represented by two or more models. In such cases, competing models can be better compared by simulations based on the fit results. To achieve this objective, we performed extensive simulations for each burst as follows. Overall, the COMPT model is expected to perform best in fitting magnetar burst spectra based on its number of parameters. Therefore, we took the COMPT model as the null hypothesis and generated 1000 spectra using the resulting COMPT fit parameters for each burst whose COMPT model parameters were sufficiently constrained (parameter errors less than 50% of the parameter). As the alternative hypothesis (test model), we selected the one of the three thermal models whose reduced χ^2 value was the smallest. Note that the remaining models have an equal number of parameters, so a simple Δχ^2 test is applicable for their comparison. When the test model did not provide well-constrained parameters (i.e., errors less than 50%), we selected the model with the next-smallest reduced χ^2 value as the test model. If none of the test models provided well-constrained parameters, we did not perform the simulation for that event. We found that 4 out of the 42 events examined for SGR J1550-5418, 21 out of the 125 events examined for SGR 1900+14, and 77 out of the 221 bursts from SGR 1806-20 provided such well-constrained parameters for the seed and test models, and these were included in our simulations. We then fit the 1000 generated spectra for each burst with the COMPT model and its alternative test model. We used a significance level (i.e., the probability of rejecting the null hypothesis given that it is true) of 0.05 for each burst included in the simulation.
For a χ^2 distribution with dof = 1 (since the degrees of freedom of the test and seed models differ by one in each case), this corresponds to a χ^2 value of 3.84. Therefore, we defined our rejection region of the null hypothesis (i.e., where we accept the test model) as the region where the test model χ^2 is less than the seed model χ^2 by at least 3.84. We suggest that if a truly Comptonized spectrum indeed provides better fit statistics at the 0.05 significance level, then our fit results, in which COMPT provides lower χ^2 values, indicate that the true emission mechanism is most likely Comptonized rather than thermal, at least for the time-averaged spectra. Similar to the procedure in , we define our p-value as the fraction of simulated spectra better fitted by the COMPT model at the 0.05 significance level. If the p-value exceeds 0.9, we conclude that the COMPT model provides better fit statistics than the test model when it is the underlying emission mechanism. In Figure 7, we present the distributions of the difference between the seed model (COMPT) χ^2 and the test model χ^2 for three example events of SGR 1806-20 with different test models, as a visual description of the simulation results. Our rejection region of the COMPT model (null hypothesis) is where χ^2_COMPT - χ^2_Test Model ≥ 3.84, as described above. We also list the resulting p-values for the entire sample in Table 8. We found that the COMPT model is the most frequently preferred model based on the simulation results: the COMPT model provides significantly better fit statistics in more than 90% of trials for 16/19 events compared to the BB+PO model, 12/17 events compared to the BB+BB model, and 41/66 events compared to the LB+LB model. Overall, COMPT provides statistically better fits in more than 67.6% of cases against its alternatives. Also, the LB+LB model emerges as the best-fitting model among the thermal models, providing better fits than the BB+BB and BB+PO models for the majority (66 out of 102) of events. It is important to reiterate that LB+LB was selected as the test model in each of these cases because it provided the lowest reduced χ^2 value among the thermal models with well-constrained parameters, and that it is possible to compare the thermal models with a simple Δχ^2 test since these models have the same number of parameters. To check whether the simulation procedure forms a bias towards COMPT, we repeated the same procedure with BB+BB as the null hypothesis (seed model) for one event, with COMPT as the alternative hypothesis (test model). In this reverse simulation scenario, we similarly defined our rejection region for the null hypothesis as where the COMPT χ^2 value was less than the BB+BB χ^2 by at least χ^2_0.05,1 = 3.84. The BB+BB model was accepted in 100% of trials when it acted as the seed model, while the COMPT model was accepted in 99.7% of trials when COMPT was the seed model in the original simulation for the same event. By comparing these results, we concluded that the simulation procedure accepts the inherent emission mechanism within the level of significance with no bias towards any model.
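To make the procedure concrete, the following is a minimal Python sketch of the seed/test comparison. The spectral functions, parameter values, and the Poisson counts model are simplified stand-ins chosen for illustration, not the paper's actual detector-folded PCA/HEXTE fitting; only the decision rule (Δχ^2 ≥ 3.84 for one extra degree of freedom) and the p-value definition follow the text.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
E = np.geomspace(2.0, 250.0, 40)          # energy grid in keV (2-250 keV band)

def compt(E, A, alpha, E_cut):            # seed model: cut-off power law stand-in
    return A * E**(-alpha) * np.exp(-E / E_cut)

def bb2(E, A1, kT1, A2, kT2):             # test model: two-blackbody stand-in
    return A1 * E**2 / np.expm1(E / kT1) + A2 * E**2 / np.expm1(E / kT2)

def chi2(counts, model_counts):
    return np.sum((counts - model_counts)**2 / np.maximum(counts, 1.0))

def p_value(seed_pars, n_sim=1000, crit=3.84):
    """Fraction of simulated spectra for which the seed (COMPT) model is
    preferred, i.e. the test model does NOT win by Delta chi^2 >= 3.84."""
    seed_preferred = 0
    for _ in range(n_sim):
        counts = rng.poisson(compt(E, *seed_pars)).astype(float)
        ps, _ = curve_fit(compt, E, counts, p0=seed_pars, maxfev=20000)
        pt, _ = curve_fit(bb2, E, counts, p0=[300.0, 3.0, 2.0, 12.0],
                          bounds=(1e-6, np.inf), maxfev=20000)
        if chi2(counts, compt(E, *ps)) - chi2(counts, bb2(E, *pt)) < crit:
            seed_preferred += 1
    return seed_preferred / n_sim

print(p_value(seed_pars=[5e3, 0.9, 30.0]))  # p > 0.9 would favor COMPT
```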
Therefore, we continued the simulations with COMPT as the seed model.

Table 8. P-values of simulated bursts, grouped by source. Each entry gives the test model, the p-value, and the burst ID (in parentheses).

SGR J1550-5418:
BB+PO 0.961 (24); BB+PO 0.984 (28); LB+LB 1.000 (26); LB+LB 1.000 (36)

SGR 1900+14:
BB+BB 0.930 (85); BB+BB 0.997 (101); BB+BB 0.921 (102); BB+BB 0.930 (124); BB+PO 0.895 (112); LB+LB 0.946 (4); LB+LB 0.865 (5); LB+LB 0.939 (81); LB+LB 0.870 (84); LB+LB 0.942 (87); LB+LB 0.952 (88); LB+LB 0.863 (99); LB+LB 0.837 (103); LB+LB 0.919 (105); LB+LB 0.927 (106); LB+LB 0.892 (108); LB+LB 0.938 (114); LB+LB 0.926 (115); LB+LB 0.996 (116); LB+LB 0.955 (122); LB+LB 0.915 (123)

SGR 1806-20:
BB+BB 0.961 (20); BB+BB 0.848 (31); BB+BB 0.870 (44); BB+BB 0.996 (69); BB+BB 0.897 (70); BB+BB 0.935 (82); BB+BB 0.887 (93); BB+BB 0.948 (152); BB+BB 0.978 (153); BB+BB 0.881 (170); BB+BB 0.929 (185); BB+BB 0.945 (186); BB+BB 0.985 (188); BB+PO 0.910 (1); BB+PO 0.940 (6); BB+PO 0.919 (28); BB+PO 0.935 (35); BB+PO 0.902 (37); BB+PO 0.893 (55); BB+PO 0.965 (56); BB+PO 0.922 (83); BB+PO 0.910 (92); BB+PO 0.911 (98); BB+PO 0.898 (102); BB+PO 0.996 (103); BB+PO 0.934 (105); BB+PO 1.000 (133); BB+PO 0.988 (149); BB+PO 0.932 (168); LB+LB 0.893 (5); LB+LB 0.821 (7); LB+LB 0.883 (10); LB+LB 0.842 (22); LB+LB 0.966 (23); LB+LB 0.840 (26); LB+LB 0.846 (30); LB+LB 0.848 (34); LB+LB 0.911 (39); LB+LB 0.949 (41); LB+LB 0.879 (46); LB+LB 0.910 (47); LB+LB 0.941 (49); LB+LB 1.000 (50); LB+LB 0.915 (52); LB+LB 0.926 (66); LB+LB 0.828 (67); LB+LB 0.943 (68); LB+LB 1.000 (74); LB+LB 0.864 (78); LB+LB 0.980 (79); LB+LB 0.928 (80); LB+LB 0.880 (81); LB+LB 0.901 (89); LB+LB 0.909 (95); LB+LB 0.919 (99); LB+LB 0.864 (108); LB+LB 0.873 (110); LB+LB 0.811 (129); LB+LB 0.992 (131); LB+LB 0.860 (132); LB+LB 0.934 (134); LB+LB 0.944 (135); LB+LB 1.000 (136); LB+LB 0.988 (138); LB+LB 0.926 (140); LB+LB 0.969 (141); LB+LB 0.875 (142); LB+LB 0.983 (148); LB+LB 0.916 (155); LB+LB 0.866 (158); LB+LB 0.942 (164); LB+LB 1.000 (169); LB+LB 0.891 (172); LB+LB 0.978 (177); LB+LB 0.875 (182); LB+LB 0.921 (183); LB+LB 0.867 (184)

§ DISCUSSION

Models involving non-thermal and thermal emission processes are commonly discussed in the context of magnetar bursts (e.g., ; ; ). In the Comptonization viewpoint, the photons emerging from the ignition point are repeatedly upscattered by the e^±-pairs present in the corona. The density and optical thickness of the corona, the incoming photon distribution, and the electron temperature set a spectral break point in energy, realized as the peak energy parameter of the power-law-shaped Comptonized spectrum. For magnetar bursts, the Comptonized spectrum resembles the models for accretion disks and AGN, but the underlying mechanism differs due to the strong magnetic field of magnetars. The corona of hot electrons may emerge in the inner dynamic magnetosphere due to field-line twisting, as discussed by , , and . The magnetospheric corona could give rise to a similar Comptonization process and therefore upscatter the emergent photons. This type of corona is expected to be anisotropic due to the intense and likely multipolar magnetic field around the magnetar. The anisotropy of the corona sets a different slope for the emission spectrum. The emission spectrum is, however, similar in its exponential tail and peak energy, which is now controlled by the thermal and spatial parameters of the corona in the magnetosphere.
The distinction between the persistent and burst emissions is further explained in this model, as the bursts may be triggered closer to the surface, where the density of e^±-pairs is high, while the persistent < 10 keV signals may originate at higher altitudes with a lower e^± density ().

The alternative approach to interpreting magnetar burst spectra is thermal emission due to a short-lived plasma of electron-photon pairs in quasi-thermal equilibrium, usually described with the superposition of two blackbody functions (see e.g., ; ). This dual-blackbody scheme approximates a continuous temperature gradient due to the total energy dissipation of photons throughout the magnetosphere. The corona is expected to be hotter at low altitudes than in the outer layers. Therefore, the coronal structure suggests that the high-temperature blackbody component is associated with a smaller volume than the cold component. A strong negative correlation between emission area and blackbody temperature has been reported, indicating that if the underlying emission mechanism is quasi-thermal with gradually changing temperatures, and if the dual blackbody is a good approximation of the continuum gradient as expected, then the relationship between temperature and coronal structure is in fact apparent in the spectrum. Modeling the corona with such a temperature gradient is a hard task, especially in the hotter zone, due to its anisotropic structure, the intense magnetic field, the twisted magnetosphere geometry, and the polarization dependence of the scattering process. Temporal and spectral studies over broad energy ranges therefore help to obtain a better view of the underlying structure, as well as to distinguish between non-thermal and thermal models, as in our investigation, which suggests that a non-thermal model describes the spectra best for the majority of bursts. Although the sum of two blackbody functions is commonly used to describe magnetar burst spectra, it was shown that the spectrum of photon flux per unit energy band may be flat at energies lower than the temperature when the magnetic field is not too high (B < 10^15 G). In this model, the spectrum of photons escaping the bubble formed during the burst is considered. The emergent spectrum was shown to be close to the blackbody spectrum, although the observed radiation within the bubble comes from photons with different temperatures throughout the bubble. In a hot, optically thick bubble in a strong magnetic field, the photon energy is well below the excitation energy of the first Landau level, and Compton scattering dominates. Considering SGR burst bubbles in such a scenario, photons with the ordinary orthogonal (O-mode) polarization will undergo many more scatterings than the extraordinary linearly polarized (E-mode) photons, whose cross-sections are strongly suppressed (). Because of this dependence of the scatterings on the radiation cross-section, which in turn depends on frequency in a strong magnetic field, the burst spectrum deviates from the blackbody spectrum at low energies and may be observed as flat.

In our investigations, the LB+LB model emerged as the most frequently chosen test model. That is, compared to the other two models with the same degrees of freedom (namely BB+PO and BB+BB), the LB+LB model shows the best test statistics for 48 out of 77 bursts for SGR 1806-20, 16 out of 21 for SGR 1900+14, and 2 out of 4 for SGR J1550-5418 when the < 50% parameter error constraints are enforced.
This suggests that the modified blackbody scheme, in which a frequency-dependent scattering cross-section distorts the blackbody emission at low energies, explains the burst emission mechanism better than thermal emission resulting from short-lived electron-positron pairs with a gradient of temperature, at least for the time-averaged burst spectra.

§.§ Spectral Correlations

We have also explored whether there exist any correlations between magnetar burst spectral parameters. We investigated the spectral correlations of the COMPT model parameters, because our simulation results show that the COMPT model best describes the majority of well-constrained bursts, and those of the BB+BB model parameters for comparison purposes. In the correlation analyses, we chose, out of all bursts included in the spectral analysis, the unsaturated bursts that have well-constrained (< 50% of the parameter) parameter and flux errors for the given model fit (the COMPT model for Section 6.1.1 and the BB+BB model for Section 6.1.2). For the COMPT model, our correlation analysis consists almost entirely of SGR 1806-20 and SGR 1900+14 bursts (63 in total), with only one SGR J1550-5418 burst included due to the error restrictions.

§.§.§ COMPT Model

For the COMPT model, we checked for correlations including only unsaturated bursts that yield well-constrained (< 50% of the parameter) errors for all parameters and the flux. For these bursts, we find a weak positive correlation (Spearman's rank correlation coefficient ρ = 0.54, chance probability = 4.05 × 10^-6) between the photon index (defined as α in Equation 1) and the total fluence (see Figure 8, right panel). Note that we do not find any significant correlation between E_peak and the total flux (ρ = 0.22, chance probability = 0.075). We find that although the correlation between the photon index and the total flux is not significant (ρ = 0.43, chance probability = 4.06 × 10^-4), there is a positive lower-bound trend of the photon index with respect to the total flux (see Figure 8, left panel). That is, for the lower-flux bursts, the photon index spans a wider range (-1.1 to 1.3), while the photon index range is higher for bursts with higher total flux. We take the negative photon index (-α) as a good indicator of the hardness of the burst spectra, since the E_peak values of these bursts are narrowly distributed (a Gaussian fit to the E_peak distribution yields σ = 10.6 ± 0.7 for SGR 1806-20 and σ = 8.8 ± 1.1 for SGR 1900+14; see also Figures 4 and 6) and since E_peak does not significantly depend on the burst flux. Thus, in the low flux ranges, we find that the bursts can be spectrally much harder, while with increasing flux the bursts are confined to softer spectra with a clear lower bound on the photon index (indicated with the dashed line in Figure 8, left panel). Our results spectrally confirm the results of , in which high-fluence SGR 1806-20 and SGR 1900+14 bursts were found to tend to be softer (i.e., an anti-correlation between hardness and energy fluence), where the hardness ratio is defined as the ratio of photon counts in the 10-60 keV band to those in the 2-10 keV band. Additionally, an anti-correlation between hardness and fluence was also found for SGR J1550-5418 bursts, inferred from the anti-correlation between E_peak and burst fluence obtained from broadband (8-200 keV) spectral analysis results. On the other hand, a positive correlation between hardness and fluence was noted for the bursts from another magnetar candidate, AXP 1E 2259+586.
Overall, we confirm the findings of and see that SGR 1806-20 and SGR 1900+14 bursts have trends between hardness and fluence opposite to those of AXP 1E 2259+586 () in similar fluence ranges (2.4 × 10^-9 - 6.3 × 10^-8 erg/cm^2), and similar to those of SGR J1550-5418 (). We suggest this relationship may be a distinctive characteristic between AXP and SGR bursts.

§.§.§ BB+BB Model

In the thermal emission viewpoint, a temperature gradient throughout the emitting surface is assumed. This gradient from the burst ignition point has been represented in previous studies by two (hot and cooler) blackbody components (see e.g., ). We also checked the relationship between the thermal emission areas and temperatures of the hot and cold blackbody components, expressed as R^2 = FD^2/σT^4, where R is the radius of the thermal emitting region, D is the distance to the source, F is the average total flux per event, and T is the temperature. Here, we used distances of 5 kpc for SGR J1550-5418 (), 12.5 kpc for SGR 1900+14 (), and 8.7 kpc for SGR 1806-20 (). We have found a significant anti-correlation between the thermal emission area and the temperature for both blackbody components. We present in the upper left panel of Figure 9 the R^2 (km^2) vs. kT (keV) trend for all unsaturated bursts with well-constrained parameters from all three sources. The dashed lines represent the R^2 ∝ T^-3 trend proposed in previous studies (e.g., ; ), and the solid line represents the R^2 ∝ T^-4 trend that is expected from blackbody emission. These trends are drawn for comparison only. The cooler blackbody components (kT_1, shown in black) are well separated from the hot blackbody components (kT_2, shown in red) for all sources, similar to the results by and for SGR J1550-5418 and for SGR 1900+14. This verifies the thermal emission model that starts from the hot and narrower ignition point and extends to a wider area where the plasma has cooled down (i.e., a hotter corona at lower altitudes than in the outer layers). To check the extent of this verification, we employed a Spearman rank-order correlation test on the emission area vs. temperature trend and found a correlation coefficient of -0.933 with a chance probability close to nil using all bursts. The correlation coefficients are -0.953, -0.962, and -0.942 when the Spearman correlation test is employed individually for SGR J1550-5418, SGR 1900+14, and SGR 1806-20, respectively (with chance probabilities ∼ 0). We then employed single power law and broken power law fits on the emission area vs. blackbody temperature data. For SGR 1900+14 and SGR 1806-20 bursts, the reduced χ^2 values of the broken power law fits (Table 9, column 6) are less than the single power law reduced χ^2 values (Table 9, column 8). This suggests that the emission area vs. temperature behavior significantly differs between the cool and hot blackbody components. Moreover, the hot blackbody component is associated with a narrower emission area than the cool component. Thus, we re-confirm that our findings on this strong anti-correlation between temperature and emission area verify the thermal emission model approximated by a sum of two blackbodies, where the emission starts from a hot and narrower ignition point within the bubble and extends to a cooler outer layer associated with a larger area. To check whether similar relationships hold for different burst intensity levels, we grouped the bursts into three intervals based on their total flux and employed single and broken power law fits for each group in the same flux interval individually.
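As a concrete illustration of this computation, the sketch below evaluates the emission radius from the relation R^2 = FD^2/σT^4 and then applies a Spearman rank-order test. The physical constants are standard, but the burst sample is a placeholder rather than the catalog data, and (as in the text) covariances are ignored in any error propagation.

```python
import numpy as np
from scipy.stats import spearmanr

SIGMA_SB = 5.6704e-5       # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]
KEV_TO_K = 1.1605e7        # 1 keV in Kelvin
KPC_TO_CM = 3.0857e21      # 1 kpc in cm

def emission_radius_km(flux, distance_kpc, kT_keV):
    """R from R^2 = F D^2 / (sigma T^4), with F in erg/cm^2/s."""
    D = distance_kpc * KPC_TO_CM
    T = kT_keV * KEV_TO_K
    return np.sqrt(flux * D**2 / (SIGMA_SB * T**4)) / 1e5   # cm -> km

# placeholder sample (flux in erg/cm^2/s, kT in keV) for a source at 8.7 kpc
flux = np.array([9.1e-9, 1.4e-8, 2.0e-8, 6.2e-9, 1.3e-8, 3.1e-8])
kT = np.array([3.2, 2.5, 1.7, 2.6, 2.0, 1.5])

R2 = emission_radius_km(flux, 8.7, kT)**2
rho, p_chance = spearmanr(R2, kT)
print(f"Spearman rho = {rho:.3f}, chance probability = {p_chance:.3g}")
```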
We show a color-coded scatterplot of different flux values for SGR 1806-20 in the upper right panel of Figure 9. The dashed lines represent the best-fit broken power law trends, and the color-coded arrows mark the power law break energy for each intensity group. We present the complete fit results in Table 9. Note that the kT uncertainties are presented in Table 9 and Figures 9b, c, and d; the R^2 uncertainties are propagated ignoring covariance terms and are therefore overestimated. We find that as the burst intensity increases, the power law index and the power law break (in kT) tend to increase for all three groups. We note that the power law trends between the temperature and the emitting area are similar for the cool and hot blackbody components at flux values below 10^-7.9 erg/cm^2/s; namely, a single power law represents the trends for both the cooler and hot components. For the highest flux group, the power law trends for the cool and hot components differ significantly. In line with this result, the difference between the broken power law indices, as well as between the reduced χ^2 values of the single and broken power law fits, increases with increasing burst intensity, indicating that the temperature and emission area relation of the two blackbody components differs more with increasing burst intensity.

Table 9. Single and broken power law fit results with the corresponding flux intervals.

Source | log_10 Flux Interval (erg/cm^2/s) | Broken PL Low-Energy Index | Broken PL High-Energy Index | kT_break (keV) | Broken PL χ^2/DOF | Single PL Index | Single PL χ^2/DOF
SGR 1806-20 | -8.31, -7.91 | -3.79 ± 0.03 | -3.78 ± 0.03 | 5.26 | 0.93 | -3.2 ± 0.08 | 0.98
SGR 1806-20 | -7.91, -7.77 | -4.04 ± 0.03 | -3.91 ± 0.04 | 7.54 | 0.4 | -3.23 ± 0.07 | 0.53
SGR 1806-20 | -7.77, -7.26 | -7.59 ± 0.02 | -5.57 ± 0.04 | 8.08 | 0.87 | -3.42 ± 0.06 | 2.15
SGR 1900+14 | -8.4, -7.16 | -5.44 ± 0.01 | -4.00 ± 0.04 | 6.01 | 2.12 | -3.6 ± 0.08 | 2.84
SGR J1550-5418 | -8.43, -7.58 | -3.55 ± 0.03 | -3.43 ± 0.03 | 7.58 | 0.92 | -3.29 ± 0.07 | 0.88

We could not split the burst samples of SGR J1550-5418 and SGR 1900+14 into intensity groups, since these sources have far fewer unsaturated bursts with well-constrained parameters. For the SGR J1550-5418 bursts (see Figure 9, lower left panel), the power law indices of the low- and high-temperature components are very similar, suggesting that a single power law trend could represent the whole sample. A single power law fit for SGR J1550-5418 yields R^2 ∼ T^-3.3 ± 0.07 with a reduced χ^2 of 0.88. Note that the bursts from this magnetar have flux values less than 10^-7.58 erg/cm^2/s. In the case of the SGR 1900+14 bursts (Figure 9, lower right panel), we find a significant change of the power law trends for the two blackbody components, in line with the case of the higher-intensity bursts from SGR 1806-20. Note that on average the SGR 1900+14 bursts have higher flux values than those of SGR J1550-5418. These relations between emission area and temperature differ significantly from the relations discussed in , where the emitting area decreases more with increasing temperature for the hot blackbody component. However, we note that the flux range analyzed by for SGR J1550-5418 (above 10^-6.5 erg/cm^2/s) is much higher than the flux range covered in our investigation (∼ 10^-9-10^-8 erg/cm^2/s). It is possible that the area vs. kT behavior of the two blackbody components differs more significantly with increasing flux than previously discussed, or that the relationship differs between sources or burst episodes. It remains likely that the trends of the lower and upper power law indices show opposite behaviors depending on the burst intensity.
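The fits reported in Table 9 can be set up schematically as follows: both models are linear in log-log space, the broken power law is continuous at the break, and the break temperature is a free parameter. The data arrays here are synthetic placeholders, not the actual burst measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def single_pl(log_kT, log_A, index):
    # log10 R^2 = log10 A + index * log10 kT
    return log_A + index * log_kT

def broken_pl(log_kT, log_A, idx_lo, idx_hi, log_kT_break):
    # continuous broken power law: slope idx_lo below the break, idx_hi above
    hi_offset = log_A + (idx_lo - idx_hi) * log_kT_break
    return np.where(log_kT < log_kT_break,
                    log_A + idx_lo * log_kT,
                    hi_offset + idx_hi * log_kT)

# synthetic (kT [keV], R^2 [km^2]) sample spanning cool and hot components
rng = np.random.default_rng(2)
kT = np.array([1.5, 2.0, 2.5, 3.0, 3.5, 6.0, 8.0, 10.0, 12.0, 15.0])
R2 = 1e4 * kT**-3.3 * 10**rng.normal(0.0, 0.03, kT.size)

x, y = np.log10(kT), np.log10(R2)
p_single, _ = curve_fit(single_pl, x, y, p0=[4.0, -3.0])
p_broken, _ = curve_fit(broken_pl, x, y, p0=[4.0, -3.0, -4.0, np.log10(5.0)])

print("single power law index:", round(p_single[1], 2))
print("broken power law indices:", round(p_broken[1], 2), round(p_broken[2], 2),
      "break at kT =", round(10**p_broken[3], 2), "keV")
```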
We suggest that the power law trends of the cool and hot components are the same in the flux regime of ∼ 10^-8 erg/cm^2/s. Above this flux level, the cooler component shows a greater decrease in emission area with increasing temperature than the hot component (as in the high-intensity case of SGR 1806-20; Figure 9, upper right panel). In the much higher flux regime (above ∼ 10^-7 erg/cm^2/s), the opposite behavior takes place, as presented by ; namely, the emission area of the hot component drops more rapidly with temperature than that of the cool component.

§ CONCLUDING REMARKS

We presented our time-averaged spectral analysis results for a total of 388 bursts from SGR J1550-5418, SGR 1900+14, and SGR 1806-20, both as machine-readable tables in this paper and as a database in a companion web-catalog at <http://magnetars.sabanciuniv.edu>. Our spectral analysis shows that the BB+BB, BB+PO, LB+LB, and COMPT models all provide acceptable fits at similar levels. We further conducted numerical simulations to constrain the best-fitting model. Based on the simulation results, we suggest that COMPT provides significantly better fits with sufficiently constrained parameters. We suggest that the inherent emission mechanism is likely non-thermal, or at least not purely thermal, for the majority of bursts included in our study within the 2-250 keV range. It is important to note that since our analysis covers time-averaged spectra only, these results may not fully represent instantaneous burst properties. Excluding the COMPT fit results, we find that the LB+LB model, which is employed in SGR spectral analysis for the first time here, describes the majority of bursts with well-constrained parameters better than the BB+BB and BB+PO models.

We find that the photon index is positively correlated with fluence and has an increasing lower-bound trend with respect to burst flux, suggesting that bursts have a decreasing upper bound on hardness with increasing flux. This behavior is similar to the previously reported anti-correlation between hardness and fluence for SGR J1550-5418 bursts, and confirms the anti-correlation between hardness and fluence for SGR 1806-20 and SGR 1900+14 bursts. Since it was shown that AXP 1E 2259+586 bursts show the opposite trend (a positive correlation between burst hardness and fluence), we suggest that the relationship between hardness and burst fluence is a distinctive behavior between AXP and SGR bursts.

We confirm a significant anti-correlation between the blackbody temperatures (hot and cold) and the burst emission areas. Overall, the emission area decreases with increasing blackbody temperature in all cases, verifying the thermal emission model of a hot bubble, where the emission radiates from a hot ignition point to a colder and wider emission area. Examining the same relation in different flux intervals of SGR 1806-20 bursts, we found that above the flux regime of ∼ 10^-8 erg cm^-2 s^-1, the emission area decreases more rapidly with respect to temperature for the cooler blackbody. This is in contrast to what has been reported previously (e.g., ). We suggest that the area vs. kT behaviors of the two blackbody components may differ more significantly with increasing flux than previously discussed. It is also possible that the trends of the lower and upper power law indices for the cool and hot blackbody components show opposite behaviors at different burst intensities.

D.K., Y.K.
and S.S.M. acknowledge support from the Scientific and Technological Research Council of Turkey (TÜBİTAK, grant no. 113R031).

References

[Beloborodov & Thompson(2007a)] Beloborodov, A. M., & Thompson, C. 2007a, , 657, 967
[Beloborodov & Thompson(2007b)] Beloborodov, A. M., & Thompson, C. 2007b, , 308, 631
[Bibby et al.(2008)] Bibby, J. L., Crowther, P. A., Furness, J. P., et al. 2008, , 386, L23
[Davies et al.(2009)] Davies, B., Figer, D. F., Kudritzki, R. P., et al. 2009, , 707, 844
[Feroci et al.(2004)] Feroci, M., Caliandro, G. A., Massaro, E., et al. 2004, , 612, 408
[Gavriil et al.(2004)] Gavriil, F. P., Kaspi, V. M., & Woods, P. M. 2004, , 607, 959
[Gogus et al.(2007)] Gogus, E., Woods, P. M., Kouveliotou, C., et al. 2007, Progress of Theoretical Physics Supplement, 169, 12
[Gogus et al.(2011)] Gogus, E., Guver, T., Ozel, F., et al. 2011, , 728, 160
[Gogus et al.(2001)] Gogus, E., Kouveliotou, C., Woods, P. M., et al. 2001, , 558, 228
[Halpern et al.(2008)] Halpern, J. P., Gotthelf, E. V., Reynolds, J., et al. 2008, , 676, 1178
[Israel et al.(2008)] Israel, G. L., Romano, P., Mangano, V., et al. 2008, , 685, 1114
[Israel et al.(2010)] Israel, G. L., Esposito, P., Rea, N., et al. 2010, , 408, 1387
[Jahoda et al.(2006)] Jahoda, K., Markwardt, C. B., Radeva, Y., et al. 2006, , 163, 401
[Kaspi & Beloborodov(2017)] Kaspi, V. M., & Beloborodov, A. 2017, arXiv e-prints, arXiv:1703.00068
[Lin et al.(2013)] Lin, L., Gogus, E., Kaneko, Y., et al. 2013, , 778, 105
[Lin et al.(2011)] Lin, L., Kouveliotou, C., Baring, M. G., et al. 2011, , 739, 87
[Lin et al.(2012)] Lin, L., Gogus, E., Baring, M. G., et al. 2012, , 756, 54
[Lyubarsky(2002)] Lyubarsky, Y. E. 2002, , 332, 199
[Lyutikov(2003)] Lyutikov, M. 2003, , 346, 540
[Manchester et al.(2005)] Manchester, R. N., Hobbs, G. B., Teoh, A., et al. 2005, , 129, 1993
[Mereghetti et al.(2015)] Mereghetti, S., Pons, J. A., & Melatos, A. 2015, , 191, 315
[Nobili et al.(2008)] Nobili, L., Turolla, R., & Zane, S. 2008, , 389, 989
[Olive et al.(2003)] Olive, J. F., Hurley, K., Dezalay, J. P., et al. 2003, in American Institute of Physics Conference Series, Vol. 662, Gamma-Ray Burst and Afterglow Astronomy 2001: A Workshop Celebrating the First Year of the HETE Mission, ed. G. R. Ricker & R. K. Vanderspek, 82-87
[Scargle et al.(2013)] Scargle, J. D., Norris, J. P., Jackson, B., et al. 2013, , 764, 167
[Thompson & Beloborodov(2005)] Thompson, C., & Beloborodov, A. M. 2005, , 634, 565
[Thompson & Duncan(1995)] Thompson, C., & Duncan, R. C. 1995, , 275, 255
[Thompson et al.(2002)] Thompson, C., Lyutikov, M., & Kulkarni, S. R. 2002, , 574, 332
[Tiengo et al.(2010)] Tiengo, A., Vianello, G., Esposito, P., et al. 2010, , 710, 227
[Turolla et al.(2015)] Turolla, R., Zane, S., & Watts, A. L. 2015, Reports on Progress in Physics, 78, 116901
[van der Horst et al.(2012)] van der Horst, A. J., Kouveliotou, C., Gorgone, N. M., et al. 2012, , 749, 122
[Woods et al.(2007)] Woods, P. M., Kouveliotou, C., Finger, M. H., et al. 2007, , 654, 470
[Younes et al.(2014)] Younes, G., Kouveliotou, C., van der Horst, A. J., et al. 2014, , 785, 52
{ "authors": [ "Demet Kirmizibayrak", "Sinem Sasmaz Mus", "Yuki Kaneko", "Ersin Gogus" ], "categories": [ "astro-ph.HE" ], "primary_category": "astro-ph.HE", "published": "20170827084222", "title": "Broadband Spectral Investigations of Magnetar Bursts" }
Critical behavior of a chiral superfluid in a bipartite square lattice

^1 Zentrum für Optische Quantentechnologien and Institut für Laserphysik, Universität Hamburg, 22761 Hamburg, Germany
^2 The Hamburg Centre for Ultrafast Imaging, Luruper Chaussee 149, 22761 Hamburg, Germany
^3 Department of Physics, National Chung-Hsing University, Taichung 40227, Taiwan
^* [email protected]

We study the critical behavior of Bose-Einstein condensation in the second band of a bipartite optical square lattice in a renormalization group framework at one-loop order. Within our field theoretical representation of the system, we approximate the system as a two-component Bose gas in three dimensions. We demonstrate that the system is in a different universality class than the previously studied condensation in a frustrated triangular lattice, due to an additional Umklapp scattering term, which stabilizes the chiral superfluid order at low temperatures. We derive the renormalization group flow of the system and show that this order persists in the low-energy limit. Furthermore, the renormalization flow suggests that the phase transition from the thermal phase to the chiral superfluid state is first order.

§ INTRODUCTION

Unconventional Bose-Einstein condensates (BECs), whose order parameter space is not simply the usual U(1) symmetry, have been studied extensively. Examples from the field of ultracold atoms include Floquet-engineered Bose gases <cit.> and spinor Bose gases <cit.>, where the order parameter space has an additional Ising component or an even more complex symmetry group. The experimental realization of such systems with ultracold atoms is ideal for investigating the phase transitions of these complex orders, due to the well-defined and tunable nature of these systems. Recently, BECs that break time-reversal (TR) symmetry have attracted increased attention from theorists <cit.> and from experimentalists <cit.>. According to Feynman's "no-node" theorem <cit.>, such states cannot be the ground state of a conventional bosonic Hamiltonian with short-range interactions, since breaking TR symmetry inevitably leads to a wave function with a node in real space. Therefore, to create BECs without TR symmetry, the assumptions of the no-node theorem have to be circumvented. One approach uses a long-lived metastable state of ultracold bosons in bands of higher orbitals <cit.>. Experimentally, a BEC in a p-band has been realized by first populating particles in a staggered pattern in a checkerboard lattice and then suddenly changing the potential shape. Due to the large anharmonicity in the energy spectrum, the lifetime of the metastable BEC is longer than 100 ms <cit.>. Since this BEC is not a ground state, breaking TR symmetry is not in contradiction with the no-node theorem. Indeed, a complex coherent superposition of the two p-band condensates (i.e., p_x ± i p_y order) that breaks TR symmetry has been realized by carefully tuning the lattice parameters <cit.>. Since such a state hosts spatially staggered orbital currents, it is dubbed a chiral superfluid. Similar p_x ± i p_y pairing has been proposed for the A phase of superfluid ^3He <cit.> and for Sr_2RuO_4 <cit.>.

In this paper, we investigate the stability of the chiral condensate and its critical behavior with a renormalization group (RG) analysis. The analysis addresses the competition between chiral and non-chiral condensation, and the critical behavior.
While the chiral superfluid has been confirmed experimentally, it is still important to know how stable and general this state is. In particular, since the energies of the competing non-chiral BEC and of the chiral BEC are close at the mean-field level, it is not trivial which of the two BECs becomes dominant at low temperatures. We find that the stability condition of the chiral BEC is always preserved at low energy scales, and that the transition is expected to be first order. The paper is organized as follows. In Section <ref> we develop the field theoretical description of the mixed orbital model in a bipartite optical square lattice in the low-energy limit. Section <ref> is devoted to a mean-field analysis of the effective model. In Section <ref>, we study the critical behavior of the model in the framework of a one-loop RG calculation. In Section <ref>, we conclude. The details of calculations not covered in the main text are summarized in the appendices.

§ EFFECTIVE FIELD THEORY

The system that we consider here is sketched in Figure <ref>(a) and described by the Hamiltonian

H_0 = ∫ dz {∑_r,i b^†_i(r,z)(-ħ^2∂^2_z/2m_0 - μ_3D) b_i(r,z) + H^xy_0},

with the tight-binding model of a bipartite optical square lattice

H^xy_0 = J∑_r[b_1^†(r,z)b_2(r+d_1,z) + b_1^†(r,z)b_3(r+d_2,z) - b_1^†(r,z)b_2(r-d_1,z) - b_1^†(r,z)b_3(r-d_2,z) + h.c.].

Here r = √(2) a (n_x, n_y) with n_x,y ∈ ℤ, and d_1/2 = (a/√(2), ± a/√(2)). The b_i(r,z) with i = 1,2,3 represent the annihilation operators of bosons in the s, p_x and p_y orbitals, respectively. We assume that the bosons move freely along the z-direction. The hopping amplitudes between neighboring p-orbitals, J_∥ and J_⊥, are set to zero for simplicity (see the appendix for a more general discussion). Converting the orbital representation into a band representation, we obtain three bands. The metastable BEC in experiments is loaded into the lowest band, whose dispersion is given by

ϵ(k,k_z) = -2J√(1-cos(√(2)k_x)cos(√(2)k_y)) + ħ^2 k_z^2/2m_0,

where k = (k_x,k_y), and we set a = 1 in the following calculations. We illustrate the lowest band in momentum space in Figure <ref>(c). We note that there are two energetic minima at k_1 = (π/√(2),0) and k_2 = (0,π/√(2)). These energetic minima are degenerate, thus giving rise to the ℤ_2 symmetry of the noninteracting Hamiltonian. At low temperatures, bosons predominantly occupy momentum states near the two minima, and then condense below a critical temperature. To describe the critical behavior, we expand the bosonic operators near the two minima as

b_α(r,z) = 1/√(N)∑_k e^ik·r u_α(k)ϕ(k,z) ≃ 2a^2/√(N)∑_j=1,2 e^ik_j·r u_αj ∫_|q_j|<Λ_q d^2q_j/4π^2 e^iq_j·r ϕ_j(q_j,z) ≡ √(2)a∑_j=1,2 ψ_j(r,z) u_αj e^ik_j·r,

where ψ_j(r,z) ≡ √(2)a/√(N)∫_|q_j|<Λ_q d^2q_j/4π^2 e^iq_j·r ϕ_j(q_j,z), with ϕ(k_j+q_j,z) ≡ ϕ_j(q_j,z). Λ_q is the momentum cut-off, and N is the number of unit cells. The kernel u_α(k_j) = u_αj represents the projection of the wave function of orbital α onto the wave function of the lowest band in the vicinity of the minimum k_j. These are given by (u_11,u_12) = (-i/√(2), i/√(2)), (u_21,u_22) = (1/2, -1/2) and (u_31,u_32) = (1/2, 1/2). We use the field decomposition to approximate the full Hamiltonian of (<ref>) by an effective Hamiltonian with two components,

H_0^eff = ∑_j=1,2 ∫ d^3R ψ^†_j(R)[-ħ^2/2m^* ∇_R^2 - μ_j]ψ_j(R),

where R = (r,z), μ_j is the chemical potential of the j-th component, and the effective mass is m^* = (m_xy^2 m_0)^1/3.
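As a quick numerical check of the two-minimum structure just described, the sketch below evaluates the in-plane part of the lowest-band dispersion (in units J = 1, a = 1, at k_z = 0) on a grid and at the quoted minima; the grid size is an arbitrary illustrative choice.

```python
import numpy as np

def eps(kx, ky):
    # in-plane lowest-band dispersion eps(k) = -2J*sqrt(1 - cos cos), J = a = 1
    return -2.0 * np.sqrt(1.0 - np.cos(np.sqrt(2) * kx) * np.cos(np.sqrt(2) * ky))

# brute-force scan of the Brillouin zone for the global minimum
k = np.linspace(-np.pi, np.pi, 801)
KX, KY = np.meshgrid(k, k)
print("grid minimum:", eps(KX, KY).min())        # approaches -2*sqrt(2)

# the two inequivalent minima quoted in the text
k1 = (np.pi / np.sqrt(2), 0.0)
k2 = (0.0, np.pi / np.sqrt(2))
print("eps(k1) =", eps(*k1), " eps(k2) =", eps(*k2))  # both equal -2*sqrt(2)
```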
As an example, we consider the experimental parameters of <cit.>: for ^87Rb atoms and for the typical laser intensities that were used, we have m_0 ≃ 0.2 m_xy and m_xy = 2√(2)ħ^2/(λ_L^2 J), with the laser wavelength λ_L = 1064 nm and J/E_rec = 0.13. We note that to simplify the RG analysis we use an isotropic effective model; the momentum cut-off in the field decomposition sets the energy cut-off of the effective Hamiltonian as ϵ_Λ = ħ^2Λ_q^2/2m^*. We further consider the on-site interaction (see <cit.>), which gives the following terms:

H_I = ∫ dz∑_r{U_s/2 n_s(R)[n_s(R)-1] + U_p/2 n_p(R)[n_p(R)-1] - U'_p/2[L_z^2(R) - n_p(R)]},

where n_s = b_1^† b_1, n_p = b_2^†b_2 + b_3^†b_3, and L_z = i(b^†_2b_3 - b^†_3b_2) is an angular momentum operator. U_s is the on-site interaction among the s-orbitals; U_p and U'_p are the intra- and inter-orbital interactions among the p-orbitals. In the tight-binding approximation, the strengths of the on-site interactions can be calculated from the contact interaction and the Wannier functions <cit.>; for the details, see the appendix. In the standard harmonic approximation, the on-site interactions obey U'_p = U_p/3. The precise ratio between U_s and U_p depends on the depth of the optical potential, since the harmonic frequency of the s-orbital sites differs from that of the p-orbital sites. For a moderately deep potential, we find U_s ∼ U_p. Within the field-theory approximation of (<ref>), we represent the effective interaction as

H^eff_I = ∫ d^3R{g̃_1/2 ψ_1^†(R)ψ_1^†(R)ψ_1(R)ψ_1(R) + g̃_2/2 ψ_2^†(R)ψ_2^†(R)ψ_2(R)ψ_2(R) + g̃_12 ψ_1^†(R)ψ_2^†(R)ψ_2(R)ψ_1(R) + g̃_u/2[ψ_1^†(R)ψ_1^†(R)ψ_2(R)ψ_2(R) + H.c.]},

where g̃_j (j = 1,2) is the intra-component interaction and g̃_12 is the inter-component one. In this expression, there is an additional term with the interaction strength g̃_u, which is an Umklapp term. This additional scattering process is not present in the previously studied triangular lattice system <cit.>, which demonstrates that these two systems are in different universality classes. Our model is more general in the sense that three coupling constants flow independently under the renormalization equations. The Umklapp interaction allows the interchange of bosons between the two components by lattice-assisted collisions. In other words, the effective interaction only enforces conservation of the total boson number, instead of the boson number of each component as, for instance, in the frustrated triangular optical lattice <cit.>. Equivalently, we only have one global U(1) symmetry, instead of two U(1) symmetries, one for each component. The bare values of the coupling constants in terms of U_s, U_p and U'_p are:

g̃_1/2 = (2U_s + U_p + 3U'_p)/8, g̃_12 = (2U_s + U_p - U'_p)/4, g̃_u = g̃_12/2.

The full symmetry of the effective action is U(1) × ℤ_2 × Θ, where the U(1) symmetry corresponds to the invariance of the model under a global phase shift of both components, ℤ_2 is the exchange of the two components, and Θ is the time-reversal symmetry.

§ MEAN-FIELD THEORY

Before proceeding to the RG analysis, we study the ground state for the bare interactions within a zero-temperature approach. We assume that the bosons condense perfectly at the two energetic minima, so that the many-body trial wave function can be represented as

|Ψ⟩_θ,ϕ = 1/√(N!)[cosθ ψ_1^† + e^iϕ sinθ ψ_2^†]^N|0,0⟩,

where N is the total number of bosons, and |m,n⟩ stands for the bosons' occupation numbers m (n) at momentum k_1 (k_2), respectively. We will use the angles θ and ϕ as variational parameters.
θ determines the relative population of the two minima, and ϕ denotes the relative phase of the two-component condensates. Using the trial wave function, we compute the energy of the interacting effective Hamiltonian (<ref>) as

⟨H^eff_I⟩_θ,ϕ = N(N-1){g̃_1/2 cos^4θ + g̃_2/2 sin^4θ + g̃_12 cos^2θ sin^2θ + g̃_u cos^2θ sin^2θ cos2ϕ}.

First we note that for the frustrated triangular optical lattice, the Umklapp interaction does not occur (g̃_u = 0), and we have g̃_1 = g̃_2 ≈ 2g̃_12, which follows from the common origin of these terms, i.e., the contact interaction between the atoms. In this case, the minimum of (<ref>) occurs at θ = 0 or π/2. From (<ref>), this means that the bosons condense in one of the energetic minima and break the ℤ_2 symmetry <cit.>. However, in the square bipartite lattice that we study in this paper, the Umklapp interaction g̃_u > 0 exists due to the bare on-site repulsive interactions U_s ∼ U_p ≫ U'_p > 0. In this case, another energetic minimum may appear at (ϕ, θ) = (±π/2, π/4) in (<ref>); this corresponds to a chiral superfluid state |Ψ⟩ = [ψ_1^† ± i ψ_2^†]^N|0,0⟩/√(2^N N!), given by a complex coherent superposition of two single-particle states. This state breaks the time-reversal symmetry Θ, i.e., the chiral ℤ_2 symmetry, in addition to the continuous U(1) symmetry of the phase. (The situation is similar to the fully frustrated XY models <cit.>.) Comparing this to the single condensate at θ = 0, we find that the chiral superfluid state occurs when

G_1 ≡ g̃_0 - g̃_12 + g̃_u > 0,

where we set g̃_1 = g̃_2 = g̃_0. The above condition is also discussed in <cit.>. In terms of the interaction parameters in (<ref>), we find g̃_0 - g̃_12 + g̃_u = U'_p/2 > 0, and thus a chiral superfluid order occurs for any repulsive interaction. Even if we include non-zero values of J_⊥ and J_∥, we find that the condition is still satisfied as long as the energetic minima are located at k_1 = (π/√(2),0) and k_2 = (0,π/√(2)) (see the appendix). However, the energy difference between the normal and chiral condensates is of the order of ∼ U_p' at the mean-field level, and at low temperatures the coupling constants get renormalized under the RG flow. It is then nontrivial which superfluid order emerges at low temperatures. To study this problem more systematically, and to study the critical behavior in the low-energy limit, we apply the renormalization group method in the next section.

§ ONE-LOOP RENORMALIZATION GROUP METHOD

Following the RG method employed in <cit.> and ignoring quantum fluctuations, we calculate the RG equations at one-loop order as <cit.>

dμ_Λ/dl = 2μ_Λ - T_Λ F(μ_Λ)(2g_0 + g_12),
dg_0/dl = ϵ g_0 - T_Λ F(μ_Λ)^2 (5g_0^2 + g_12^2 + g_u^2),
dg_12/dl = ϵ g_12 - T_Λ F(μ_Λ)^2 (4g_0g_12 + 2g_12^2 + 4g_u^2),
dg_u/dl = ϵ g_u - T_Λ F(μ_Λ)^2 (2g_0g_u + 4g_12g_u),

where l = ln(Λ_q/Λ_b) is the logarithm of the ratio between the bare momentum cutoff Λ_q and the running cutoff Λ_b. Here ϵ = 4-d, with d the spatial dimension of the system, and ϵ = 1 for our three-dimensional model. We have also defined the dimensionless parameters μ_Λ = μ_1/ϵ_Λ = μ_2/ϵ_Λ, T_Λ = k_BT/ϵ_Λ, g_i = g̃_i Λ_q^3/(2π^2ϵ_Λ), and F(μ_Λ) = 1/(1-μ_Λ). If g_u = 0, the flow equations become those of the two-component ϕ^4-theory <cit.>. In the critical regime, μ_Λ ≪ 1, where we can approximate F(μ_Λ) ≈ 1 [this corresponds to the lowest order of the ϵ-expansion <cit.>; the effect of higher-order contributions to the fixed points is discussed in the appendix],
the RG equations exhibit four fixed points besides the trivial one (μ^*_Λ, g^*_0, g^*_12, g^*_u) = (0,0,0,0):

(μ^*_Λ, g^*_0, g^*_12, g^*_u) = (1/5, 1/(5T_Λ), 0, 0),
(μ^*_Λ, g^*_0, g^*_12, g^*_u) = (1/4, 1/(6T_Λ), 1/(6T_Λ), 0),
(μ^*_Λ, g^*_0, g^*_12, g^*_u) = (1/5, 1/(10T_Λ), 1/(5T_Λ), 1/(10T_Λ)),
(μ^*_Λ, g^*_0, g^*_12, g^*_u) = (1/5, 1/(10T_Λ), 1/(5T_Λ), -1/(10T_Λ)).

All these fixed points are unstable, indicating that the system undergoes a first-order transition. In Figure <ref>(a), we show the fixed points as red points for g_u ≥ 0.

§.§ Basic structure of the RG equations

While the full RG equations are complicated and can only be solved numerically, their basic structure gives useful insight into the RG flow. In particular, we find three separatrix surfaces of the RG equations. The RG flow cannot pass through these surfaces, and therefore the asymptotic behavior of the RG flow is severely constrained by the initial condition. The first separatrix is g_u = 0; the formal solution of (<ref>) is

g_u(l) = g_u(0) exp{∫_0^l dl'[1 - T_Λ F(μ_Λ)^2(2g_0 + 4g_12)]},

which explicitly shows that g_u does not change sign under the RG. In Figure <ref>(b), we show the flow diagram on the g_u = 0 surface, which was also introduced in <cit.>. The two fixed points (g^*_0, g^*_12, g^*_u) = (0,0,0) and (1/(5T_Λ),0,0) are unstable, and the one at (1/(6T_Λ), 1/(6T_Λ), 0) is marginally unstable. The second separatrix surface is G_1 ≡ g_0 - g_12 + g_u = 0. The RG equation for G_1 is

dG_1/dl = G_1 - T_Λ F(μ_Λ)^2 G_1 (5g_0 + g_12 - 3g_u).

We emphasize that G_1 > 0 coincides with the mean-field stability condition for a chiral superfluid, (<ref>), and therefore, for general repulsive interactions that give G_1 > 0 as an initial condition, a chiral superfluid order always persists in the low-energy limit. In Figure <ref>(c), we illustrate the flow diagram on the G_1 = 0 plane for g_u > 0. There are two unstable fixed points, (<ref>) and (<ref>), on the plane, in addition to the trivial one (g^*_0, g^*_12, g^*_u) = (0,0,0). In particular, the ray from (g^*_0, g^*_12, g^*_u) = (0,0,0) to (1/(10T_Λ), 1/(5T_Λ), 1/(10T_Λ)) separates the RG flows into two parts. For initial conditions with g_0 > g_u on the G_1 = 0 plane, we find that the system eventually flows into the fixed point (g^*_0, g^*_12, g^*_u) = (1/(6T_Λ), 1/(6T_Λ), 0), where the system reduces to the standard two-component ϕ^4 theory. However, if the initial conditions deviate from the G_1 = 0 plane by an arbitrarily small amount, g_u grows to a large positive value, and thus this fixed point is actually unstable. For initial conditions with g_0 < g_u on the G_1 = 0 plane, the flow first approaches (1/(10T_Λ), 1/(5T_Λ), 1/(10T_Λ)) and then runs away to larger positive values of g_u. Finally, the third separatrix surface is G_2 ≡ g_0 - g_12 - g_u = 0, and its RG equation is

dG_2/dl = G_2 - T_Λ F(μ_Λ)^2 G_2 (5g_0 + g_12 + 3g_u).

The flow equation is similar to (<ref>), and it guarantees that no flow passes through the G_2 = 0 plane. As illustrated in Figure <ref>(d), we find that the Umklapp interaction g_u always grows on the G_2 = 0 plane. The growth of g_u is also found when the initial couplings deviate from the G_2 = 0 plane, as we discuss below.

§.§ RG flows for the effective model

Now let us analyze the RG equations in the parameter regime relevant for our model. For our effective model, we have U_s ∼ U_p ∼ 3U'_p > 0, and the RG flow is constrained to the space of g_u > 0, G_1 > 0 and G_2 < 0. Due to this constraint, as we show below, the possible phases of our effective model are either the thermal gas phase (μ_Λ → -∞) or the chiral superfluid phase (μ_Λ → +1).
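A minimal numerical integration of the four flow equations above illustrates this behavior. The dimensionless temperature and the initial couplings below are arbitrary illustrative choices in the regime g_u > 0, G_1 > 0, G_2 < 0, and the integration is stopped before μ_Λ reaches one, where F(μ_Λ) diverges.

```python
import numpy as np
from scipy.integrate import solve_ivp

T = 0.1                        # dimensionless temperature T_Lambda (arbitrary)

def rg_rhs(l, y, eps=1.0):
    """One-loop flow of (mu, g0, g12, gu) as written in the text."""
    mu, g0, g12, gu = y
    F = 1.0 / (1.0 - mu)
    return [2.0 * mu - T * F * (2.0 * g0 + g12),
            eps * g0 - T * F**2 * (5.0 * g0**2 + g12**2 + gu**2),
            eps * g12 - T * F**2 * (4.0 * g0 * g12 + 2.0 * g12**2 + 4.0 * gu**2),
            eps * gu - T * F**2 * (2.0 * g0 * gu + 4.0 * g12 * gu)]

def near_condensation(l, y):   # stop once mu_Lambda approaches one
    return y[0] - 0.95
near_condensation.terminal = True

y0 = [0.01, 0.05, 0.04, 0.02]  # gu > 0, G1 = 0.03 > 0, G2 = -0.01 < 0
sol = solve_ivp(rg_rhs, (0.0, 20.0), y0, events=near_condensation, rtol=1e-8)

mu, g0, g12, gu = sol.y[:, -1]
print(f"stopped at l = {sol.t[-1]:.2f}: mu = {mu:.2f}, "
      f"G1 = {g0 - g12 + gu:.3f}, gu = {gu:.3f}")
```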
The phases are determined by the non-universal nature of the RG flows and the initial conditions. First, when μ_Λ remains positive, a typical flow of the coupling constants behaves as in Figure <ref>. For small positive initial interactions, a typical flow has a monotonically increasing g_u, which can also be seen from (<ref>). For the evolution of g_0 and g_12, there are three regimes that we can characterize:

* 0 < l < l_1 (the initial regime): the linear terms in the RG equations are dominant, and g_0 and g_12 gradually increase.
* l_1 < l < l_2 (the intermediate regime): the quadratic terms become more important with increasing g_u, which eventually makes g_0 and g_12 negative.
* l_2 < l < l_3 (the asymptotic regime): the quadratic terms give asymptotically diverging behavior, while the coupling constants are still smaller than unity.

Of course, we should stop the RG flow before any of the coupling constants becomes of order unity near l_3, above which the cubic terms become dominant. In the asymptotic regime, μ_Λ approaches one, and the quadratic terms in the RG equations become dominant. Ignoring the linear terms, the flow can be analyzed with the ansatz g_i(l) = kg̅_i/(1-kl), where k is the inverse length scale at which these coupling constants diverge <cit.>. We find that the only asymptotic flow constrained to the space of g_u > 0, G_1 > 0 and G_2 < 0 is given by

(g̅_0, g̅_12, g̅_u) = 1/(T_Λ F(μ_Λ)^2) (-1/10, -1/5, 1/10).

This implies that G_1 steadily increases and thus stabilizes the chiral superfluid order, (<ref>). At the same time, the quartic interactions g_0 and g_12 are renormalized to negative values. This indicates the breakdown of the quartic effective field theory, and we need to include higher-order interactions such as

g_6 ∫ d^3R {|ψ_1(R)|^6 + |ψ_2(R)|^6},

which is generated from three quartic vertices after one-loop renormalization. We note that at tree level the g_6 contribution is marginal and stabilizes the system <cit.>. At one-loop order, one contribution to the RG equation for g_6 is dg_6/dl ≃ -24 T_Λ F(μ_Λ)^2 g_0 g_6. Since g_0 flows to negative values, this contribution further stabilizes the system. Second, when the chemical potential becomes negative, F(μ_Λ) = 1/(1-μ_Λ) gets suppressed, making the linear terms in the RG equations more dominant. Therefore, typical RG flows show the simple scaling behavior (μ_Λ, g_i) ∼ (-e^2l, e^l). This corresponds to the thermal gas phase without condensation.

§.§ Phase diagram

The RG analysis can also be used to study the critical behavior of the phase transition between the two phases. In Figure <ref>, we illustrate the phase diagram as a function of U_s and U_p. The pink region represents the thermal gas phase, in which μ_Λ flows to negative values under the RG transformations. The blue region is the chiral p_x ± i p_y superfluid order with time-reversal symmetry breaking, where μ_Λ approaches one. As μ_Λ(0) increases or T_Λ decreases, the phase boundary is shifted such that the superfluid region is enlarged. We note that along the asymptotic flow (<ref>), the mean-field free energy can be written in terms of the two order parameters P_± ≡ ψ_1 ± iψ_2 as

F^Λ_MF ∼ μ_Λ (|P_-|^2 + |P_+|^2) - g̅ (|P_-|^4 + |P_+|^4) + 𝒪(P_-^6, P_+^6),

where g̅ > 0 is the single coefficient characterizing the asymptotic flow (<ref>), and the last term is the higher-order correction stabilizing the system. This form suggests that the transition is first order. Another indirect support for this scenario is obtained by considering the strong-coupling limit for g_0 and g_u, with g_12 set to 0.
In this case, the model effectively has two XY spins at each point in space that are orthogonal to each other due to the strong g_u interaction. Such a model is known as Stiefel's V_2,2 model, and Monte Carlo studies show that it undergoes a first-order transition <cit.>. For moderate interaction strengths, we speculate that the first-order nature becomes weaker. Clarifying whether the transition remains first order for weak interactions seems to require extensive numerical simulations, which are beyond the scope of this paper. [The difficulty of determining the order of transitions is a common problem in frustrated spin systems, whose effective Ginzburg-Landau models are similar to ours <cit.>.]

To detect this first-order transition in experiments, we suggest two possibilities. The first one consists of measuring the condensate fraction as a function of temperature, preferably in a box potential <cit.>. The measured temperature dependence will approach a non-zero jump at the condensation temperature for large systems. As a second approach, we suggest measuring the spatial evolution of a phonon pulse in a condensate in a smoothly varying trap <cit.>. Here, the phonon velocity will vary as the phonon pulse approaches the interface between the condensate and the thermal gas. For a second-order transition the pulse velocity will smoothly approach zero, whereas for a first-order transition the velocity will approach a non-zero value before the pulse is reflected, which gives a clear indication of a first-order transition.

§ CONCLUSIONS

In this paper we have investigated the critical behavior of unconventional Bose-Einstein condensates in the second band of an optical lattice. We have demonstrated that an Umklapp process between the two minima of the dispersion stabilizes a chiral superfluid state that breaks time-reversal symmetry, first at the mean-field level and then within a renormalization group calculation. The latter shows that this stability persists at low energy scales after integrating out thermal fluctuations. We obtain this result by identifying three separatrix planes of the RG flow, which constrain the low-energy behavior to a stable regime. Furthermore, the RG flow suggests that the phase transition from the chiral superfluid state to the thermal state is of first order, in contrast to the usual second-order transition of a conventional condensate.

We thank A. Hemmerich for helpful discussions on the experimental aspects of the system. J.O., R.H., and L.M. acknowledge support from the Deutsche Forschungsgemeinschaft (through SFB 925 and EXC 1074) and from the Landesexzellenzinitiative Hamburg, which is supported by the Joachim Herz Stiftung. W.M.H. especially acknowledges support from the Ministry of Science and Technology, Taiwan, through Grant No. MOST 104-2112-M-005-006-MY3.

§ EFFECTIVE INTERACTIONS FOR GENERAL HOPPING AMPLITUDES AND INTERACTIONS

In this appendix, we start from a single-particle picture in a bipartite optical lattice to derive the hopping amplitudes and interaction parameters of a Bose-Hubbard model. We then show that the condition for the chiral superfluid state, G_1 = g̃_0 - g̃_12 + g̃_u > 0 in (<ref>), is preserved as long as the band has its two minima at k_1 = (π/√(2),0) and k_2 = (0,π/√(2)).
§.§ Derivation of the Hubbard parameters

We start from the following potential:

V(𝐫) = -V_0 |cos[k_0(x+y)/√(2)] + e^iβ cos[k_0(x-y)/√(2)]|^2.

This has two local minima in a unit cell (see Figure <ref>(a)): one is a shallow local minimum hosting an s-orbital-like Wannier state (denoted as A sites), and the other is a deep minimum hosting two p-orbital-like Wannier states (denoted as B sites). k_0 = π/a, with a being the distance between neighboring A and B sites (Figure <ref>(b)). The energy difference between the local minima at the A sites and B sites is ΔV = E_A^0 - E_B^0 = -4V_0 cos(β). We tune β such that the doubly degenerate first excited states at site B are close to the ground state at site A; in particular, we choose β such that the three bands are exactly degenerate at the Γ point in the following. After solving the single-particle Schrödinger equation

[-ħ^2∇^2/2m + V(𝐫)]ψ^n_𝐤(𝐫) = E^n_𝐤 ψ^n_𝐤(𝐫),

we obtain the band dispersion shown in Figure <ref>(b). We fit the obtained dispersion with the following tight-binding model:

H^xy_0 = J∑_𝐫∈A[b_1^†(r,z)b_2(r+d_1,z) + b_1^†(r,z)b_3(r+d_2,z) - b_1^†(r,z)b_2(r-d_1,z) - b_1^†(r,z)b_3(r-d_2,z) + h.c.]
- J_⊥∑_𝐫∈B, ν=x,y[b_2^†(r,z)b_3(r+e_ν,z) + b_3^†(r,z)b_2(r+e_ν,z) + h.c.]
- J_∥∑_𝐫∈B, ν=x,y∑_i=2,3[b_i^†(r,z)b_i(r+e_ν,z) + h.c.]
+ ϵ_A∑_𝐫∈A b_1^†(r,z)b_1(r,z) + ϵ_B∑_𝐫∈B∑_i=2,3 b_i^†(r,z)b_i(r,z),

where e_1 = (√(2)a, 0) and e_2 = (0, √(2)a). The first term is already given in (<ref>). The degeneracy at the Γ point is achieved by setting ϵ_B = ϵ_A + 4J_∥. The fitted band dispersion is plotted in Figure <ref>(b) as dots, and the fitted parameters as functions of the potential depth V_0 are given in Figure <ref>. We note that when the potential is relatively deep, the hopping between the p-orbitals (J_⊥ and J_∥) is much smaller than that between the s- and p-orbitals, J. Thus, in practice, we can ignore J_⊥ and J_∥. We now use the Bloch wave functions of the obtained band dispersions to construct localized Wannier functions. Here we employ a simple projection approach <cit.>. These Wannier functions give the bare interactions of a Bose-Hubbard model as

U_s = g∫ dz |w_z(z)|^4 ∫ d^2r |w_1(r)|^4,
U_p = g∫ dz |w_z(z)|^4 ∫ d^2r |w_2/3(r)|^4,
U'_p = g∫ dz |w_z(z)|^4 ∫ d^2r |w_2(r)|^2 |w_3(r)|^2,

where w_z(z) is the Wannier function of a harmonic trap along the z-axis, and w_i(r), i = 1,2,3, are the Wannier functions of the s-, p_x- and p_y-orbitals in the xy-plane, respectively. g is the contact interaction strength. The obtained values are plotted in Figure <ref>. We find that U_s ∼ U_p ∼ 3U'_p for a moderately deep potential.

§.§ Effective interactions in the field-theory approximation

In this subsection, we show that the condition for chiral superfluidity, (<ref>), is satisfied in general, based on the Hubbard parameters determined above. With the general dispersion of (<ref>) and ϵ_B = ϵ_A + 4J_∥, the projection of the Wannier orbitals onto the two minima of the lowest band becomes

(u_11, u_12) = 1/√(2+|λ|^2) (λ, λ^*),
(u_21, u_22) = 1/√(2+|λ|^2) (1, -1),
(u_31, u_32) = 1/√(2+|λ|^2) (1, 1),

with λ = i(Δ_J - √(2J^2+Δ_J^2))/J and Δ_J = J_⊥ - J_∥. With (<ref>), we can show that

g̃_0 - g̃_12 + g̃_u = 4U'_p J^4 [2J^2+Δ_J^2][J^2 - Δ_J(√(2J^2+Δ_J^2) - Δ_J)] / {3[Δ_J(√(2J^2+Δ_J^2) - Δ_J) - 2J^2]^4}.

For J ≫ J_∥, J_⊥, the above quantity is always positive. Therefore, even when the system deviates from the simple limit J_⊥ = J_∥ = 0, the condition for chiral superfluidity is still satisfied.

§ THE ϵ-EXPANSION ANALYSIS OF THE FIXED POINTS

In Section <ref>, we presented the fixed points that correspond to the lowest order of the ϵ-expansion <cit.>.
§ THE Ε-EXPANSION ANALYSIS OF THE FIXED POINTS In rg, we have shown the fixed points that correspond to the lowest order of the ϵ-expansion <cit.>. To estimate the higher-order effects in the ϵ-expansion, we investigate the fixed points of the RG equations without expanding F(μ_Λ). Finding the zeros of the right-hand sides of muFlow-guFlow leads to four fixed points besides the trivial one (μ^*_Λ, g^*_0, g^*_12, g^*_u) = (0,0,0,0): (μ^*_Λ,g^*_0,g^*_12,g^*_u) = (ϵ/(5+ϵ), 5ϵ/[(5+ϵ)^2 T_Λ], 0, 0), (μ^*_Λ,g^*_0,g^*_12,g^*_u) = (ϵ/(4+ϵ), 8ϵ/[3(4+ϵ)^2 T_Λ], 8ϵ/[3(4+ϵ)^2 T_Λ], 0), (μ^*_Λ,g^*_0,g^*_12,g^*_u) = (ϵ/(5+ϵ), 5ϵ/[2(5+ϵ)^2 T_Λ], 5ϵ/[(5+ϵ)^2 T_Λ], 5ϵ/[2(5+ϵ)^2 T_Λ]), (μ^*_Λ,g^*_0,g^*_12,g^*_u) = (ϵ/(5+ϵ), 5ϵ/[2(5+ϵ)^2 T_Λ], 5ϵ/[(5+ϵ)^2 T_Λ], -5ϵ/[2(5+ϵ)^2 T_Λ]). Taking the lowest order in ϵ and setting ϵ = 1 recovers the fixed points in fp1-fp4. We emphasize that the above fixed points still lie on the three separatrix planes g_u=0, G_1=0 and G_2=0 in the g_0-g_12-g_u space. This fact guarantees the persistence of the chiral superfluid in the low energy limit even when we include the higher order terms in ϵ in our one-loop RG study.

Struck2011 Struck J, Ölschläger C, Targat R L, Soltan-Panahi P, Eckardt A, Lewenstein M, Windpassinger P and Sengstock K 2011 Science 333 996
Struck2013 Struck J, Weinberg M, Ölschläger C, Windpassinger P, Simonet J, Sengstock K, Höppner R, Hauke P, Eckardt A, Lewenstein M and Mathey L 2013 Nat. Phys. 9 738
Parker2013 Parker C, Ha L and Chin C 2013 Nat. Phys. 9 769
Clark2016 Clark L W, Feng L and Chin C 2016 Science 354 606
Kawaguchi2012 Kawaguchi Y and Ueda M 2012 Phys. Rep. 520 253
Isacsson2005 Isacsson A and Girvin S M 2005 Phys. Rev. A 72 053604
Liu2006 Liu W and Wu C 2006 Phys. Rev. A 74 013607
Kuklov2006 Kuklov A B 2006 Phys. Rev. Lett. 97 110405
Wu2006 Wu C, Liu W, Moore J and Sarma S 2006 Phys. Rev. Lett. 97 190406
Lim2008 Lim L K, Smith C M and Hemmerich A 2008 Phys. Rev. Lett. 100 130402
Stojanovic2008 Stojanovic V M, Wu C, Liu W V and Das Sarma S 2008 Phys. Rev. Lett. 101 125301
Wu2009 Wu C 2009 Mod. Phys. Lett. B 23 1
Lewenstein2011 Lewenstein M and Liu W V 2011 Nat. Phys. 7 101
Cai2011 Cai Z and Wu C 2011 Phys. Rev. A 84 033635
Li2012 Li X, Zhang Z and Liu W V 2012 Phys. Rev. Lett. 108 175302
Martikainen2012 Martikainen J P and Larson J 2012 Phys. Rev. A 86 023611
Cai2012 Cai Z, Duan L M and Wu C 2012 Phys. Rev. A 86 051601
Liu2013 Liu B, Yu X L and Liu W M 2013 Phys. Rev. A 88 063605
Li2016 Li X and Liu W V 2016 Reports Prog. Phys. 79 116401
Olschlager2011 Ölschläger M, Wirth G and Hemmerich A 2011 Phys. Rev. Lett. 106 015302
Wirth2011 Wirth G, Ölschläger M and Hemmerich A 2011 Nat. Phys. 7 147
Soltan-Panahi2012 Soltan-Panahi P, Lühmann D S, Struck J, Windpassinger P and Sengstock K 2012 Nat. Phys. 8 71
Olschlager2013 Ölschläger M, Kock T, Wirth G, Ewerbeck A, Morais Smith C and Hemmerich A 2013 New J. Phys. 15 083041
Kock2015 Kock T, Ölschläger M, Ewerbeck A, Huang W M, Mathey L and Hemmerich A 2015 Phys. Rev. Lett. 114 115301
feynman1998statistical Feynman R P 1998 Statistical Mechanics: A Set of Lectures (New York: Avalon Publishing)
Leggett1975 Leggett A J 1975 Rev. Mod. Phys. 47 331
dobbs2000helium Dobbs R 2000 Helium Three (Oxford: Oxford University Press)
Mackenzie2003 Mackenzie A P and Maeno Y 2003 Rev. Mod. Phys. 75 657
Antonenko1994 Antonenko S A and Sokolov A I 1994 Phys. Rev. B 49 15901
Kawamura1998 Kawamura H 1998 J. Phys. Condens. Matter 10 4707
Janzen2016 Janzen P, Huang W M and Mathey L 2016 Phys. Rev. A 94 063614
Villain1977 Villain J 1977 J. Phys. C Solid State Phys. 10 1717
stoof2008ultracold Stoof H T C, Dickerscheid D B M and Gubbels K 2009 Ultracold Quantum Fields (Dordrecht: Springer Netherlands)
cardy1996scaling Cardy J 1996 Scaling and Renormalization in Statistical Physics (Cambridge: Cambridge University Press)
Wilson1972 Wilson K G and Fisher M E 1972 Phys. Rev. Lett. 28 240
Domany1977 Domany E, Mukamel D and Fisher M E 1977 Phys. Rev. B 15 5432
Balents1996 Balents L and Fisher M P A 1996 Phys. Rev. B 53 12133
Kunz1993 Kunz H and Zumbach G 1993 J. Phys. A Math. Gen 26 3121
Loison1998 Loison D and Schotte K D 1998 Eur. Phys. J. B 743 735
Itakura2003 Itakura M 2003 J. Phys. Soc. Japan 72 74
Gaunt2013 Gaunt A L, Schmidutz T F, Gotlibovych I, Smith R P and Hadzibabic Z 2013 Phys. Rev. Lett. 110 200406
Corman2014 Corman L, Chomaz L, Bienaimé T, Desbuquois R, Weitenberg C, Nascimbène S, Dalibard J and Beugnon J 2014 Phys. Rev. Lett. 113 135302
Tey2013 Tey M K, Sidorenkov L A, Guajardo E R S, Grimm R, Ku M J H, Zwierlein M W, Hou Y H, Pitaevskii L and Stringari S 2013 Phys. Rev. Lett. 110 055303
Sidorenkov2013 Sidorenkov L A, Tey M K, Grimm R, Hou Y H, Pitaevskii L and Stringari S 2013 Nature 498 78
Weimer2015 Weimer W, Morgener K, Singh V P, Siegl J, Hueck K, Luick N, Mathey L and Moritz H 2015 Phys. Rev. Lett. 114 095301
Singh2016 Singh V P, Weimer W, Morgener K, Siegl J, Hueck K, Luick N, Moritz H and Mathey L 2016 Phys. Rev. A 93 023634
Marzari2012 Marzari N, Mostofi A A, Yates J R, Souza I and Vanderbilt D 2012 Rev. Mod. Phys. 84 1419
http://arxiv.org/abs/1708.07550v2
{ "authors": [ "Junichi Okamoto", "Wen-Min Huang", "Robert Höppner", "Ludwig Mathey" ], "categories": [ "cond-mat.quant-gas" ], "primary_category": "cond-mat.quant-gas", "published": "20170824204043", "title": "Critical behavior of a chiral superfluid in a bipartite square lattice" }
The work was done when the first author was on an internship at NECLA.
Chen Luo, Rice University, [email protected]
Zhengzhang Chen, NEC Labs America, Princeton, New Jersey, [email protected]
Lu-An Tang, NEC Labs America, Princeton, New Jersey, [email protected]
Anshumali Shrivastava, Rice University, [email protected]
Zhichun Li, NEC Labs America, Princeton, New Jersey, [email protected]

Dependency graph, as a heterogeneous graph representing the intrinsic relationships between different pairs of system entities, is essential to many data analysis applications, such as root cause diagnosis, intrusion detection, etc. Given a well-trained dependency graph from a source domain and an immature dependency graph from a target domain, how can we extract the entity and dependency knowledge from the source to enhance the target? One way is to directly apply a mature dependency graph learned from a source domain to the target domain. But due to the domain variety problem, directly using the source dependency graph often cannot achieve good performance. Traditional transfer learning methods mainly focus on numerical data and are not applicable. In this paper, we propose ACRET, a knowledge transfer based model for accelerating dependency graph learning from heterogeneous categorical event streams. In particular, we first propose an entity estimation model that filters out irrelevant entities from the source domain based on entity embedding and manifold learning, utilizing the heterogeneous relations in the source domain categorical event streams. Only the entities with statistically high correlations are transferred to the target domain. On the surviving entities, we then propose a dependency construction model for constructing the unbiased dependency relationships of the target domain graph by solving a two-constraint optimization problem. The experimental results on synthetic and real-world datasets demonstrate the effectiveness and efficiency of ACRET. We also apply ACRET to a real enterprise security system for intrusion detection. Our method is able to achieve superior detection performance at least 20 days in advance with more than 70% accuracy.

Accelerating Dependency Graph Learning from Heterogeneous Categorical Event Streams via Knowledge Transfer
Chen Luo, Zhengzhang Chen, Lu-An Tang, Anshumali Shrivastava, Zhichun Li
==========================================================================================================

§ INTRODUCTION The heterogeneous categorical event data are ubiquitous. Consider system surveillance data in enterprise networks, where each data point is a system event that involves heterogeneous types of entities: time, user, source process, destination process, and so on. Mining such event data is a challenging task due to the unique characteristics of the data: (1) the exponentially large event space. For example, in a typical enterprise network, hundreds (or thousands) of hosts incessantly generate operational data. A single host normally generates more than 10,000 events per second; and (2) the data varieties and dynamics. The variety of system entity types may necessitate high-dimensional features in subsequent processing, and the event data may change dramatically over time, especially considering the heterogeneous categorical event streams <cit.>. To address the above challenges, dependency graphs <cit.> have recently attracted a growing interest.
Such dependency graphs can be applied to model a variety of systems including enterprise networks <cit.>, societies <cit.>, ecosystems <cit.>, etc. For instance, we can represent an enterprise network as a dependency graph, with nodes representing system entities of processes, files, or network sockets, and edges representing the system events between entities (e.g., a process reads a file). This enterprise system dependency graph can be applied to many forensic analysis tasks such as intrusion detection, risk analysis, and root cause diagnosis <cit.>. A social network can also be modeled as a dependency graph representing the social interactions between different users. Then, this social dependency graph can be used for user behavior analysis or abnormal user detection <cit.>. However, due to the aforementioned data characteristics, learning a mature dependency graph from heterogeneous categorical event streams often requires a long period of time. For instance, the dependency graph of an enterprise network needs to be trained for several weeks before it can be applied for intrusion detection or risk analysis, as illustrated in Fig. <ref>. Furthermore, every time the system is deployed in a new environment, we need to rebuild the entire dependency graph. This process is both time and resource consuming. Inspired by cloud services <cit.>, one way to avoid the time-consuming rebuilding process is to reuse a unified dependency graph model in different domains/environments. However, due to the domain variety, directly applying the dependency graph learned from an old domain to a new domain often cannot achieve good performance. For example, the enterprise network of an IT company (an active environment) is very different from the enterprise network of an electric company (a stable environment). Thus, the enterprise dependency graph of the IT company contains many unique system entities that cannot be found in the dependency graph of the electric company. Nevertheless, there is still a lot of room for transfer learning: directly deploying the model learned from one environment to a new environment suffers from the domain differences, and our experiment in Section <ref> demonstrates the impracticability of such direct reuse. Transfer learning has shed light on how to tackle the domain differences <cit.>. It has been successfully applied in various data mining and machine learning tasks, such as clustering and classification <cit.>. However, most of the transfer learning algorithms focus on numerical data <cit.>. When it comes to graph-structured data, there is less existing work <cit.>, not to mention dependency graphs. This motivates us to propose a novel knowledge transfer-based method for dependency graph learning. In this paper, we propose ACRET, a knowledge transfer based method for accelerating dependency graph learning from heterogeneous categorical event streams. ACRET consists of two sub-models: EEM (Entity Estimation Model) and DCM (Dependency Construction Model). Specifically, EEM first filters out irrelevant entities from the source domain based on entity embedding and manifold learning. Only the entities with statistically high correlations can be transferred to the target domain. Then, based on the reduced entities, the DCM model effectively constructs unbiased dependency relationships between different entities for the target dependency graph by solving a two-constraint optimization problem. We launch an extensive set of experiments on both synthetic and real-world data to evaluate the performance of ACRET.
The results demonstrate the effectiveness and efficiency of our proposed algorithm. We also apply ACRET to a real enterprise security system for intrusion detection. Our method is able to achieve superior detection performance at least 20 days in advance with more than 70% accuracy.

§ PRELIMINARIES AND PROBLEM STATEMENT In this section, we introduce some notations and define the problem. Heterogeneous Categorical Event. A heterogeneous categorical event e = (a_1, ..., a_m) is a record that contains m different categorical attributes, and the i-th attribute value a_i denotes an entity from the type 𝒯_i. For example, in an enterprise system (as illustrated in Fig. <ref>), a process event (e.g., a program opens a file or connects to a server) can be regarded as a heterogeneous categorical event. It contains information such as timing, type of operation, information flow directions, user, and source/destination process. By continuously monitoring/auditing the heterogeneous categorical event data (streams) generated by the physical system, one can generate the corresponding dependency graph of the system, as in <cit.>. This dependency graph is a heterogeneous graph representing the dependencies/interactions between different pairs of entities. Formally, we define the dependency graph as follows:

Dependency Graph. A dependency graph is a heterogeneous undirected weighted graph G = {V, E}, where V = {v_0, v_1, ..., v_n} is the set of heterogeneous system entities, and n is the total number of entities in the dependency graph; E = {e_0, e_1, ..., e_m} is the set of dependency relationships/edges between different entities. An undirected edge e_i(v_k, v_j) between a pair of entities v_k and v_j exists depending on whether they have a dependency (causality) relation or not, and the weight of the edge denotes the intensity of the dependency relation. For ease of discussion, we use the terms edge and dependency interchangeably in this paper.

In an enterprise system, a dependency graph can be a weighted graph between different system entities, such as processes, files, users, and Internet sockets, where the edges are the causality relations between different entities. As shown in Fig. <ref>, the enterprise security system utilizes the accumulated historical heterogeneous system data from event streams to construct the system dependency graph and update the graph periodically. The learned dependency graph is applied to forensic analysis applications such as intrusion detection, risk analysis, incident backtrack, etc. However, learning a dependency graph often requires auditing the system for a long period of time, which in turn produces an overwhelmingly large amount of system audit events. A system dependency graph needs to be trained for more than three weeks before being deployed to real-world anomaly detection systems. In addition, when we deploy the security system to a new environment, we often need to spend several more weeks retraining the dependency graph. The problems of cold-start and time-consuming training reflect a great demand for an automated tool for effectively transferring dependency graphs between different domains. Motivated by this, this paper focuses on accelerating dependency graph learning via knowledge transfer.
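To make these definitions concrete before stating the problem formally, the following minimal Python sketch accumulates a stream of heterogeneous categorical events into a weighted dependency graph. The event schema, the (type, name) entity encoding, and the count-based edge intensity are our own illustrative assumptions, not the exact construction used in the system described above.

from collections import defaultdict

def build_dependency_graph(events):
    """Aggregate heterogeneous categorical events into an undirected,
    weighted dependency graph over typed entities."""
    entities = set()
    edge_weight = defaultdict(int)      # undirected edge -> intensity
    for e in events:
        u, v = e['src'], e['dst']       # e.g. ('process', 'sshd')
        entities.update([u, v])
        key = tuple(sorted((u, v)))     # order-free key: undirected edge
        edge_weight[key] += 1           # intensity = number of events
    return entities, dict(edge_weight)

# Toy usage with three entity types (process, socket, file):
stream = [
    {'src': ('process', 'sshd'), 'dst': ('socket', '10.0.0.5:22')},
    {'src': ('process', 'sshd'), 'dst': ('file', '/var/log/auth.log')},
    {'src': ('process', 'sshd'), 'dst': ('socket', '10.0.0.5:22')},
]
V, E = build_dependency_graph(stream)   # 3 entities, 2 weighted edges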
Based on the definitions described above, we formally define our problem as follows:

Knowledge Transfer for Dependency Graph Learning. Given two domains: a source domain 𝒟_S and a target domain 𝒟_T. In the source domain 𝒟_S, we have a well-trained dependency graph G_S generated from the heterogeneous categorical event streams. In the target domain 𝒟_T, we have a small incomplete dependency graph G_T trained for a short period of time. The task of knowledge transfer for dependency graph learning is to use G_S to help construct a mature dependency graph Ĝ_T in the domain 𝒟_T.

There are two major assumptions for this problem: (1) The event streams in the source domain and target domain are generated by the same physical system; (2) The entity size of the source dependency graph G_S should be larger than the size of the intersection graph G_S ∩ G_T, because transferring knowledge from a less informative dependency graph to a more informative one is unreasonable. For example, 𝒟_S can be an enterprise system environment from an IT company, and 𝒟_T can be another enterprise system environment from an electric company. However, there is no existing approach that can be directly used to solve this problem. This motivates us to design a new approach to effectively and efficiently transferring dependency graphs between different domains. In the next section, we will introduce the details of our proposed transferring framework.

§ THE ACRET MODEL In this section, we describe the ACRET model in detail. As introduced in Section <ref>, a dependency graph consists of two basic elements: (1) entities and (2) dependencies between entities. To learn a mature dependency graph Ĝ_T, intuitively, we would like to leverage the entity and dependency information from the well-trained source dependency graph G_S to help complete the original small dependency graph G_T. One naive way is to directly transfer all the entities and dependencies from the source domain to the target domain. However, due to the domain difference, it is likely that there are many entities and their corresponding dependencies that appear in the source domain but not in the target domain. Thus, one key challenge in our problem is how to identify the domain-specific/irrelevant entities from the source dependency graph. After removing the irrelevant entities, another challenge is how to construct the dependencies between the transferred entities by adapting the domain difference and following the same dependency structure as in G_T. To address these two key challenges, we propose a knowledge transfer algorithm with two sub-models: EEM (Entity Estimation Model) and DCM (Dependency Construction Model), as illustrated in Fig. <ref>. We first introduce these two sub-models separately in detail, and then combine them into a uniform algorithm.

§.§ EEM: Embedding-based Entity Estimation Model For the first sub-model, the Entity Estimation Model, our goal is to filter out the entities in the source dependency graph G_S that are irrelevant to the target domain.
To achieve this, we need to deal with two main challenges: (1) the lack of intrinsic correlation measures among categorical entities, and (2) the heterogeneous relations among different entities in the dependency graph.

To overcome the lack of intrinsic correlation measures among categorical entities, we embed entities into a common latent space where their semantics can be preserved. More specifically, each entity, such as a user or a process in a computer system, is represented as a d-dimensional vector that is automatically learned from the data. In the embedding space, the correlation of entities can be naturally computed by distance/similarity measures in the space, such as Euclidean distance, vector dot product, and so on. Compared with other distance/similarity metrics defined on sets, such as Jaccard similarity, the embedding method is more flexible and has nice properties such as transitivity <cit.>.

To address the challenge of heterogeneous relations among different entities, we use the meta-path proposed in <cit.> to model the heterogeneous relations. For example, in a computer system, a meta-path can be a "Process-File-Process" or a "File-Process-Internet Socket": "Process-File-Process" denotes the relationship of two processes loading the same file, and "File-Process-Internet Socket" denotes the relationship of a file being loaded by a process that opened an Internet socket. The potential meta-paths induced from the heterogeneous network G_S can be infinite, but not every one is relevant and useful for the specific task of interest. There are some works <cit.> on automatically selecting the meta-paths for specific tasks. For more details about meta-path selection methods, please refer to <cit.>.

Given a set of meta-paths P = {p_1, p_2, ...}, where p_i denotes the i-th meta-path and |P| denotes the number of meta-paths, we can construct |P| graphs G_p_i by each time extracting only the corresponding meta-path p_i from the dependency graph <cit.>. Let u_S denote the vector representation of the entities in G_S. Then, we model the relationship between two entities u_S(i) and u_S(j) as: ‖u_S(i) - u_S(j)‖_F^2 ≈ S_G(i,j). In the above, S_G is a weighted average of all the similarity matrices S_p_i: S_G = ∑_i=1^|P| w_i S_p_i, where the w_i's are non-negative coefficients, and S_p_i is the similarity matrix constructed by calculating the pairwise shortest paths between entities in A_p_i. Here, A_p_i is the adjacency matrix of the dependency graph G_p_i. By using the shortest path in the graph, one can capture the long term relationship between different entities <cit.>. Putting Eq. <ref> into Eq. <ref>, we have: ‖u_S(i) - u_S(j)‖_F^2 ≈ ∑_i=1^|P| w_i S_p_i, where ‖·‖_F^2 is the squared Frobenius norm <cit.>.

Then, the objective function of the EEM model is: ℒ_1^(u_S, W) = ∑_i,j^n ( ‖u_S(i) - u_S(j)‖_F^2 - S_G )^θ + Ω(u_S, W), where W = {w_1, w_2, ..., w_|P|}, and Ω(u_S, W) = λ‖u_S‖ + λ‖W‖ is the regularization term <cit.>, which prevents the model from over-fitting. λ is the trade-off factor of the regularization term. In practice, we can choose θ as 1 or 2, which bears resemblance to the Hamming distance and Euclidean distance, respectively. Putting everything together, we get: ℒ_1^(u_S, W) = ∑_i,j^n ( ‖u_S(i) - u_S(j)‖_F^2 - ∑_i=1^|P| w_i S_p_i )^θ + λ‖u_S‖ + λ‖W‖. Then, the optimized values {u_S, W}^opt can be obtained by: {u_S, W}^opt = arg min_u_S, W ℒ_1^(u_S, W).
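As a small illustration of how the combined matrix S_G can be assembled in practice, the sketch below computes the pairwise shortest-path matrices S_p_i with SciPy and averages them with the weights w_i. This is our own simplified reading: the construction of the per-meta-path graphs G_p_i and the handling of disconnected entity pairs (infinite distances) are left out.

import numpy as np
from scipy.sparse.csgraph import shortest_path

def combined_similarity(adjacency_mats, weights):
    """adjacency_mats: list of |P| adjacency matrices A_{p_i} (n x n);
    weights: non-negative coefficients w_i. The shortest-path matrices
    play the role of the S_{p_i} above."""
    S_list = [shortest_path(A, directed=False) for A in adjacency_mats]
    S_G = sum(w * S for w, S in zip(weights, S_list))
    return S_G, S_list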
§.§.§ Inference Method The objective function in Eq. <ref> contains two sets of parameters: (1) u_S, and (2) W. We therefore propose a two-step iterative method for optimizing ℒ_1^(u_S, W), in which the entity vector matrix u_S and the meta-path weights W mutually enhance each other. In the first step, we fix the weight vector W and learn the best entity vector matrix u_S. In the second step, we fix the entity vector matrix u_S and learn the best weight vector W.

Fix W and learn u_S: when we fix W, the problem is reduced to ‖u_S(i) - u_S(j)‖_F^2 ≈ S_G(i,j), where S_G is a constant similarity matrix. The optimization process then becomes a traditional manifold learning problem. Fortunately, we have a closed-form solution to this problem, via so-called multi-dimensional scaling <cit.>. To obtain such an embedding, we compute the eigenvalue decomposition of the following matrix: -1/2 H S_G H = U Λ U^T, where H is the double centering matrix, U has the eigenvectors as columns, and Λ is a diagonal matrix of the eigenvalues. Then, the embedding u_S can be chosen as: u_S = U_k √(Λ_k).

Fix u_S and learn W: when fixing u_S, the problem is reduced to: ℒ_1^W = ∑_i,j^n ( ‖u_S(i) - u_S(j)‖_F^2 - ∑_i=1^|P| w_i S_p_i )^θ + λ‖u_S‖ + λ‖W‖ = ∑_i,j^n ( C_1 - ∑_i=1^|P| w_i S_p_i )^θ + λ‖W‖ + C_2, where C_1 = ‖u_S(i) - u_S(j)‖_F^2 is a constant matrix, and C_2 = λ‖u_S‖ is also a constant. Then, this function becomes a linear regression, so we also have a closed-form solution for W: W = (S_G^T S_G)^{-1} S_G C_1.

After we get the embedding vectors u_S, the relevance matrix ℝ between different entities can be obtained as: ℝ = u_S u_S^T. One can use a user-defined threshold to select the entities highly correlated with the target domain for transferring, but a user-defined threshold often suffers from the lack of domain knowledge. Hence, we introduce a hypothesis-test based method for automatically thresholding the selection of the entities. For each entity in G_T, we first normalize all the scores by: ℝ(i,:)_norm = (ℝ(i,:) - μ)/δ, where μ is the average value of ℝ(i,:), and δ is the standard deviation of ℝ(i,:). These standardized scores can be approximated with a Gaussian distribution. Then, the threshold will be 1.96 for P=0.025 (or 2.58 for P=0.005) <cit.>. By using this threshold, one can filter out all the statistically irrelevant entities from the source domain, and transfer the highly correlated entities to the target domain. By combining the transferred entities and the original target domain dependency graph G_T, we get the augmented graph Ĝ_T, as shown in Fig. <ref>. Then, the next step is to construct the missing dependencies in Ĝ_T.
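To make the alternating scheme concrete, the following NumPy sketch implements one reading of the loop for θ = 2: the double centering is applied to squared distances as in classical multi-dimensional scaling, the regression step solves for all w_i jointly by ordinary least squares instead of the closed form above, and regularization is omitted. It is a simplified sketch, not the exact procedure of the paper.

import numpy as np

def eem_inference(S_list, k=16, iters=10):
    """Two-step alternating optimization for EEM (simplified sketch)."""
    n = S_list[0].shape[0]
    w = np.ones(len(S_list)) / len(S_list)      # initial meta-path weights
    for _ in range(iters):
        # Step 1: fix W, learn u_S via classical MDS on S_G.
        S_G = sum(wi * Si for wi, Si in zip(w, S_list))
        H = np.eye(n) - np.ones((n, n)) / n     # double centering matrix
        B = -0.5 * H @ (S_G ** 2) @ H
        evals, evecs = np.linalg.eigh(B)
        top = np.argsort(evals)[::-1][:k]       # top-k eigenpairs
        u_S = evecs[:, top] * np.sqrt(np.clip(evals[top], 0, None))
        # Step 2: fix u_S, learn W by least squares on pairwise distances.
        C1 = ((u_S[:, None, :] - u_S[None, :, :]) ** 2).sum(-1)
        X = np.stack([Si.ravel() for Si in S_list], axis=1)
        w, *_ = np.linalg.lstsq(X, C1.ravel(), rcond=None)
        w = np.clip(w, 0.0, None)               # keep coefficients non-negative
    return u_S, w

def transfer_mask(u_S, z=1.96):
    """Hypothesis-test thresholding on the relevance matrix R = u_S u_S^T."""
    R = u_S @ u_S.T
    Z = (R - R.mean(1, keepdims=True)) / R.std(1, keepdims=True)
    return Z > z                                # statistically relevant pairs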
§.§ DCM: Dependency Construction Model To construct the missing dependencies/edges in Ĝ_T, two constraints need to be considered:

* Smoothness Constraint: The predicted dependency structure in Ĝ_T needs to be close to the dependency structure of the original graph G_T. The intuition behind this constraint is that the learned dependencies should remain more or less intact in Ĝ_T.

* Consistency Constraint: The inconsistency between Ĝ_T and Ĝ_S should be similar to the inconsistency between G_T and G_S', where G_S' and Ĝ_S are the sub-graphs of G_S that have the same entity sets as G_T and Ĝ_T, respectively. This constraint guarantees that the target graph learned by our model keeps the original domain difference with respect to the source graph.

Before we model the above two constraints, we first need a measure to evaluate the inconsistency between different domains. In this work, we propose a novel metric named the dynamic factor F(G_S,G_T) between two dependency graphs G_S and G_T from two different domains: F(G_S,G_T) = ‖A_S - A_T‖ / [|G_S|(|G_S| - 1)/2] = 2‖A_S - A_T‖ / [n_S(n_S-1)], where n_S = |G_S| is the number of entities in G_S, A_S and A_T denote the adjacency matrices of G_S and G_T, respectively, and n_S(n_S-1)/2 denotes the number of edges of a fully connected graph with n_S entities <cit.>. Next, we introduce the Dependency Construction Model in detail.

§.§.§ Modeling Smoothness Constraint We first model the smoothness constraint as follows: ℒ_2.1^u_T = ∑_i,j^n_S ( u_T(i) u_T(j)^T - A_T(i,j) )^2 + λ‖u_T‖ = ‖u_T u_T^T - A_T‖_F^2 + Ω(u_T), where u_T is the vector representation of the entities in Ĝ_T, and Ω(u_T) = λ‖u_T‖ is the regularization term.

§.§.§ Modeling Consistency Constraint We then model the consistency constraint as follows: ℒ_2.2^u_T = ‖ F(u_T u_T^T, A_S) - F(A_S, A_T) ‖_F^2 + Ω(u_T), where F(·,·) is the dynamic factor as defined above. Then, putting Eq. <ref> and Ω(u_T) into Eq. <ref>, we get: ℒ_2.2^u_T = ‖ 2(u_T u_T^T - A_S)/[n_S(n_S-1)] - C_3 ‖_F^2 + Ω(u_T), where C_3 = F(G_S, G_T).

§.§.§ Unified Model Having proposed the modeling approaches in Section <ref> and <ref>, we intend to put the two constraints together. The unified model for dependency construction is proposed as follows: ℒ_2^u_T = μ ℒ_2.1^u_T + (1-μ) ℒ_2.2^u_T = μ ‖u_T u_T^T - A_T‖_F^2 + (1-μ) ‖ 2(u_T u_T^T - A_S)/[n_S(n_S-1)] - C_3 ‖_F^2 + Ω(u_T). In the above, the symbols have the same meanings as introduced in previous sections. The first term of the model incorporates the Smoothness Constraint component, which keeps u_T close to the target domain knowledge existing in G_T. The second term considers the Consistency Constraint, that is, the inconsistency between Ĝ_T and Ĝ_S should be similar to the inconsistency between G_T and G_S'. μ and λ are important parameters which capture the importance of each term, and we will discuss these parameters in Section <ref>. To optimize the model in Eq. <ref>, we use the stochastic gradient descent <cit.> method. The derivative with respect to u_T is given as: 1/2 ∂ℒ_2^u_T/∂u_T = μ u_T (u_T u_T^T - A_T) + (1-μ) u_T [ 2(u_T u_T^T - A_S)/[n_S(n_S-1)] - C_3 ] + λ u_T.
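A minimal gradient-descent sketch of the construction step is given below, under several simplifying assumptions of ours: full-batch descent instead of SGD, constant factors absorbed into the learning rate, and no thresholding of the recovered edge weights. Here A_T and A_S are the adjacency matrices restricted to the entities of Ĝ_T, and C3 is the scalar dynamic factor measured on the shared part of the two domains.

import numpy as np

def dcm_construct(A_T, A_S, C3, mu, lam=1e-3, lr=1e-3, iters=500, k=16, seed=0):
    """Gradient descent on the unified DCM objective (simplified sketch)."""
    n = A_T.shape[0]
    u = 0.01 * np.random.default_rng(seed).standard_normal((n, k))
    c = 2.0 / (n * (n - 1))                   # normalization of the dynamic factor
    for _ in range(iters):
        R = u @ u.T                           # current dependency estimate
        g_smooth = (R - A_T) @ u              # gradient of the smoothness term
        g_consist = (c * (R - A_S) - C3) @ u  # gradient of the consistency term
        u -= lr * (mu * g_smooth + (1.0 - mu) * g_consist + lam * u)
    return u @ u.T                            # predicted dependency intensities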
§.§ Overall Algorithm The overall algorithm is summarized as Algorithm <ref>. This algorithm implements the methods we have introduced in the above subsections. In the algorithm, lines 5 to 11 implement the Entity Estimation Model, and lines 13 to 16 implement the Dependency Construction Model. In the rest of this section, we discuss some practical issues of the ACRET algorithm.

§.§.§ Setting Parameters There are two parameters, λ and μ, in our model. For λ, as in <cit.>, it is assigned manually based on experiments and experience. Therefore, we only discuss the assignment of the parameter μ in our model. For μ, when a large number of entities are transferred to the target domain, a large μ can improve the transferring result, because more information needs to be added from the source domain. On the other hand, when only a small number of entities are transferred to the target domain, a larger μ will bias the result. Therefore, the value of μ depends on how many entities are transferred from the source domain to the target domain. In this sense, we can use the proportion of the transferred entities in Ĝ_T to calculate μ. Given the entity size of Ĝ_T as |Ĝ_T| and the entity size of G_T as |G_T|, μ can be calculated as: μ = (|Ĝ_T| - |G_T|)/|Ĝ_T|. For example, if |Ĝ_T| = 900 and |G_T| = 600, then μ ≈ 0.33. The experimental results in Section <ref> demonstrate the effectiveness of the proposed parameter selection method.

§.§.§ Complexity Analysis As shown in Algorithm <ref>, the time for learning our model is dominated by computing the objective functions and their corresponding gradients against the feature vectors. For the Entity Estimation Model, the time complexity of computing u_S in Eq. <ref> is bounded by O(d_1 n), where n is the number of entities in G_S, and d_1 is the dimension of the vector space of u_S. The time complexity for computing W is also bounded by O(d_1 n). So, supposing the number of training iterations for EEM is t_1, the overall complexity of the EEM model is O(t_1 d_1 n). For the Dependency Construction Model, the time complexity of computing the gradients of ℒ_2 against u_T is O(t_2 d_2 n), where t_2 is the number of iterations and d_2 is the dimensionality of the feature vector. As shown in our experiments (see Section <ref>), t_1, t_2, d_1, and d_2 are all small numbers, so we can regard them as a constant, say C. The overall complexity of our method is thus O(Cn), which is linear in the size of the entity set. This makes our algorithm practical for large-scale datasets.

§ EXPERIMENTS In this section, we evaluate ACRET using synthetic data and real system surveillance data collected in enterprise networks.

§.§ Comparing Methods We compare ACRET with the following methods: NT: This method directly uses the original small target dependency graph without knowledge transfer. In other words, the estimated target dependency graph Ĝ_T = G_T. DT: This method directly combines the source dependency graph and the original target dependency graph. In other words, the estimated target dependency graph Ĝ_T = G_S + G_T. RW-DCM: This is a modified version of the ACRET method. Instead of using the proposed EEM model to perform entity estimation, this method uses random walk to evaluate the correlations between entities and perform entity estimation. Random walk is a widely-used method for relevance search in a graph <cit.>. For more details about random walk, please refer to <cit.>. EEM-CMF: This is another modified version of the ACRET method. In this method, we replace the DCM model with collective matrix factorization <cit.>. Collective matrix factorization has been applied to link prediction in multiple domains <cit.>. For details about collective matrix factorization, please refer to <cit.>.

§.§ Evaluation Metrics Since all the methods listed above select entities and dependencies via hypothesis tests, similar to <cit.>, we use the F1-score to evaluate the hypothesis-test accuracy of all the methods. F1-score is the harmonic mean of precision and recall. In our experiment, the final F1-score between the estimated dependency graph Ĝ_T and the ground-truth dependency graph G_T^* is calculated by averaging the entity F1-score and the dependency/edge F1-score.
To calculate the precision (recall) of both entities and edges, we compare the estimated entity (edge) set with the ground-truth entity (edge) set. Then, precision and recall can be calculated as follows: Precision = N_C/N_E, Recall = N_C/N_T, where N_C is the number of correctly estimated entities (edges), N_E is the total number of estimated entities (edges), and N_T is the number of ground-truth entities (edges).

§.§ Synthetic Experiments We first evaluate ACRET on synthetic graph datasets to have a more controlled setting for assessing algorithmic performance. We control three aspects of the synthetic data to stress test the performance of our ACRET method:

* Graph size is defined as the number of entities in a dependency graph. Here, we use |G_S| to denote the source domain graph size and |G_T^*| to denote the target one.

* Dynamic factor, denoted as F, has the same definition as in Section <ref>.

* Graph maturity score, denoted as M, is defined as the percentage of entities/edges of the ground-truth graph G_T^* that are used for constructing the original small graph G_T. Here, the graph maturity score is used to simulate the period of learning time of G_T needed to reach maturity in the real system.

Then, given |G_S|, |G_T^*|, F, and M, we generate the synthetic data as follows: We first randomly generate an undirected graph as the source dependency graph G_S based on the value of |G_S| <cit.>. Then, we randomly assign one of three different labels to each entity. Due to space limitations, we only show the results with three labels, but similar results have been achieved in graphs with more than three labels. We further construct the ground-truth target dependency graph G_T^* by randomly adding/deleting F = d% of the edges and deleting |G_S| - |G_T^*| entities from G_S. Finally, we randomly select M = c% of the entities/edges from G_T^* to form G_T.
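For reference, here is a compact Python sketch of this generation recipe together with the set-based F1 computation used above. The Erdős–Rényi edge density, the deletion-only perturbation, and the omission of entity labels are our own simplifications.

import numpy as np

def f1(est, truth):
    """F1-score between an estimated and a ground-truth set."""
    n_c = len(est & truth)
    if n_c == 0:
        return 0.0
    p, r = n_c / len(est), n_c / len(truth)
    return 2 * p * r / (p + r)

def make_synthetic(n_s, n_t, F, M, p_edge=0.01, seed=0):
    """Return (edges of G_S, edges of ground-truth G_T^*, edges of G_T)."""
    rng = np.random.default_rng(seed)
    A = rng.random((n_s, n_s)) < p_edge
    G_S = {(i, j) for i in range(n_s) for j in range(i + 1, n_s) if A[i, j]}
    keep = set(rng.choice(n_s, size=n_t, replace=False).tolist())
    G_T_star = {e for e in G_S if e[0] in keep and e[1] in keep}
    # Perturb F% of the edges (deletion only, for brevity): domain difference.
    G_T_star = {e for e in G_T_star if rng.random() > F}
    # Keep M% of the edges as the immature target graph G_T.
    G_T = {e for e in G_T_star if rng.random() < M}
    return G_S, G_T_star, G_T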
§.§.§ How Does ACRET's Performance Scale with Graph Size? We first explore how ACRET's performance changes with the graph sizes |G_S| and |G_T^*|. Here, we fix the maturity score to M=50%, the dynamic factor to F=10%, and the target domain dependency graph size to |G_T^*|=0.9K. Then, we increase the source graph size |G_S| from 0.9K to 1.4K. From Fig. <ref>, we observe that with the increase of the size difference |G_S| - |G_T^*|, the performances of DT and RW-DCM get worse. This is due to the poor ability of DT and RW-DCM to extract useful knowledge from the source domain. In contrast, the performance of ACRET and EEM-CMF increases with the size difference. This demonstrates the great capability of the EEM model for entity knowledge extraction. In sum, compared with all other methods, our ACRET method achieves the best performance.

§.§.§ How Does ACRET's Performance Scale with the Domain Dynamic Factor? We now vary the dynamic factor F to understand its impact on ACRET's performance. Here, the graph maturity score is set to M=50%, and the two domain sizes are set to |G_S|=1.2K and |G_T^*|=0.6K, respectively. Fig. <ref> shows that the performances of all the methods go down with the increase of the dynamic factor. This is expected, because transferring the dependency graph from a very different domain will not work well. On the other hand, the performances of ACRET, RW-DCM, and EEM-CMF decrease only slightly with the increase of the dynamic factor. Since RW-DCM and EEM-CMF are variants of the ACRET method, this demonstrates that the two sub-models of the ACRET method are both robust to large dynamic factors.

§.§.§ How Does ACRET's Performance Scale with Graph Maturity? Third, we explore how the graph maturity score M impacts the performance of ACRET. Here, the dynamic factor is fixed to F=0.2. The graph sizes are set to |G_S| = 1.2K and |G_T^*|=0.6K. Fig. <ref> shows that with the increase of M, the performances of all the methods get better. The reason is straightforward: as the maturity score increases, the challenge posed by the domain difference becomes smaller for all the methods. In addition, our ACRET and its variants RW-DCM and EEM-CMF perform much better than DT and NT. This demonstrates the great ability of the sub-models of ACRET for knowledge transfer. Furthermore, ACRET still achieves the best performance.

§.§ Real-World Experiments Two real-world system monitoring datasets are used in this experiment. The data is collected from an enterprise network system composed of 47 Linux machines and 123 Windows machines from two departments, in a time span of 14 consecutive days. In both datasets, we collect two types of system events: (1) communications between processes, and (2) system activity of processes sending or receiving Internet connections to/from other machines at destination ports. Three different types of system entities are considered: (1) processes, (2) Unix domain sockets, and (3) Internet sockets. The sheer size of the Windows dataset is around 7.4 Gigabytes, and the Linux dataset is around 73.5 Gigabytes. Both the Windows and Linux datasets are split into a source domain and a target domain according to the department name. The detailed statistics of the two datasets are shown in Table <ref>. In this experiment, we construct one target domain dependency graph G_T per day by increasing the learning time daily. The final graph is the one learned for 14 days. The result is shown in Fig. <ref>. From Fig. <ref>, we observe that for both the Windows and Linux datasets, the performances of all the algorithms improve with the increase of the training time. On the other hand, compared with all the other methods, ACRET achieves the best performance on both the Windows and Linux datasets. In addition, our proposed ACRET algorithm can make the dependency graph deployable in less than four days, instead of the two weeks or longer required by directly learning on the target domain.

§.§ Convergence Analysis As described in Section <ref>, the performance bottleneck of the ACRET model is the learning process of the two sub-models: EEM (Entity Estimation Model) and DCM (Dependency Construction Model). In this section, we report the convergence speed of our approach. We use both synthetic and real-world data to validate the model convergence speed. For the synthetic data, we choose the one with dynamic factor F = 0.2, dependency graph sizes |G_S|=1.2K and |G_T^*|=0.6K, and graph maturity 50%. For the two real-world datasets, we fix the target dependency graph learning time to 4 days. The convergence rates of the different models are shown in Fig. <ref>. From Fig. <ref>, we can see that on all three datasets, ACRET converges very fast (i.e., within less than 10 iterations). This makes our model applicable to real-world large-scale systems.

§.§ Parameter Study In this section, we study the impact of the parameter μ in Eq. <ref>. We use the same datasets as in Section <ref>. The result is shown in Fig. <ref>. As shown in Fig. <ref>, when the value of μ is too small or too large, the results are not good, because μ controls the balance between the source domain information and the target domain information, and extreme values of μ bias the result. On the other hand, the μ value calculated by Eq.
<ref> is 0.23 for the synthetic dataset, 0.36 for the Windows dataset, and 0.46 for the Linux dataset, and Fig. <ref> shows that the best results appear just around these three values. This demonstrates that our proposed method for setting the μ value is very effective, and successfully addresses the parameter pre-assignment issue.

§.§ Case Study on Intrusion Detection As aforementioned, the dependency graph is essential to many forensic analysis applications like root cause diagnosis and risk analysis. In this section, we evaluate ACRET's performance in a real commercial enterprise security system (see Fig. <ref>) for intrusion detection. In this case, the dependency graph, which represents the normal profile of the enterprise system, is the core analysis model for the offshore intrusion detection engine. It is built from the normal system process event streams (see Section <ref> for the data description) during the training period. The same security system has been deployed in two companies: one Japanese electric company and one US IT company. We obtain one dependency graph from the IT company after 30 days' training, and two dependency graphs from the electric company after 3 and 30 days' training, respectively. To achieve good intrusion detection results in the electric company with only three days' training, ACRET is applied to accelerate the dependency graph learning process by leveraging the well-trained dependency graph from the IT company to complete the 3 days' immature graph from the electric company. In the one-day testing period, we try 10 different types of attacks <cit.>, including Snowden attack, APT attack, botnet attack, sniffer attack, etc., which resulted in 30 ground-truth alerts. All other alerts reported during the testing period are considered as false positives. We omit and anonymize some details for confidentiality reasons. Table <ref> shows the intrusion detection results in the electric company using the dependency graphs generated by the different transfer learning methods and by the 30 days' training in the electric company. From the results, we can clearly see that ACRET outperforms all the other transfer learning methods by at least 18% in precision and 13% in recall. On the other hand, the performance of the dependency graph (3 days' model) accelerated by ACRET is very close to that of the ground truth model (30 days' model). This means that, by using ACRET, we can achieve similar performance in one-tenth of the training time, which is of great significance to mission critical environments.
§ RELATED WORK In this section, we briefly introduce some related research efforts.

§.§ Transfer Learning Transfer learning has been widely studied in recent years <cit.>. Most of the traditional transfer learning methods focus on numerical data <cit.>; when it comes to graph (network) structured data, there is less existing work. One line of transfer learning research related to our work is transfer learning on graph (network) structured data. In <cit.>, the authors presented TrGraph, a novel transfer learning framework for network node classification. TrGraph leverages information from the auxiliary source domain to help the classification process of the target domain. In one of their earlier works, a similar approach was proposed <cit.> to discover common latent structure features as useful knowledge to facilitate collective classification in the target network. In <cit.>, the authors proposed a framework that propagates the label information from the source domain to the target domain via an example-feature-example tripartite graph. Transfer learning has also been applied to deep neural network structures. In <cit.>, the authors introduced Net2Net, a technique for rapidly transferring the information stored in one neural net into another. Net2Net utilizes function-preserving transformations to transfer knowledge between neural networks. Different from the existing methods, we aim to expedite the dependency graph learning process through knowledge transfer; none of the existing methods address knowledge transfer for reconstructing dependency graphs.

§.§ Link Prediction and Relevance Search Graph link prediction is a well-studied research topic <cit.>, and some link prediction works relate to our paper. In <cit.>, Ye et al. presented a transfer learning algorithm to address the edge sign prediction problem in signed social networks. Because edge instances are not associated with a pre-defined feature vector, this work learns the common latent topological features shared by the target and source networks, and then adopts an AdaBoost-like transfer learning algorithm with instance weighting to train a classifier. Collective matrix factorization <cit.> is another popular technique that can be applied to detect missing links by combining the source domain and target domain graphs. However, none of the existing link prediction methods can deal with the dynamics between the source domain and the target domain, as introduced in our problem. Finding relevant nodes or similarity search in graphs is also related to our work.
Many different similarity metrics have been proposed, such as the Jaccard coefficient, cosine similarity, the Pearson correlation coefficient <cit.>, and random walks <cit.>. However, none of these similarity measures considers the multiple relations existing in the data. Recent advances in heterogeneous information networks <cit.> have offered several similarity measures for heterogeneous relations, such as meta-path and relation path <cit.>. However, these methods cannot deal with multi-domain knowledge.

§ CONCLUSION In this paper, we investigate the problem of transfer learning on dependency graphs. Dependency graphs capture the intrinsic relationships between different pairs of system entities of physical systems, and a well-trained dependency graph can be used for many data analysis applications such as anomaly detection, system behavior analysis, and recommendation; however, learning a dependency graph is a time-consuming process. To speed up this process, and different from traditional methods that mainly focus on numerical data, we propose ACRET, a two-step approach for accelerating dependency graph learning from heterogeneous categorical event streams. By leveraging entity embedding and constrained optimization techniques, ACRET can effectively extract useful knowledge (e.g., entity and dependency relations) from the source domain and transfer it to the target dependency graph. ACRET can also adaptively learn the dynamic factor between two domains and construct the target dependency graph accordingly. The experimental results on synthetic datasets and two real datasets demonstrate the effectiveness and efficiency of ACRET. We also deployed ACRET in a real enterprise security system for intrusion detection, where our method is able to achieve superior detection performance at least 20 days in advance with more than 70% accuracy.
http://arxiv.org/abs/1708.07867v1
{ "authors": [ "Chen Luo", "Zhengzhang Chen", "Lu-An Tang", "Anshumali Shrivastava", "Zhichun Li" ], "categories": [ "cs.AI" ], "primary_category": "cs.AI", "published": "20170825192427", "title": "Accelerating Dependency Graph Learning from Heterogeneous Categorical Event Streams via Knowledge Transfer" }
Coherent curvature radiation and FRB, Ghisellini & Locatelli
^1INAF – Osservatorio Astronomico di Brera, Via Bianchi 46, I–23807 Merate, Italy
^2Univ. di Milano Bicocca, Dip. di Fisica G. Occhialini, Piazza della Scienza 3, I–20126 Milano, Italy
^3Univ. di Bologna, Via Zamboni 33, I–40126 Bologna, Italy

Fast radio bursts are extragalactic radio transient events lasting a few milliseconds with a ∼Jy flux at ∼1 GHz. We propose that these properties suggest a neutron star progenitor, and focus on coherent curvature radiation as the radiation mechanism. We study for which sets of parameters the emission can fulfil the observational constraints. Even if the emission is coherent, we find that self–absorption limits the produced luminosities at low radio frequencies, that coherence favours steep optically thin spectra, and that an efficient re–acceleration process is needed to balance the dramatic energy losses of the emitting particles. Furthermore, the magnetic geometry must have a high degree of order to obtain coherent curvature emission. Particles emit photons along their velocity vectors, thereby greatly reducing the inverse Compton mechanism. In this case we predict that fast radio bursts emit most of their luminosity in the radio band and have no strong counterpart in any other frequency band.

Coherent curvature radiation and fast radio bursts
Gabriele Ghisellini^1 (E–mail: [email protected]) and Nicola Locatelli^1,2,3
Accepted ?. Received ?; in original form ?.
===========================================================================================================

§ INTRODUCTION Fast radio bursts (FRBs) are ultrafast radio transients, typically a millisecond in duration, with a flux at the Jy level at ∼1 GHz. Their extragalactic origin is suggested by the measured dispersion measure (DM) exceeding the Galactic value. Recently, FRB 121102 has been associated with a galaxy at redshift z=0.19, roughly confirming the distance estimated with the observed DM (Chatterjee et al. 2017; Tendulkar et al. 2017). The extragalactic nature of FRBs implies luminosities around 10^43 erg s^-1 and energetics of the order of 10^40 erg. The large flux, short duration, and cosmological distances also imply a huge brightness temperature, of the order of T_B ∼ 10^34–10^37 K. This in turn requires a coherent radiation process. Many models have already been proposed to explain the origin of FRBs[There is a strict analogy with the pre–Swift era of gamma ray bursts, when the uncertainty about their origin allowed hundreds of proposed scenarios]. Before the discovery of the host galaxy of FRB 121102, even Galactic scenarios were believed to be feasible, although by a minority of the scientific community (e.g. Loeb, Shvartzvald & Maoz 2014; Maoz et al. 2015). Now this idea is disfavoured, but not completely discarded, since the repeating behaviour of this FRB remains unique. This makes the existence of two classes of FRBs still possible, in analogy with the early times of gamma ray bursts and soft gamma ray repeaters. Progenitor theories of extragalactic FRBs include the merging of compact objects (neutron stars: Totani 2013; white dwarfs: Kashiyama et al. 2013), flares from magnetars and soft gamma–ray repeaters (Popov & Postnov 2010; Thornton et al.
2013; Lyubarsky 2014; Beloborodov 2017), giant pulses from pulsars (Cordes & Wasserman 2015), the collapse of supramassive neutron stars (Zhang 2014; Falcke & Rezzolla 2014), and dark matter induced collapse of neutron stars (Fuller & Ott 2015). There are even exotic ideas, such as FRBs explained by the propulsion of extragalactic sails by alien civilizations (Lingam & Loeb 2017) or as the result of lightning in pulsars (Katz 2017).

A huge brightness temperature is probably the most important property of FRBs. There are two mechanisms that can result in coherence: the first is a maser, and the second is bunches of particles contained in a region of a wavelength size, accelerated synchronously in such a way that their electric fields add up in phase. In this second case the produced radiation depends on the square of the number of particles in the bunch times the number of bunches. In the maser case, particles are not required to be contained in a small volume, since the radiation is produced by stimulated emission and is, by construction, emitted in phase and in the same direction as the incoming photons. There are two kinds of masers. The usual type uses different, discrete energy levels, of which one is metastable. Free radiative transition from this level is somewhat inhibited, allowing stimulated emission to be effective. This means that a maser requires a population inversion (high energy levels more populated than low energy levels). The second type of maser is associated with free electrons with no discrete energy levels, and there is no need for population inversion. This type of maser occurs when a photon hν can trigger stimulated emission with a greater probability than true absorption. This occurs rarely, but it does occur in some special configurations, such as when electrons of the same pitch angle and energy emit, by synchrotron, some photons outside the beaming cone of angle 1/γ; that is, these electrons emit at relatively large angles with respect to their instantaneous velocity (see e.g. Ghisellini & Svensson 1991).

In Ghisellini (2017) we investigated whether a synchrotron maser can be at the origin of the observed radiation of FRBs. The result was that it is possible, but only as long as the magnetic field is of the order of B∼10–100 G if the emitters are electrons, and B∼10^4–10^5 G if the emitting particles are protons. These relatively weak values of the magnetic field are required to produce radiation in the GHz radio band. These values of B are appropriate for main sequence stars (if the emitters are electrons) and white dwarfs (if protons), which would make it difficult to explain large super–Eddington luminosities on a millisecond timescale. We therefore think that the proposed synchrotron maser would be a valid model if FRBs were Galactic (i.e. with a factor ∼10^12 less energy and luminosity), but it is less likely if all of these are extragalactic. We (Locatelli & Ghisellini 2017, hereafter LG17) then explored the possibility of maser emission for the curvature radiation process, but we found that it is not possible. The reason lies in the different dependence of the emitted single particle power P on the particle energy γ: P∝γ^2 for synchrotron and P∝γ^4 for curvature radiation. We are thus back to the particle bunching possibility. The fact that the FRB phenomenon is observed at radio wavelengths with such a short duration suggests that the particles emit by a primary (non reprocessed) non-thermal process.
The observed power (if extragalactic) suggests a compact object, such as a neutron star or an accreting stellar black hole, at the origin of the energetics. The large luminosities and energetics imply that the number of emitting particles is very large. This in turn suggests that the FRB emission comes from the vicinity of the powerhouse responsible for the injection and acceleration of the emitting particles. The compactness of the object, the energetics, and the requirement of being close to the powerhouse suggest a strong value of the magnetic field. This in turn excludes the synchrotron mechanism, which would produce frequencies that are too large. The other remaining possibility is curvature radiation from bunches of particles emitting coherently. In this perspective, the roughly millisecond duration gives an even stronger limit on the size of the emitting region, which is required to be R < ct ∼ 300 km. This strengthens the arguments in favour of a compact object as the central engine. The curvature radiation process has already been discussed by several authors, mainly to explain the origin of the pulses in pulsars, which also show huge T_B and shorter durations than FRBs, but much lower luminosities. The most recent and complete work suggesting coherent curvature radiation to explain FRBs is Kumar, Lu & Bhattacharya (2017, hereafter K17). They found a set of constraints limiting the geometry, density, and energy of the emitting particles in models that can successfully explain FRBs. Perhaps the most severe limitation they found concerns the requirement on the magnetic field: it has to be equal to or stronger than the critical magnetic field B_c ≡ m_e^2 c^3/(ħ e) = 4.4 × 10^13 G to be comparable with the Poynting flux of the produced radiation. This suggests strongly magnetized neutron stars as progenitors.

In this paper, following the study of K17, we consider what limits are posed by the process of self–absorption, finding solutions aiming to minimize the required total energy. Furthermore, we consider power law as well as mono-energetic particle distributions, illustrating the effects that coherence has on the observable spectrum, both in the optically thick and thin parts. Finally, we study the importance of inverse Compton radiation. We illustrate that in the highly ordered geometries associated with curvature radiation in highly magnetized neutron stars, inverse Compton emission is severely limited. This agrees with the non-detection, so far, of FRBs in any frequency band other than the radio (see Scholz et al. 2017 and Zhang & Zhang 2017 for FRB 121102). This may imply that the FRB is mainly a radio phenomenon, with no counterparts at other frequencies.

§ SET UP OF THE MODEL First let us introduce the notation: ρ = curvature radius; ν_0 ≡ c/(2πρ); ν_c ≡ (3/2) γ^3 ν_0; x ≡ ν/ν_c = 2ν/(3 ν_0 γ^3). Here ν_0 would correspond to the fundamental frequency if the trajectory of the particle were a real circle of radius ρ. The observer sees radiation from the relativistic particles for a time (ρ/c)/γ of the trajectory, and the Doppler effect shortens it by another factor ∼1/γ^2. Thus the typical observed frequency ν_c is a factor γ^3 higher than ν_0.
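Before writing the emitted power, it is useful to gauge the numbers involved (our own illustrative estimate, using the fiducial ρ = 10^6 cm adopted later in this paper): ν_0 = c/(2πρ) ≈ 4.8 kHz, so that an observed ν_c = 1 GHz requires γ = [2ν_c/(3ν_0)]^{1/3} ≈ 50. Rather modest particle energies are therefore sufficient to bring coherent curvature radiation into the observed FRB band.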
Usually, the monochromatic luminosity, which we denote L(ν), is the single-electron power multiplied by the particle density at a given γ, integrated over all γ, and then multiplied by the volume. This assumes that the particles emit incoherently and are distributed isotropically; under these conditions the emitted luminosity is also isotropic. For an ordered magnetic geometry, we must take into account that the observed times are Doppler contracted for lines of sight along the particle velocity direction, and, furthermore, that the radiation is collimated. If an observer located orthogonally to the motion measures a time Δt_⊥, an observer located within an angle 1/γ of the velocity direction receives photons within a time Δt_∥, that is

Δt_∥ ∼ Δt_⊥/γ^2.

As discussed in K17, the volume of a bunch of particles observed to emit coherently may extend in the lateral dimensions (lateral with respect to the velocity direction, which coincides with the line of sight). On the other hand, extending the lateral dimensions too much requires one to consider particles moving along magnetic field lines of different curvature radii, therefore emitting at other frequencies, possibly in an incoherent way. We assume that along the line of sight the coherence length is ρ/γ, the same length over which the produced radiation reaches us. This size is along a single magnetic field line, perpendicular to the radial direction; radial here means with respect to the centre of the neutron star. Along the radial direction, we assume that the relevant size is the same, ρ/γ. Perpendicular to this, along the longitudinal direction, the magnetic field lines could have the same curvature radius over an extension of the order of ρ, if there are indeed particles along the corresponding field lines. In general we parametrize this uncertainty by defining the volume as

V = ρ^3/γ^{2+a},

where a can be zero or one. The observed emission is not isotropic, but is beamed within a solid angle much smaller than 4π. A single particle emits mainly within ΔΩ ∼ π/γ^2, but the general case, for an ensemble of particles, depends on the assumed geometry. As noted above for the volume, particles along the longitudinal size may indeed contribute to the coherent luminosity. If so, the corresponding solid angle is πρ^2/(γR^2) ∼ π/γ, where R is the distance from the centre of the neutron star. The other case corresponds to fewer field lines being occupied by the emitting particles; for simplicity, we assume in this case a solid angle πρ^2/(γ^2R^2) ∼ π/γ^2. Again, we may parametrize the two cases by assuming

ΔΩ = π/γ^{1+a},

where a is the same parameter as in Eq. <ref>, so that

V/ΔΩ = ρ^3/(πγ), V^2/ΔΩ = ρ^6/(πγ^{3+a}).

Coherent radiation depends on the square of the number of particles contained in the coherence volume [the volume for which the particles are observed to emit coherently; as discussed in K17, this is in general a function of the distance between the source and the observer: the longer the distance, the larger the coherence volume can be]. Taking into account the Doppler contraction of times and the collimation of the radiation, we have

L^thin_iso(ν) = ∫_{γ_1}^{γ_2} p(ν,γ) [N(γ)V]^2 (Δt_⊥/Δt_∥)(4π/ΔΩ) dγ = ∫_{γ_1}^{γ_2} 4 p(ν,γ) (ρ^6/γ^{1+a}) N^2(γ) dγ.

§.§ Monoenergetic particle distribution

For a monoenergetic particle distribution we have N(γ) = N_0 δ(γ−γ_0).
In this case we obtain

L^thin_iso(ν) = [16π/Γ(1/3)] (2πρ/(3c))^{1/3} [e^2ρ^5N_0^2/γ_0^{1+a}] ν^{1/3} e^{−ν/ν_c},

where we used the approximation

F(ν/ν_c) ≡ (ν/ν_c) ∫^∞_{ν/ν_c} K_{5/3}(y) dy ∼ [4π/(√3 Γ(1/3))] (ν/(2ν_c))^{1/3} e^{−ν/ν_c},

as in Ghisellini (2013), where it was applied to the synchrotron process; it is equally valid here.

§.§ Power law particle distribution

Assume now that the particles are distributed as a power law, N(γ) = N_0γ^{−n}, between γ_1 and γ_2. Eq. <ref> with this particle distribution gives

L^thin_iso(ν) = A(n) e^2ρ^5N_0^2 (ν/ν_0)^{−(2n+a−1)/3} e^{−ν/ν_{c,max}},
A(n) = (2/√3) 3^{(2n+a−1)/3} [(2n+a+4)/(2n+a+2)] Γ((2n+a+4)/6) Γ((2n+a)/6),

where ν_{c,max} is the value of ν_c corresponding to γ_2.

§ ABSORPTION

The absorption could be coherent as well, but there is a very important difference: the absorption cross section, and hence the absorption coefficient, depends on the mass of the particle. Therefore, even if particles absorb as the square of their number, as if they were a single charge Q = N(γ)e, their equivalent mass is M = N(γ)m, so that the absorption coefficient is α(ν) ∝ Q^2/M = (e^2/m)N(γ), as in the incoherent process (see e.g. Cocke & Pacholczyk 1975). Therefore the absorption optical depth is the same as in the incoherent process.

The elementary process of absorption of curvature radiation is appropriately described by the corresponding cross section, derived by Locatelli & Ghisellini (2017):

σ_ν = [1/(2mν^2)] dp(ν,γ)/dγ = (√3/2) [e^2ρ/(γ^6mc^2)] [K_{5/3}(x) − (2/3)(1/x)∫_x^∞ K_{5/3}(y) dy].

For small arguments y of the Bessel function, K_a(y) → 2^{a−1}Γ(a)y^{−a}. Using this approximation up to y = 1, and setting K_a(y) = 0 above, we find that σ_ν(γ) ∝ ν^{−1} at low frequencies:

σ_ν ≈ [√3Γ(5/3)/2^{4/3}] [e^2ρ/(γ^3mc^2)] (ν/ν_0)^{−1}, ν ≪ ν_c.

This also agrees with numerical results.

§.§ Absorption by a monoenergetic particle distribution

If the particle distribution is monoenergetic, N(γ) = N_0δ(γ−γ_0), the absorption optical depth for a layer of length ρ/γ_0 is

τ_ν = σ_ν(γ_0) N_0 ρ/γ_0.

The self-absorption frequency ν_t is defined by setting τ_ν = 1. Using the low-frequency approximation of Eq. <ref> we have

ν_t ∼ ν_0 [√3Γ(5/3)/2^{4/3}] e^2ρ^2N_0/(mc^2γ_0^4).

§.§ Absorption by a power law particle distribution

Using the N(γ) distribution of Eq. <ref> and a (γ-dependent) layer of length ρ/γ, we have

τ_ν = ∫_{γ_1}^{γ_2} σ_ν(γ) N(γ) (dR/dγ) dγ = −∫_{γ_1}^{γ_2} σ_ν(γ) N(γ) (ρ/γ^2) dγ = [e^2ρ^2N_0/(16√3 mc^2)] (ν/ν_0)^{−(n+7)/3} F(n),
F(n) = 3^{(n+1)/3} [(n+6)(n+2)/(n+4)] Γ((n+6)/6) Γ((n+2)/6),

where dR/dγ = −ρ/γ^2. We note that the choice of ρ/γ as the appropriate length introduces an extra dependence on the observed frequency with respect to the choice of a fixed length. The self-absorption frequency is

ν_t = ν_0 [e^2ρ^2N_0 F(n)/(16√3 mc^2)]^{3/(n+7)}.

§ SED OF COHERENT CURVATURE RADIATION

Quite generally, we can account for both the thin and the thick parts of the spectrum by setting

L_iso(ν) = L^thin_iso(ν) (1 − e^{−τ_ν})/τ_ν.

With this equation we can calculate the entire spectrum. The model parameters are:

* The curvature radius ρ. Since we are dealing with neutron stars, ρ is likely associated with (a multiple of) the radius of the neutron star. K17 found a lower limit to the magnetic field that is able to guide the emitting particles: the guiding field must be stronger than B = 10^13 G. If true, this implies that the emitting particles are not too far from the surface of the neutron star. For illustration, we use ρ = 10^6 cm.

* The particle density ∼ N_0.
This is to be found considering two limits: the first is the luminosity produced by the ensemble of emitting particles in a coherent way; the second is that self-absorption can severely decrease the observed luminosity.

* The particle distribution, assumed to be either a power law of index n or monoenergetic. We investigate both cases; for the slope n of the power law we preferentially use n = 2.5, but we show how the spectral energy distribution (SED) changes for different n. We note that the coherent nature implies that the observed radiation, in the thin part, is proportional to the square of the particle distribution, giving L(ν) ∝ ν^{−(2n+a−1)/3} (Eq. <ref>). For a given n, the observed spectrum is therefore steeper than in the incoherent case.

* The minimum and maximum particle energies, or equivalently the corresponding Lorentz factors γ_min and γ_max. For simplicity, when treating the power-law case, we assume γ_min = 1. The high-energy end of the particle distribution is not energetically important if L(ν) ∝ ν^{−α} with α > 1.

* The collimation factor, parametrized by a in Eq. <ref> and associated with the choice of the emitting volume in Eq. <ref>. We do not have arguments for or against either possibility, so we consider both cases.

* The number of emitting leptons with respect to protons. Protons suffer less self-absorption, since τ_ν ∝ m^{−1}; this makes solutions with emitting protons more economical. On the other hand, the regions close to the surface of a highly magnetized neutron star are the kingdom of electron–positron pairs, which can greatly outnumber protons. We therefore consider both cases.

§.§ Monoenergetic particle distribution

Fig. <ref> shows how the SED changes when one parameter at a time is varied, for a monoenergetic particle distribution. The reference values of the input parameters are ρ = 10^6 cm, N_0 = 10^15 cm^{−3}, a = 1 and γ_max = 10^3. For all the figures shown in this paper we choose the ν–νL_ν representation: in this way we see in which band most of the luminosity is produced, the νL_ν luminosity being a proxy for the bolometric luminosity.

Changing particle density — The top panel shows the SEDs produced by protons (solid lines) and leptons (dashed lines) for different densities. We note that the absorption for leptons occurs at higher frequencies.

Changing ρ — The middle panel shows the SEDs produced by different curvature radii ρ. The emission is stronger for larger ρ: in this case the emitting volume is larger, which more than compensates for the smaller acceleration.

Changing γ_max — The bottom panel shows that for small values of γ_max the emission is completely self-absorbed, and it remains so up to larger values of γ_max for leptons than for protons. Once the optically thin regime is reached, we observe the ν^{1/3} slope.

§.§ Power law particle distribution

Fig. <ref> shows how the SED changes when one parameter at a time is varied, keeping the others fixed. The reference values of the input parameters are ρ = 10^6 cm, N_0 = 10^15 cm^{−3}, n = 2.5, a = 1 and γ_max = 10^3.

Changing slope — The top left panel of Fig. <ref> shows SEDs with different slopes n of the power-law particle distribution. The self-absorbed spectrum is ∝ L^thin_iso(ν)/τ_ν ∝ ν^{7/3}ν^{−(n+a−1)/3} = ν^{(8−n−a)/3}. The slope thus depends on n, contrary to the incoherent case, for which the self-absorbed spectrum is ∝ ν^2.
Since the absorption coefficient is smaller for protons, the corresponding self-absorbed spectrum has a higher luminosity than for leptons, and the self-absorption frequency is smaller.

Changing particle density — The top right panel of Fig. <ref> shows the SEDs for different densities N_0 of the power-law particle distribution. We note that the thick part of the spectrum is ∝ N_0, while the thin part is ∝ N_0^2.

Changing γ_max — The bottom left panel of Fig. <ref> shows the SEDs for different γ_max. For small γ_max, the spectrum is entirely self-absorbed, peaking at the characteristic maximum frequency ν_c. Larger γ_max allows the thin part of the SED, which is the same for protons and leptons, to become visible.

Changing ρ — The bottom right panel of Fig. <ref> shows the SEDs obtained when changing ρ. We note that the maximum frequency is ∝ ν_0 ∝ ρ^{−1}.

§ APPLICATION TO FAST RADIO BURSTS

We are looking for models able to reproduce the typical observed properties of FRBs, namely νL_ν ∼ 10^43 erg s^{−1} at ν ∼ 1 GHz, and compatible in size with the observed duration of ∼1 ms. If these properties can be reproduced, the model is automatically consistent with the observed brightness temperatures. Since the information about the slope of the radio spectrum is uncertain, we do not constrain it; this means that, in principle, we allow for both optically thin and optically thick cases. Another criterion we adopt is to look for models requiring the minimum amount of energy.

§.§ Illustrative examples

Table <ref> reports the chosen values of the parameters. In general, the cases with a = 0 are more economical, since the optically thin luminosity depends on V^2/ΔΩ ∝ γ^{−3−a} (Eq. <ref>). In these illustrative examples, the SED is always optically thin at 1 GHz, and the slope α ≡ (2n+a−1)/3 of L(ν) ∝ ν^{−α} is α = 5/3 (a = 1) or α = 4/3 (a = 0). In the self-absorbed regime (τ_ν ≫ 1), we have L(ν) ∝ L^thin_iso/τ_ν ∝ ν^{(8−n−a)/3}. As mentioned above, the slope of the self-absorbed part of the SED does depend on n; in our case, with n = 2.5, we have L(ν) ∝ ν^{11/6} (if a = 0) or L(ν) ∝ ν^{3/2} (if a = 1). Fig. <ref> shows two sets of models with a power-law particle distribution with the same slope n = 2.5 and different ρ. In Fig. <ref> the vertical and horizontal lines correspond to ν = 1 GHz and νL_ν = 10^43 erg s^{−1}.

§.§ Suppression of the inverse Compton process

Usually, the importance of the inverse Compton process is measured by the Comptonization parameter y, defined as

y = (average number of scatterings) × (fractional energy gain per scattering).

Compton up-scattering is important for y > 1. If the particles are relativistic and the seed photons are isotropically distributed, we have y ∼ τ_T⟨γ^2⟩ when the scattering optical depth τ_T is smaller than unity. In our case of highly ordered geometry, both the average number of scatterings and the fractional energy gain of the scattered photons are greatly reduced. In fact, the curvature radiation process produces photons moving along the same direction as the emitting particle: the typical angle between seed photons and particles is of the order of ψ = 1/⟨γ⟩. Since the rate of scatterings is ∝ (1 − βcosψ) ∼ (1 − ⟨β^2⟩), we have a reduction by a factor 1/⟨γ^2⟩ with respect to the isotropic case. The frequency ν_1 of the scattered photons depends upon the angles between the electron velocity and the photon direction before (ψ) and after (ψ_1) the scattering, both measured in the observer frame. We have (see e.g. Ghisellini 2013)

ν_1 = ν (1 − βcosψ)/(1 − βcosψ_1) ≈ ν.
In the comoving frame, the pattern of the scattered radiation has a backwards–forwards symmetry, and this implies that, in the observer frame, most of the scattered radiation is concentrated along the electron velocity vector, within an angle ψ_1 ∼ 1/γ, independently of ψ. As a result, not only is the scattering rate greatly reduced, but the scattered photons, on average, do not change their frequency.

Another possible process is inverse Compton scattering of the photons produced by the hot surface of the neutron star. For monoenergetic particles with Lorentz factor γ, the observed luminosity, accounting for collimation, is

L_ext ∼ (4π/ΔΩ) V σ_T c U_ext N_0 γ^2,

where N_0 is approximately the particle density and U_ext is the radiation energy density produced externally to the scattering site. If the latter is close to the neutron star surface of temperature T, then U_ext ∼ aT^4. In this case we have

L_ext ∼ 4πρ^3 σ_T c a T^4 N_0 ⟨γ⟩ ∼ 1.9×10^32 ρ_6^3 T_6^4 N_{0,15} ⟨γ_2⟩ erg s^{−1}.

This luminosity should last for the same time as the FRB and should be observed at hν ∼ 3γ^2kT ∼ 2γ_2^2 T_6 MeV. We conclude that it is very unlikely that the inverse Compton process plays a significant role in the proposed scenario. Therefore we expect FRBs to be weak or very weak emitters of X-rays and γ-rays.

§ COOLING TIMESCALES

In the absence of coherent effects, the frequency-integrated power emitted by a single particle is

P_c = 2e^2γ^4c/(3ρ^2).

For coherent emission, instead, the power emitted by a single particle depends on the square of the number of other particles emitting in the same bunch volume V. The cooling timescale can be calculated by dividing the total energy of the particles by the total power they emit:

t_cool = [VN(γ)]γmc^2/{[VN(γ)]^2 P_c} = γmc^2/[VN(γ)P_c] = 3γ^{n+a−1}mc^2/(2e^2N_0ρc) = 1.7×10^{−17} γ^{n+a−1}/(N_{0,13}ρ_6) s,

where we have assumed m = m_e, N_0 = 10^13 N_{0,13} cm^{−3} and ρ = 10^6ρ_6 cm. For n > 1, t_cool increases with γ: the radiative process is more efficient for lower-energy particles. This strange behaviour occurs because (for positive n) there are fewer and fewer particles with increasing γ, and thus the power of the coherent emission decreases. However, this is true only for the optically thin coherent emission. Low-energy particles self-absorb the radiation they themselves produce, which inhibits their radiative cooling. In the absence of re-acceleration or injection of new particles, most of the radiation is then emitted at the self-absorption frequency (by electrons of the corresponding Lorentz factor γ_t), which must evolve (decrease) very quickly. The cooling time is therefore long for electron energies γ ≪ γ_t, has a minimum at γ ∼ γ_t, and increases again for γ ≫ γ_t.

The typical value of γ required to emit at 1 GHz is γ ∼ [4πρν_c/(3c)]^{1/3} ∼ 52 (ρ_6ν_{c,9})^{1/3}. At 1 GHz, for n = 3, N_0 = 10^13 cm^{−3} and a = 1, t_cool ∼ 2.4×10^{−12} s for any ρ. In this extremely short timescale the particle can travel a mere 7×10^{−2} cm. This severe problem can be alleviated if fewer particles per bunch are needed to produce the observed luminosity, namely if the observed emission is produced by many (M) bunches, each containing VN_0/M particles. Alternatively, there should be an efficient acceleration mechanism able to balance the radiative losses. One possibility is an electric field parallel to the magnetic field.
In this case, particles with γ ≪ γ_t increase their energy up to the point where radiative losses dominate; particles would then tend to accumulate at the energy where gains and losses balance. In this scenario (acceleration of low-energy particles, no re-injection, and cooling) the emitting particle distribution would be quasi-monoenergetic. Even if we can conceive of some solutions, we consider the problem of these extremely short radiative cooling timescales very challenging: not only do we require either many bunches or a re-acceleration mechanism, but we still have to explain why the typical duration of the observed pulses is ∼1 ms, since it cannot be associated with a typical radiative cooling timescale.

§ DISCUSSION AND CONCLUSIONS

We studied the coherent curvature radiation process and its self-absorption in order to find whether a region of the parameter space exists that can reproduce the observed properties of FRBs. To this aim, we were forced to consider a well-defined geometry with a high degree of order, located close to the surface of a neutron star, where it is more likely to find the required large densities of relativistic particles responsible for the coherent emission. Furthermore, we discussed the main properties of the SED predicted in this case, which is qualitatively different from the non-coherent case. Owing to Doppler time contraction (Eq. <ref>), the size ρ/γ visible to the observer corresponds to a time ρ/(cγ^3) and to an observed wavelength λ_c = c/(γ^3ν_0). In other words, the observed wavelength is short, but the particles are distributed in a much larger region and can still emit coherently.

We did not investigate the reasons for having such a short and powerful burst of energy in the vicinity of the neutron star. However, the involved energetics are not extreme for a neutron star: the isotropic equivalent observed energy is E_iso ∼ 10^40 erg (i.e. νL_ν ∼ 10^43 erg s^{−1} over ∼1 ms), which can be reduced by a factor 10^2–10^3 if the emission is collimated. Giant flares from magnetars, by comparison, can reach energetics a million times larger (see e.g. Palmer et al. 2005 for SGR 1806–20). This implies that repetition is entirely possible from the energetic point of view.

The findings and conclusions of our work can be summarized as follows:

* Self-absorption of coherent curvature radiation is an important process for constraining the allowed choices of the parameters, such as the curvature radius and the particle density. On the other hand, there is indeed a region of the parameter space in which the observed properties of FRBs can be reproduced.

* The emitted SED is likely to be steep (i.e. α > 1, for F(ν) ∝ ν^{−α}) above the self-absorption frequency. Because of the limited range of the allowed values of the parameters, it is likely that the emission is self-absorbed at frequencies below 1 GHz. This predicts that the observed radio spectrum at ∼1 GHz, once scintillation is accounted for, is either close to its peak (i.e. α ∼ 0) or steep (α > 1), at least when leptons dominate the emission.

* Coherent curvature radiation is possible if the geometry of the magnetic field and of the emitting region is well ordered. Curvature photons are produced along the same direction as the particle velocities, greatly suppressing the probability of scattering. Therefore the model predicts absent or very weak inverse Compton emission at high energies.

* This makes the ∼GHz–millimetre band the preferred band in which FRBs can be observed. The weakness of the produced emission at high (X- and γ-ray) energies is a strong prediction of the model.
§ ACKNOWLEDGEMENTS

We thank the anonymous referee for her/his critical comments that helped improve the paper.

§ REFERENCES

Beloborodov A.M., 2017, ApJ, 843, L26
Chatterjee S., Law C.J., Wharton R.S. et al., 2017, Nature, 541, 58
Cocke W.J. & Pacholczyk A.G., 1975, ApJ, 195, 279
Cordes J.M. & Wasserman I., 2016, MNRAS, 457, 232
Falcke H. & Rezzolla L., 2014, A&A, 562, A137
Fuller J. & Ott C.D., 2015, MNRAS, 450, L71
Ghisellini G. & Svensson R., 1991, MNRAS, 252, 313
Ghisellini G., 2013, Radiative Processes in High Energy Astrophysics, Lecture Notes in Physics, 873, Springer, Switzerland
Ghisellini G., 2017, MNRAS, 465, L30
Jackson J.D., 1999, Classical Electrodynamics, third edition, John Wiley and Sons, Inc.
Kashiyama K., Ioka K. & Meszaros P., 2013, ApJ, 776, L39
Katz J.I., 2017, MNRAS, 469, L39
Kumar P., Lu W. & Bhattacharya M., 2017, MNRAS, 468, 2726 (K17)
Lingam M. & Loeb A., 2017, ApJ, 837, L23
Locatelli N. & Ghisellini G., 2017, submitted to A&A (arXiv:1707.06352) (LG17)
Loeb A., Shvartzvald Y. & Maoz D., 2014, MNRAS, 439, L46
Lyubarsky Y., 2014, MNRAS, 442, L9
Maoz D., Loeb A., Shvartzvald Y. et al., 2015, MNRAS, 454, 2183
Palmer D.M., Barthelmy S., Gehrels N. et al., 2005, Nature, 434, 1107
Popov S.B. & Postnov K.A., 2010, Proc. of the Conference held 15–18 Sep 2008 in Yerevan and Byurakan, Armenia, eds. H.A. Harutyunian, A.M. Mickaelian & Y. Terzian, Yerevan, Publishing House of NAS RA, p. 129–132
Scholtz P., Bogdanov S., Hessels J.W.T. et al., 2017, ApJ, in press (arXiv:1705.07824)
Tendulkar S.P., Bassa C.G., Cordes J.M. et al., 2017, ApJ, 834, L7
Thornton D., Stappers B., Bailes M. et al., 2013, Science, 341, 53
Totani T., 2013, PASJ, 65, L12
Zhang B., 2014, ApJ, 780, L21
Zhang B.-B. & Zhang B., 2017, ApJ, in press (arXiv:1705.04242)
Finite element approximation of steady flows of generalized Newtonian fluids with concentration-dependent power-law index

Seungchan Ko, Mathematical Institute, University of Oxford, Andrew Wiles Building, Woodstock Road, Oxford OX2 6GG, UK. Email: [email protected]
Endre Süli, Mathematical Institute, University of Oxford, Andrew Wiles Building, Woodstock Road, Oxford OX2 6GG, UK. Email: [email protected]

Abstract. We consider a system of nonlinear partial differential equations describing the motion of an incompressible chemically reacting generalized Newtonian fluid in three space dimensions. The governing system consists of a steady convection-diffusion equation for the concentration and a generalized steady power-law-type fluid flow model for the velocity and the pressure, where the viscosity depends on both the shear-rate and the concentration through a concentration-dependent power-law index. The aim of the paper is to perform a mathematical analysis of a finite element approximation of this model. We formulate a regularization of the model by introducing an additional term in the conservation-of-momentum equation and construct a finite element approximation of the regularized system. We show the convergence of the finite element method to a weak solution of the regularized model and prove that weak solutions of the regularized problem converge to a weak solution of the original problem.

Keywords: Non-Newtonian fluid, variable exponent, synovial fluid, finite element method
AMS Classification: 65N30, 74S05, 76A05

Introduction

We are interested in developing a convergence theory for finite element approximations of a system of nonlinear partial differential equations (PDEs) modelling the rheological response of the synovial fluid. The synovial fluid is a biological fluid found in the cavities of movable joints, composed of ultrafiltrated blood plasma enriched with hyaluronan. Laboratory experiments have shown that the viscosity of the fluid depends on the concentration of hyaluronan as well as on the shear-rate. In particular, it was observed in steady shear experiments that the concentration of the hyaluronan is not just a scaling factor of the viscosity (understood as ν(c,|Du|) = f(c)ν̃(|Du|)), but that it influences the degree of shear-thinning. Therefore, a new mathematical model of the rheological response of the synovial fluid was proposed in <cit.>. There, the authors considered a power-law-type model for the velocity and the pressure, where the power-law index depends on the concentration, corresponding to the fact that the concentration affects the level of shear-thinning. To close the system, a generalized convection-diffusion equation was assumed to be satisfied by the concentration. For a detailed rheological background we refer to <cit.>.

Based on the description above, we consider the following system of PDEs:

div u = 0 in Ω,
div(u⊗u) − div S(c,Du) = −∇p + f in Ω,
div(cu) − div q_c(c,∇c,Du) = 0 in Ω,

where Ω ⊂ ℝ^d is a bounded open Lipschitz domain. In the above system of PDEs, u: Ω → ℝ^d, p: Ω → ℝ and c: Ω → ℝ_{≥0} denote the velocity, pressure and concentration fields, respectively, f: Ω → ℝ^d is a given external force, and Du denotes the symmetric velocity gradient, i.e. Du = (1/2)(∇u + (∇u)^T).
To complete the model, we impose the following Dirichlet boundary conditions:

u = 0, c = c_d on ∂Ω,

where c_d ∈ W^{1,s}(Ω) for some s > d. By Sobolev embedding, c_d is continuous up to the boundary, and we can therefore define c^- := min_{x∈Ω̄} c_d and c^+ := max_{x∈Ω̄} c_d.

We further assume that the extra stress tensor S: ℝ_{≥0} × ℝ^{d×d}_sym → ℝ^{d×d}_sym is a continuous mapping satisfying the following growth, strict monotonicity and coercivity conditions, respectively: there exist positive constants C_1, C_2 and C_3 such that

|S(ξ,B)| ≤ C_1(|B|^{r(ξ)−1} + 1),
(S(ξ,B_1) − S(ξ,B_2))·(B_1 − B_2) > 0 for B_1 ≠ B_2,
S(ξ,B)·B ≥ C_2(|B|^{r(ξ)} + |S(ξ,B)|^{r'(ξ)}) − C_3,

where r: ℝ_{≥0} → ℝ_{≥0} is a Hölder-continuous function satisfying 1 < r^- ≤ r(ξ) ≤ r^+ < ∞, and r'(ξ) := r(ξ)/(r(ξ)−1) denotes its Hölder conjugate. We further assume that the concentration flux vector q_c(ξ,g,B): ℝ_{≥0} × ℝ^d × ℝ^{d×d}_sym → ℝ^d is a continuous mapping, linear with respect to g, which additionally satisfies the following growth and coercivity conditions: there exist positive constants C_4 and C_5 such that

|q_c(ξ,g,B)| ≤ C_4|g|,
q_c(ξ,g,B)·g ≥ C_5|g|^2.

As discussed above, the prototypical examples we have in mind are

S(c,Du) = ν(c,|Du|)Du, q_c(c,∇c,Du) = K(c,|Du|)∇c,

where the viscosity ν(c,|Du|) is of the form

ν(c,|Du|) ∼ ν_0(κ_1 + κ_2|Du|^2)^{(r(c)−2)/2},

and ν_0, κ_1, κ_2 are positive constants.

The rigorous mathematical analysis of the existence of global weak solutions to a PDE system consisting of the generalized Navier–Stokes equations with a concentration-dependent viscosity coefficient, coupled to a convection-diffusion equation, was initiated in <cit.>. There, however, the power-law index was fixed and the concentration was only a scaling factor of the viscosity; the authors considered the evolutionary model and established the long-time existence of large-data global weak solutions. Concerning the model (<ref>)–(<ref>), where the power-law index is concentration-dependent, the mathematical analysis was initiated in <cit.>. The authors established there the existence of weak solutions, provided that r^- > 3d/(d+2), by using generalized monotone operator theory. In <cit.>, with the help of a Lipschitz-truncation technique, the existence of weak solutions with r^- > d/2 was proved, and the Hölder continuity of the concentration was shown by using De Giorgi's method.

In <cit.>, the convergence of a finite element approximation of the system (<ref>)–(<ref>) was shown, using a discrete De Giorgi regularity result. Because of the absence of a discrete De Giorgi regularity result in three space dimensions, the analysis in <cit.> was restricted to the case of two space dimensions. In this paper, we extend the analysis developed in <cit.> to three space dimensions, and we formulate an analogous convergence result for a finite element method on a three-dimensional domain. The main idea here is to use a different numerical approximation scheme from the one in <cit.>, resulting in a different limiting process. To this end, we consider different meshes for the conservation of linear momentum equation and the concentration equation. The resulting numerical method can be viewed as a two-level Galerkin approximation. This enables us to separate the passages to the limits with respect to the discretization parameters in the two equations, thus avoiding the need for a discrete De Giorgi regularity result in three space dimensions.

As a first step, in Section 2 we introduce the necessary notational conventions and auxiliary results, which will be used throughout the paper.
In Section 3, we define a regularized problem, which enables us to enlarge the range of the power-law index so as to be able to cover the practically relevant range of values of this index. In Sections 4 and 5, we construct a two-level Galerkin finite element approximation of the regularized problem and perform a convergence analysis of the numerical method. Finally, in Section 6, we prove that weak solutions of the regularized problem converge to a weak solution of the original problem as the regularization parameter passes to its limit.

Notation and auxiliary results

In this section, we introduce the function spaces and auxiliary results that will be used throughout the paper. Let 𝒫(Ω) be the set of all measurable functions r: Ω → [1,∞]; we shall call a function r ∈ 𝒫(Ω) a variable exponent. We define r^- := ess inf_{x∈Ω} r(x) and r^+ := ess sup_{x∈Ω} r(x), and we only consider the case 1 < r^- ≤ r^+ < ∞.

Since we are considering a power-law index depending on the concentration, we need to work with Lebesgue and Sobolev spaces with variable exponents. To be specific, we introduce the following variable-exponent Lebesgue spaces, equipped with the corresponding Luxembourg norms:

L^{r(·)}(Ω) := {u ∈ L^1_loc(Ω) : ∫_Ω |u(x)|^{r(x)} < ∞},
‖u‖_{L^{r(·)}(Ω)} = ‖u‖_{r(·)} := inf{λ > 0 : ∫_Ω |u(x)/λ|^{r(x)} ≤ 1}.

Similarly, we introduce the following generalized Sobolev spaces:

W^{1,r(·)}(Ω) := {u ∈ W^{1,1}(Ω) ∩ L^{r(·)}(Ω) : |∇u| ∈ L^{r(·)}(Ω)},
‖u‖_{W^{1,r(·)}(Ω)} = ‖u‖_{1,r(·)} := inf{λ > 0 : ∫_Ω [|u(x)/λ|^{r(x)} + |∇u(x)/λ|^{r(x)}] ≤ 1}.

It is easy to show that all of the above spaces are Banach spaces, and, because of (<ref>), they are all separable and reflexive; see <cit.>.

Furthermore, we introduce certain function spaces that are frequently used in PDE models of incompressible fluids. Henceforth, X(Ω)^d will denote the space of d-component vector-valued functions with components from X(Ω); similarly, X(Ω)^{d×d} denotes the corresponding space of tensor-valued functions. Finally, we define the following spaces:

W^{1,r(·)}_0(Ω)^d := {u ∈ W^{1,r(·)}(Ω)^d : u = 0 on ∂Ω},
W^{1,r(·)}_{0,div}(Ω)^d := {u ∈ W^{1,r(·)}_0(Ω)^d : div u = 0 in Ω},
L^{r(·)}_0(Ω) := {f ∈ L^{r(·)}(Ω) : ∫_Ω f(x) = 0}.

Throughout the paper, we denote the duality pairing between g ∈ X^* and f ∈ X by ⟨g,f⟩; for two vectors a and b, a·b denotes their scalar product, and, for two tensors A and B, A·B signifies their scalar product. Also, for any Lebesgue-measurable set Q ⊂ ℝ^d, |Q| denotes its standard Lebesgue measure.

Next we introduce the necessary technical tools. First we define the subset 𝒫^log(Ω) ⊂ 𝒫(Ω): it denotes the set of all log-Hölder-continuous functions on Ω, that is, the set of all functions r ∈ 𝒫(Ω) satisfying

|r(x) − r(y)| ≤ C_log(r)/(−log|x−y|) for all x,y ∈ Ω with 0 < |x−y| ≤ 1/2.

It is obvious that classical Hölder-continuous functions on Ω̄ automatically belong to this class.

Next we state the following lemma, which summarizes some inequalities involving variable-exponent norms. For proofs, see <cit.>, which is an extensive source of information concerning variable-exponent spaces.

Lemma. Let Ω ⊂ ℝ^d be a bounded open Lipschitz domain and let r ∈ 𝒫^log(Ω) satisfy (<ref>). Then, the following inequalities hold:
* Hölder's inequality: ‖fg‖_{s(·)} ≤ 2‖f‖_{r(·)}‖g‖_{q(·)}, with r,q,s ∈ 𝒫(Ω) and 1/s(x) = 1/r(x) + 1/q(x), x ∈ Ω.
* Poincaré's inequality: ‖u‖_{r(·)} ≤ C(d, C_log(r)) diam(Ω) ‖∇u‖_{r(·)} for all u ∈ W^{1,r(·)}_0(Ω).
* Korn's inequality: ‖∇u‖_{r(·)} ≤ C(Ω, C_log(r)) ‖Du‖_{r(·)} for all u ∈ W^{1,r(·)}_0(Ω)^d,
where C_log(r) is the constant appearing in the definition of the class of log-Hölder-continuous functions.

Another important auxiliary result is the existence of the Bogovskiĭ operator in the variable-exponent setting.

Theorem. Let Ω ⊂ ℝ^d be a bounded open Lipschitz domain and suppose that r ∈ 𝒫^log(Ω) with 1 < r^- ≤ r^+ < ∞. Then, there exists a bounded linear operator ℬ: L^{r(·)}_0(Ω) → W^{1,r(·)}_0(Ω)^d such that, for all f ∈ L^{r(·)}_0(Ω),

div(ℬf) = f, ‖ℬf‖_{1,r(·)} ≤ C‖f‖_{r(·)},

where C depends on Ω, r^-, r^+ and C_log(r).

Let us now state the inf-sup condition, which plays a crucial role in the mathematical analysis of incompressible fluid flow problems. For any s, s' ∈ (1,∞) with 1/s + 1/s' = 1, there exists a positive constant α_s > 0 such that

α_s‖q‖_{s'} ≤ sup_{0≠v∈W^{1,s}_0(Ω)^d} ⟨div v, q⟩/‖v‖_{1,s} for all q ∈ L^{s'}_0(Ω).

This is a direct consequence of the existence of the Bogovskiĭ operator in spaces with a fixed exponent, which is a special case of Theorem <ref>; see <cit.> for additional details. Furthermore, we can prove the following inf-sup condition in variable-exponent norms, which will play an important role in the subsequent analysis.

Proposition. Let Ω ⊂ ℝ^d be a bounded open Lipschitz domain and let r ∈ 𝒫^log(Ω) with 1 < r^- ≤ r^+ < ∞. Then, there exists a constant α_r > 0 such that

α_r‖q‖_{r'(·)} ≤ sup_{0≠v∈W^{1,r(·)}_0(Ω)^d} ⟨div v, q⟩/‖v‖_{1,r(·)} for all q ∈ L^{r'(·)}_0(Ω).

Proposition <ref> is a direct consequence of Theorem <ref> and the norm-conjugate formula stated in the following lemma.

Lemma. Let r ∈ 𝒫^log(Ω) be a variable exponent with 1 < r^- ≤ r^+ < ∞; then,

(1/2)‖f‖_{r(·)} ≤ sup_{g∈L^{r'(·)}(Ω), ‖g‖_{r'(·)}≤1} ∫_Ω |f||g|

for all measurable functions f ∈ L^{r(·)}(Ω).

Finally, we recall the following well-known result due to De Giorgi and Nash <cit.>; see also <cit.> for its application to the system of partial differential equations considered in the present paper.

Theorem. Let Ω ⊂ ℝ^d be a Lipschitz domain and let s > d be fixed. Suppose that K ∈ L^∞(Ω)^{d×d} is uniformly elliptic with ellipticity constant λ > 0. Then, there exists an α ∈ (0,1) such that, for any f ∈ L^s(Ω)^d, g ∈ L^{ds/(d+s)}(Ω) and any c_d ∈ W^{1,s}(Ω), there exists a unique c ∈ W^{1,2}(Ω) such that c − c_d ∈ W^{1,2}_0(Ω) ∩ C^{0,α}(Ω̄) and

∫_Ω K∇c·∇φ = ∫_Ω f·∇φ + ∫_Ω gφ for all φ ∈ W^{1,2}_0(Ω);

furthermore, the following uniform bound holds:

‖c‖_{W^{1,2}∩C^{0,α}} ≤ C(Ω, λ, s, ‖K‖_∞, ‖f‖_s, ‖g‖_{ds/(d+s)}, ‖c_d‖_{1,s}).

Using these notations, the weak formulation of the problem (<ref>)–(<ref>) is as follows.

Problem (Q). For f ∈ (W^{1,r^-}_0(Ω)^d)^*, c_d ∈ W^{1,s}(Ω), s > d, and a Hölder-continuous function r with 1 < r^- ≤ r(c) ≤ r^+ < ∞ for all c ∈ [c^-,c^+], find (c − c_d) ∈ W^{1,2}_0(Ω) ∩ C^{0,α}(Ω̄), for some α ∈ (0,1), u ∈ W^{1,r(c)}_0(Ω)^d and p ∈ L^{r'(c)}_0(Ω) such that

∫_Ω S(c,Du)·Dψ − (u⊗u)·∇ψ − ⟨div ψ, p⟩ = ⟨f,ψ⟩ for all ψ ∈ W^{1,∞}_0(Ω)^d,
∫_Ω q div u = 0 for all q ∈ L^{r'(c)}_0(Ω),
∫_Ω q_c(c,∇c,Du)·∇φ − cu·∇φ = 0 for all φ ∈ W^{1,2}_0(Ω).

Thanks to Proposition <ref>, we can restate Problem (Q) in the following (equivalent) divergence-free setting.

Problem (P). For f ∈ (W^{1,r^-}_0(Ω)^d)^*, c_d ∈ W^{1,s}(Ω), s > d, and a Hölder-continuous function r with 1 < r^- ≤ r(c) ≤ r^+ < ∞ for all c ∈ [c^-,c^+], find (c − c_d) ∈ C^{0,α}(Ω̄) ∩ W^{1,2}_0(Ω) and u ∈ W^{1,r(c)}_{0,div}(Ω)^d such that

∫_Ω S(c,Du)·Dψ − (u⊗u)·∇ψ = ⟨f,ψ⟩ for all ψ ∈ W^{1,∞}_{0,div}(Ω)^d,
∫_Ω q_c(c,∇c,Du)·∇φ − cu·∇φ = 0 for all φ ∈ W^{1,2}_0(Ω).

From now on, for simplicity, we restrict ourselves to the case d = 3; our results can, however, easily be extended to the case of any d ≥ 2.
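To illustrate that the prototypical model from the introduction fits the structural assumption (<ref>), one can verify the growth condition directly; the following routine computation (included here for the reader's convenience, and not part of the original exposition) treats the shear-thinning and shear-thickening regimes separately:
\[
|S(c,\mathrm{D}u)|
= \nu_0\bigl(\kappa_1+\kappa_2|\mathrm{D}u|^2\bigr)^{\frac{r(c)-2}{2}}|\mathrm{D}u|
\;\le\;
\begin{cases}
\nu_0\,\kappa_2^{\frac{r(c)-2}{2}}\,|\mathrm{D}u|^{\,r(c)-1}, & \text{if } r(c)\le 2,\\[4pt]
C(\nu_0,\kappa_1,\kappa_2,r^+)\bigl(1+|\mathrm{D}u|^{\,r(c)-1}\bigr), & \text{if } r(c)\ge 2,
\end{cases}
\]
where in the first case we used $\kappa_1+\kappa_2|\mathrm{D}u|^2 \ge \kappa_2|\mathrm{D}u|^2$ together with the nonpositivity of the exponent, and in the second case Young's inequality; in either case the growth condition (<ref>) holds with a constant depending only on ν_0, κ_1, κ_2 and r^+.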
We note in passing that, since no uniqueness result is currently known for weak solutions of the problem under consideration, we can only prove that a subsequence of the sequence of discrete solutions converges to a weak solution of the problem.

Regularization of the problem

Before constructing the approximation of problem (Q) we formulate a regularized problem; it is then the regularized problem that will be approximated by the finite element method. We shall show that the sequence of finite element approximations converges to a weak solution of the regularized problem, and that solutions of the regularized problem, in turn, converge to a weak solution of problem (Q). The reason for proceeding in this way is that a direct approximation of problem (Q), bypassing the regularized problem, necessitates the imposition of an unnaturally strong condition on the variable exponent r in the convergence analysis of the finite element method; the procedure described below does not suffer from this shortcoming.

Motivated by <cit.>, we utilize the following regularized problem, involving the regularization parameter k ∈ ℕ. We choose a sufficiently large t > 2 such that r^- > 3/2 > t/(t−2). Then we seek a weak solution (u,p,c) := (u^k,p^k,c^k) of

div u = 0 in Ω,
div(u⊗u) − div S(c,Du) + (1/k)|u|^{t−2}u = −∇p + f in Ω,
div(cu) − div q_c(c,∇c,Du) = 0 in Ω.

Therefore, we consider the following regularized weak formulation.

Problem (Q*). For f ∈ (W^{1,r^-}_0(Ω)^3)^*, c_d ∈ W^{1,s}(Ω), s > 3, and a Hölder-continuous function r with 1 < r^- ≤ r(c) ≤ r^+ < ∞ for all c ∈ [c^-,c^+] and r^- > 3/2 > t/(t−2), t > 2, find (c − c_d) := (c^k − c_d) ∈ W^{1,2}_0(Ω) ∩ C^{0,α}(Ω̄), for some α ∈ (0,1), u := u^k ∈ W^{1,r(c)}_0(Ω)^3 and p := p^k ∈ L^{r'(c)}_0(Ω) such that

∫_Ω S(c,Du)·Dψ − (u⊗u)·∇ψ + (1/k)|u|^{t−2}u·ψ − ⟨div ψ, p⟩ = ⟨f,ψ⟩ for all ψ ∈ W^{1,∞}_0(Ω)^3,
∫_Ω q div u = 0 for all q ∈ L^{r'(c)}_0(Ω),
∫_Ω q_c(c,∇c,Du)·∇φ − cu·∇φ = 0 for all φ ∈ W^{1,2}_0(Ω).

Again, by using Proposition <ref>, we can restate Problem (Q*) in the following (equivalent) divergence-free setting.

Problem (P*). For f ∈ (W^{1,r^-}_0(Ω)^3)^*, c_d ∈ W^{1,s}(Ω), s > 3, and a Hölder-continuous function r with 1 < r^- ≤ r(c) ≤ r^+ < ∞ for all c ∈ [c^-,c^+] and r^- > 3/2 > t/(t−2), t > 2, find (c − c_d) := (c^k − c_d) ∈ C^{0,α}(Ω̄) ∩ W^{1,2}_0(Ω) and u := u^k ∈ W^{1,r(c)}_{0,div}(Ω)^3 such that

∫_Ω S(c,Du)·Dψ − (u⊗u)·∇ψ + (1/k)|u|^{t−2}u·ψ = ⟨f,ψ⟩ for all ψ ∈ W^{1,∞}_{0,div}(Ω)^3,
∫_Ω q_c(c,∇c,Du)·∇φ − cu·∇φ = 0 for all φ ∈ W^{1,2}_0(Ω).

We shall formulate the finite element approximation of Problem (Q*) on a three-dimensional domain; the convergence analysis of the method is presented in Sections 4 and 5. In Section 6, we will prove that a sequence of weak solution triples {(u^k,p^k,c^k)}_{k≥1} of the regularized problem converges to a weak solution triple (u,p,c) of Problem (Q). The latter result is recorded in our next theorem.

Theorem. Suppose that Ω ⊂ ℝ^3 is a convex polyhedral domain and c_d ∈ W^{1,s}(Ω) for some s > 3. Let us further assume that r: ℝ_{≥0} → ℝ_{≥0} is a Hölder-continuous function with r^- > 3/2 > t/(t−2), t > 2, and suppose that f ∈ (W^{1,r^-}_0(Ω)^3)^*. Let (u^k,p^k,c^k) be a weak solution of the regularized problem (<ref>)–(<ref>). Then, as k → ∞, (a subsequence, not indicated, of) the sequence {(u^k,p^k,c^k)}_{k≥1} converges to (u,p,c) in the following sense:

u^k ⇀ u weakly in W^{1,r^-}_{0,div}(Ω)^3,
c^k ⇀ c weakly in W^{1,2}(Ω),
c^k → c strongly in C^{0,α}(Ω̄) for some α ∈ (0,1),
p^k ⇀ p weakly in L^{j'}(Ω) for all j > max{r^+, 2}.

Furthermore, (u,p,c) is a weak solution of Problem (Q) stated in (<ref>)–(<ref>).
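For later reference (the same inequality is used in the proof of the convergence theorem in Section 5 in the form 2(r^-)' < t), we record the elementary computation behind the assumption r^- > t/(t−2):
\[
r^- > \frac{t}{t-2}
\;\Longleftrightarrow\;
r^-(t-2) > t
\;\Longleftrightarrow\;
t\,(r^- - 1) > 2r^-
\;\Longleftrightarrow\;
t > \frac{2r^-}{r^- - 1} = 2(r^-)' .
\]
In particular, the additional requirement 3/2 > t/(t−2) forces t > 6, and the penalty term then guarantees that u ∈ L^t(Ω)^3 with t > 2(r^-)', which is precisely the integrability needed to control the convective term.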
Finite element approximation

Finite element spaces

Let {𝒯_n} and {𝒮_m} be families of shape-regular partitions of Ω such that the following properties hold:
* Affine equivalence: for each element E ∈ 𝒯_n (or E ∈ 𝒮_m) there exists an invertible affine mapping F_E: E → Ê, where Ê is the standard reference 3-simplex in ℝ^3.
* Shape-regularity: for any element E ∈ 𝒯_n (or E ∈ 𝒮_m), the ratio of diam E to the radius of the inscribed ball is bounded above uniformly by a positive constant, with respect to all 𝒯_n and n ∈ ℕ (or all 𝒮_m and m ∈ ℕ).

For given partitions 𝒯_n and 𝒮_m, the finite element spaces are defined by

V^n = V(𝒯_n) := {V ∈ C(Ω̄)^3 : V|_E ∘ F_E^{−1} ∈ ℙ̂_V, E ∈ 𝒯_n, and V|_{∂Ω} = 0},
M^n = M(𝒯_n) := {Q ∈ L^∞(Ω) : Q|_E ∘ F_E^{−1} ∈ ℙ̂_M, E ∈ 𝒯_n},
Y^m = Y(𝒮_m) := {Z ∈ C(Ω̄) : Z|_E ∘ F_E^{−1} ∈ ℙ̂_Y, E ∈ 𝒮_m, and Z|_{∂Ω} = 0},

where ℙ̂_V ⊂ W^{1,∞}(Ê)^3, ℙ̂_M ⊂ L^∞(Ê) and ℙ̂_Y ⊂ W^{1,∞}(Ê) are finite-dimensional linear subspaces.

We assume that V^n and Y^m have finite and locally supported bases; that is, for each n ∈ ℕ and m ∈ ℕ there exist N_n ∈ ℕ and N_m ∈ ℕ such that

V^n = span{V^n_1, …, V^n_{N_n}}, Y^m = span{Z^m_1, …, Z^m_{N_m}},

and, for each basis function V^n_i (respectively, Z^m_j), if there exists an E ∈ 𝒯_n (respectively, E ∈ 𝒮_m) with V^n_i ≠ 0 (respectively, Z^m_j ≠ 0) on E, then

supp V^n_i ⊂ ⋃{E' ∈ 𝒯_n : E' ∩ E ≠ ∅} =: S_E,
supp Z^m_j ⊂ ⋃{E' ∈ 𝒮_m : E' ∩ E ≠ ∅} =: T_E.

For the pressure space M^n, we assume that it has a basis consisting of discontinuous piecewise polynomials; i.e., for each n ∈ ℕ there exists an Ñ_n ∈ ℕ such that M^n = span{Q^n_1, …, Q^n_{Ñ_n}}, where each basis function Q^n_i satisfies supp Q^n_i = E for some E ∈ 𝒯_n. We assume further that V^n contains the continuous piecewise linear functions and M^n contains the piecewise constant functions.

Using the assumed shape-regularity we can easily verify that

∃ X ∈ ℕ : |S_E| ≤ X|E| for all E ∈ 𝒯_n,
∃ Y ∈ ℕ : |T_E| ≤ Y|E| for all E ∈ 𝒮_m,

where X is independent of n and Y is independent of m. We denote by g_E the diameter of E ∈ 𝒯_n and by h_E the diameter of E ∈ 𝒮_m.

We also introduce the subspace V^n_div of discretely divergence-free functions; more precisely,

V^n_div := {V ∈ V^n : ⟨div V, Q⟩ = 0 for all Q ∈ M^n},

and the subspace of M^n consisting of functions with vanishing integral mean value:

M^n_0 := {Q ∈ M^n : ∫_Ω Q = 0}.

Throughout this paper, we assume that the finite element spaces introduced above have the following minimal approximation properties.

Assumption 1 (Approximability). For all s ∈ [1,∞),

inf_{V∈V^n} ‖v − V‖_{1,s} → 0 for all v ∈ W^{1,s}_0(Ω)^3, as n → ∞,
inf_{Q∈M^n} ‖q − Q‖_s → 0 for all q ∈ L^s(Ω), as n → ∞,
inf_{Z∈Y^m} ‖z − Z‖_{1,s} → 0 for all z ∈ W^{1,s}_0(Ω), as m → ∞.

For this, a necessary condition is that the maximal mesh size vanishes, i.e., that max_{E∈𝒯_n} g_E → 0 as n → ∞ and max_{E∈𝒮_m} h_E → 0 as m → ∞.

Assumption 2 (Existence of a projection operator Π^n_div). For each n ∈ ℕ, there exists a linear projection operator Π^n_div: W^{1,1}_0(Ω)^3 → V^n such that:
* Π^n_div preserves the divergence structure in the dual of the discrete pressure space; in other words, for any v ∈ W^{1,1}_0(Ω)^3,

⟨div v, Q⟩ = ⟨div Π^n_div v, Q⟩ for all Q ∈ M^n.

* Π^n_div is locally W^{1,1}-stable, i.e., there exists a constant c_1 > 0, independent of n, such that

⨍_E |Π^n_div v| + g_E|∇Π^n_div v| ≤ c_1 ⨍_{S_E} |v| + g_E|∇v| for all v ∈ W^{1,1}_0(Ω)^3 and all E ∈ 𝒯_n.

Note that the local W^{1,1}-stability of Π^n_div implies its local and global W^{1,s}-stability for s ∈ [1,∞]. In other words, for any s ∈ [1,∞] we have

‖Π^n_div v‖_{1,s} ≤ c_s‖v‖_{1,s} for all v ∈ W^{1,s}_0(Ω)^3,

with a constant c_s > 0 independent of n. Note further that the approximability (Assumption 1) and inequality (<ref>) imply the convergence of Π^n_div v to v; in fact,

‖v − Π^n_div v‖_{1,s} → 0 for all v ∈ W^{1,s}_0(Ω)^3, as n → ∞, for all s ∈ [1,∞).
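For completeness, we sketch the standard argument by which the local bound in Assumption 2 yields the global stability (<ref>) (this is only an outline, under the shape-regularity assumption above; the passage from the W^{1,1} bound to its local W^{1,s} analogue uses standard scaling and inverse estimates on shape-regular elements): once the local bound $\|\Pi^n_{\mathrm{div}} v\|_{W^{1,s}(E)} \le c\,\|v\|_{W^{1,s}(S_E)}$ is available, one sums over the elements,
\[
\|\Pi^n_{\mathrm{div}} v\|_{1,s}^s
= \sum_{E\in\mathcal{T}_n} \|\Pi^n_{\mathrm{div}} v\|_{W^{1,s}(E)}^s
\;\le\; c^s \sum_{E\in\mathcal{T}_n} \|v\|_{W^{1,s}(S_E)}^s
\;\le\; c^s\,C\,\|v\|_{1,s}^s ,
\]
where the final step uses the fact that, by shape-regularity, each element belongs to at most a bounded number of the patches S_E (a constant C depending only on the shape-regularity constant, cf. the overlap bound |S_E| ≤ X|E| above).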
Assumption 3 (Existence of a projection operator Π^n_M). For each n ∈ ℕ, there exists a linear projection operator Π^n_M: L^1(Ω) → M^n such that Π^n_M is locally L^1-stable; i.e., there exists a constant c_2 > 0, independent of n, such that

⨍_E |Π^n_M q| ≤ c_2 ⨍_{S_E} |q| for all q ∈ L^1(Ω) and all E ∈ 𝒯_n.

Again, we have the following global stability and convergence properties:

‖Π^n_M q‖_{s'} ≤ c_{s'}‖q‖_{s'} for all q ∈ L^{s'}(Ω), s' ∈ (1,∞),

and ‖q − Π^n_M q‖_{s'} → 0 as n → ∞, for all q ∈ L^{s'}(Ω) and s' ∈ (1,∞).

According to <cit.>, the following pairs of velocity–pressure finite element spaces satisfy Assumptions 1, 2 and 3, for example:
* the conforming Crouzeix–Raviart Stokes element, i.e., continuous piecewise quadratic plus cubic bubble velocity and discontinuous piecewise linear pressure approximation (compare e.g. with <cit.>);
* the space of continuous piecewise quadratic polynomials for the velocity and piecewise constant pressure approximation; see <cit.>.

Our final assumption is the existence of a projection operator for the concentration space.

Assumption 4 (Existence of a projection operator Π^m_Y). For each m ∈ ℕ, there exists a linear projection operator Π^m_Y: W^{1,1}_0(Ω) → Y^m such that

⨍_E |Π^m_Y z| + h_E|∇Π^m_Y z| ≤ c_3 ⨍_{T_E} |z| + h_E|∇z| for all z ∈ W^{1,1}_0(Ω) and all E ∈ 𝒮_m,

where c_3 does not depend on m. Similarly as above, the projection operator Π^m_Y is globally W^{1,s}-stable for s ∈ [1,∞], and thus, by approximability,

‖Π^m_Y z − z‖_{1,s} → 0 for all z ∈ W^{1,s}_0(Ω), s ∈ [1,∞).

Finally, we state a discrete inf-sup condition, which holds in our finite element setting; it is a direct consequence of (<ref>) and the existence of Π^n_div (see <cit.> for further details). For s, s' ∈ (1,∞) satisfying 1/s + 1/s' = 1, there exists a positive constant β_r > 0, independent of n, such that

β_r‖Q‖_{s'} ≤ sup_{0≠V∈V^n} ⟨div V, Q⟩/‖V‖_{1,s} for all Q ∈ M^n_0 and all n ∈ ℕ.

The finite element approximation

In this section, we construct the finite element approximation of the problem (<ref>)–(<ref>). An important property of the incompressible Navier–Stokes equations is that the convective term in the momentum equation is skew-symmetric; this is a consequence of the velocity field u being divergence-free. However, in the discretized problem we might lose this skew-symmetry, because we consider only discretely divergence-free finite element functions from the velocity finite element space. Thus we need to modify the finite element approximation of the convective term so that the skew-symmetry is preserved under discretization. We therefore define the following modified convective terms:

B_u[v,w,z] := (1/2)∫_Ω ((z⊗w)·∇v − (v⊗w)·∇z),
B_c[b,w,z] := (1/2)∫_Ω (zw·∇b − bw·∇z),

for all v,w,z ∈ W^{1,∞}_0(Ω)^3 and b,z ∈ W^{1,∞}(Ω). These trilinear forms coincide with the corresponding trilinear forms appearing in the weak formulations of the momentum equation and the concentration equation, provided that the velocity field is pointwise divergence-free. Moreover, thanks to their skew-symmetry, the two trilinear forms vanish when the first and the third arguments coincide, even for merely discretely divergence-free functions. Explicitly, we have

B_u[v,w,v] = 0 and B_c[z,w,z] = 0 for all v,w ∈ W^{1,∞}_0(Ω)^3, z ∈ W^{1,∞}(Ω),
B_u[u,w,z] = −∫_Ω (u⊗w)·∇z for all u,w,z ∈ W^{1,∞}_{0,div}(Ω)^3,
B_c[b,w,z] = −∫_Ω bw·∇z for all w ∈ W^{1,∞}_{0,div}(Ω)^3, b,z ∈ W^{1,∞}(Ω).

Moreover, the trilinear form B_u[·,·,·] is bounded. Indeed, if u,w,z ∈ W^{1,∞}_0(Ω)^3, then, by Hölder's inequality,

∫_Ω (z⊗w)·∇u ≤ ‖z‖_{2(r^-)'}‖w‖_{2(r^-)'}‖u‖_{1,r^-}, and ∫_Ω (u⊗w)·∇z ≤ ‖u‖_{2(r^-)'}‖w‖_{2(r^-)'}‖z‖_{1,r^-}.
Therefore, we obtain the bound

|B_u[u,w,z]| ≤ ‖z‖_{2(r^-)'}‖w‖_{2(r^-)'}‖u‖_{1,r^-} + ‖u‖_{2(r^-)'}‖w‖_{2(r^-)'}‖z‖_{1,r^-}.

Now, for each pair n,m ∈ ℕ, we call a triple (U^{n,m}, P^{n,m}, C^{n,m}) ∈ V^n × M^n_0 × (Y^m + c_d) a discrete solution of the Galerkin approximation if it satisfies

∫_Ω S(C^{n,m}, DU^{n,m})·DV + (1/k)|U^{n,m}|^{t−2}U^{n,m}·V + B_u[U^{n,m}, U^{n,m}, V] − ⟨div V, P^{n,m}⟩ = ⟨f,V⟩ for all V ∈ V^n,
∫_Ω Q div U^{n,m} = 0 for all Q ∈ M^n,
∫_Ω q_c(C^{n,m}, ∇C^{n,m}, DU^{n,m})·∇Z + B_c[C^{n,m}, U^{n,m}, Z] = 0 for all Z ∈ Y^m,

where c_d ∈ W^{1,s}(Ω) with s > 3 and f ∈ (W^{1,r^-}_0(Ω)^3)^*.

If we restrict the test functions V to V^n_div, the above problem is transformed into the following one: find (U^{n,m}, C^{n,m}) ∈ V^n_div × (Y^m + c_d) satisfying

∫_Ω S(C^{n,m}, DU^{n,m})·DV + (1/k)|U^{n,m}|^{t−2}U^{n,m}·V + B_u[U^{n,m}, U^{n,m}, V] = ⟨f,V⟩ for all V ∈ V^n_div,
∫_Ω q_c(C^{n,m}, ∇C^{n,m}, DU^{n,m})·∇Z + B_c[C^{n,m}, U^{n,m}, Z] = 0 for all Z ∈ Y^m.

If 3/2 < r^-, the existence of the discrete solution pair (U^{n,m}, C^{n,m}) ∈ V^n_div × (Y^m + c_d) follows from a fixed-point argument combined with an iteration scheme; let us briefly summarize the proof. Let {W_i}^{N_n}_{i=1} be a basis of V^n_div ⊂ W^{1,∞}_0(Ω)^3 such that ∫_Ω W_i·W_j = δ_{ij}, and let {z_j}^{N_m}_{j=1} be a basis of Y^m ⊂ W^{1,2}_0(Ω) such that ∫_Ω z_i z_j = δ_{ij}. Then, for fixed n,m ∈ ℕ, we define the Galerkin approximations

U^{n,m} := ∑^{N_n}_{i=1} α_i^{n,m} W_i, C^{n,m} := ∑^{N_m}_{i=1} β_i^{n,m} z_i + c_d,

which satisfy (<ref>)–(<ref>). First we define C^{n,m}_1 := c_d ∈ Y^m + c_d. Then, for any ℓ ∈ ℕ, we define U^{n,m}_ℓ ∈ V^n_div as a solution of the finite-dimensional problem

∫_Ω S(C^{n,m}_ℓ, DU^{n,m}_ℓ)·DV + (1/k)|U^{n,m}_ℓ|^{t−2}U^{n,m}_ℓ·V + B_u[U^{n,m}_ℓ, U^{n,m}_ℓ, V] = ⟨f,V⟩ for all V ∈ V^n_div,

and C^{n,m}_ℓ ∈ Y^m + c_d as a solution of the finite-dimensional problem

∫_Ω q_c(C^{n,m}_ℓ, ∇C^{n,m}_ℓ, DU^{n,m}_{ℓ−1})·∇Z + B_c[C^{n,m}_ℓ, U^{n,m}_{ℓ−1}, Z] = 0 for all Z ∈ Y^m.

The existence of the functions U^{n,m}_ℓ ∈ V^n_div and C^{n,m}_ℓ ∈ Y^m + c_d is easily shown by means of Brouwer's fixed point theorem. Furthermore, for each n,m ∈ ℕ, the sequences of functions {U^{n,m}_ℓ}^∞_{ℓ=1} and {C^{n,m}_ℓ}^∞_{ℓ=1} satisfy the uniform bounds

‖U^{n,m}_ℓ‖_{1,r^-} + ‖U^{n,m}_ℓ‖_t ≤ C_1, ‖∇C^{n,m}_ℓ‖_2 ≤ C_2,

where C_1 and C_2 are positive constants independent of ℓ. Thus, by the Bolzano–Weierstrass theorem, we deduce the existence of limits U^{n,m} ∈ V^n_div and C^{n,m} ∈ Y^m + c_d of (subsequences of) {U^{n,m}_ℓ} and {C^{n,m}_ℓ}, respectively, as ℓ → ∞, and these limits form a solution pair of the Galerkin approximation (<ref>), (<ref>). For further details, see <cit.>.

This establishes the existence of a solution to the Galerkin approximation (<ref>), (<ref>) for any fixed pair of integers n,m ∈ ℕ. The existence of a discrete solution triple for (<ref>)–(<ref>) then follows by the discrete inf-sup condition stated in Proposition <ref>, and we write P^{n,m} = ∑^{Ñ_n}_{i=1} γ^{n,m}_i y_i, where {y_i}^{Ñ_n}_{i=1} is a basis of M^n_0.

We are now ready to state and prove the main theorem of this section. It asserts that, as n,m → ∞, the sequence of discrete solution triples converges to a weak solution triple of the regularized problem.

Theorem. Suppose that Ω ⊂ ℝ^3 is a convex polyhedral domain and c_d ∈ W^{1,s}(Ω) for some s > 3. Assume that r: ℝ_{≥0} → ℝ_{≥0} is a Hölder-continuous function with r^- > 3/2 > t/(t−2), t > 2, and let f ∈ (W^{1,r^-}_0(Ω)^3)^*. Let (U^{n,m}, P^{n,m}, C^{n,m}) ∈ V^n_div × M^n_0 × (Y^m + c_d) be a discrete solution triple defined by the finite element approximation (<ref>)–(<ref>).
Then, the following convergence results hold.

* At the first level of the Galerkin approximation, there exists a subsequence (not relabelled) with respect to m such that, as m → ∞,

U^{n,m} → U^n, DU^{n,m} → DU^n, P^{n,m} → P^n (strongly, e.g., in L^2(Ω)), C^{n,m} ⇀ C^n weakly in W^{1,2}(Ω),

where U^n ∈ V^n and P^n ∈ M^n_0.

* At the second level of the Galerkin approximation, there exists a subsequence (not relabelled) with respect to n such that, as n → ∞,

U^n ⇀ u weakly in W^{1,r^-}_0(Ω)^3,
P^n ⇀ p weakly in L^{j'}(Ω) for all j > max{r^+,2},
C^n ⇀ c weakly in W^{1,2}(Ω),
C^n → c strongly in C^{0,α}(Ω̄),

where (u,p,c) = (u^k,p^k,c^k) is a weak solution triple of the regularized problem (<ref>)–(<ref>).

Proof of Theorem <ref>

The limit m→∞

First, we derive some uniform bounds, independent of m ∈ ℕ, and let m tend to infinity by using weak compactness in the corresponding reflexive spaces. For simplicity, we write

S^{n,m} := S(C^{n,m}, DU^{n,m}), q_c^{n,m} := q_c(C^{n,m}, ∇C^{n,m}, DU^{n,m}).

We test with U^{n,m} ∈ V^n_div in (<ref>); then, thanks to the skew-symmetry of B_u[·,·,·], we have

∫_Ω S^{n,m}·∇U^{n,m} + (1/k)|U^{n,m}|^t = ∫_Ω S^{n,m}·DU^{n,m} + (1/k)|U^{n,m}|^t = ⟨f, U^{n,m}⟩,

where the first equality uses the symmetry of S^{n,m}. By (<ref>) and Young's inequality, we have

∫_Ω |DU^{n,m}|^{r(C^{n,m})} + |S^{n,m}|^{r'(C^{n,m})} + |U^{n,m}|^t ≤ C_1,

where C_1 is independent of m. Next, we test with C^{n,m} − c_d ∈ Y^m in (<ref>) and deduce that

∫_Ω q_c(C^{n,m}, ∇C^{n,m}, DU^{n,m})·∇(C^{n,m} − c_d) = B_c[C^{n,m}, U^{n,m}, c_d].

By (<ref>), (<ref>), Hölder's inequality and Young's inequality,

‖∇C^{n,m}‖²_2 ≤ C∫_Ω |∇C^{n,m}||∇c_d| + B_c[C^{n,m}, U^{n,m}, c_d] ≤ ε‖∇C^{n,m}‖²_2 + C(ε)‖∇c_d‖²_2 + B_c[C^{n,m}, U^{n,m}, c_d].

Then, by Sobolev embedding,

B_c[C^{n,m}, U^{n,m}, c_d] = (1/2)∫_Ω c_dU^{n,m}·∇C^{n,m} − (1/2)∫_Ω C^{n,m}U^{n,m}·∇c_d
= ∫_Ω c_dU^{n,m}·∇C^{n,m} + (1/2)∫_Ω C^{n,m}(div U^{n,m})c_d
≤ ‖c_d‖_∞‖U^{n,m}‖_2‖∇C^{n,m}‖_2 + (1/2)‖c_d‖_∞‖C^{n,m}‖_{(r^-)'}‖div U^{n,m}‖_{r^-}
≤ C‖U^{n,m}‖_{1,r^-}‖∇C^{n,m}‖_2 + C‖U^{n,m}‖_{1,r^-}‖∇C^{n,m}‖_{3r^-/(4r^-−3)}
≤ C(ε)‖U^{n,m}‖²_{1,r^-} + ε‖∇C^{n,m}‖²_2.

Hence, by (<ref>) and (<ref>),

∫_Ω |∇C^{n,m}|² + |q_c^{n,m}|² ≤ C_2,

where C_2 is independent of m.

Next, we derive a uniform bound on the pressure. By Proposition <ref> together with (<ref>), (<ref>) and the equivalence of norms on finite-dimensional spaces, we have

β_r‖P^{n,m}‖_{(r^+)'} ≤ sup_{0≠V∈V^n} ⟨div V, P^{n,m}⟩/‖V‖_{1,r^+}
≤ sup_{0≠V∈V^n} |∫_Ω S^{n,m}·DV|/‖V‖_{1,r^+} + C sup_{0≠V∈V^n} |B_u[U^{n,m},U^{n,m},V] − ⟨f,V⟩|/‖V‖_{1,r^-} + (1/k) sup_{0≠V∈V^n} |∫_Ω |U^{n,m}|^{t−2}U^{n,m}·V|/‖V‖_{1,r^-}
≤ C sup_{0≠V∈V^n} ‖S^{n,m}‖_{(r^+)'}‖V‖_{1,r^+}/‖V‖_{1,r^+} + C(n) sup_{0≠V∈V^n} (‖U^{n,m}‖²_{2(r^-)'} + ‖U^{n,m}‖^{t−1}_t + ‖f‖_{−1})‖V‖_{1,r^-}/‖V‖_{1,r^-},

where the last term on the second line has been absorbed using the equivalence of norms on the finite-dimensional space V^n. Therefore, by (<ref>), we deduce that ‖P^{n,m}‖_{(r^+)'} ≤ C(n).

Now we are ready to let m tend to infinity. By (<ref>) and (<ref>), together with the equivalence of norms on finite-dimensional spaces, we have |α^{n,m}| ≤ C(n) and |γ^{n,m}| ≤ C(n). Then, together with the uniform estimate (<ref>), we can extract (not relabelled) subsequences such that

α^{n,m} → α^n strongly in ℝ^{N_n}, γ^{n,m} → γ^n strongly in ℝ^{Ñ_n}, C^{n,m} ⇀ C^n weakly in W^{1,2}(Ω).

From (<ref>), (<ref>) and compact embedding, we have

U^{n,m} → U^n, DU^{n,m} → DU^n, P^{n,m} → P^n, C^{n,m} → C^n strongly in L^2(Ω).

By (<ref>) and (<ref>), note that U^n ∈ V^n_div and P^n ∈ M^n_0. Finally, from (<ref>), we can extract a further subsequence (not relabelled) such that C^{n,m} → C^n a.e. in Ω.

Note that, since S is continuous, by (<ref>) and (<ref>) we have

S(C^{n,m}, DU^{n,m}) → S(C^n, DU^n) a.e. in Ω.

Now, by (<ref>), for sufficiently large m ∈ ℕ we have |DU^{n,m}| < 1 + |DU^n|. Thus, by (<ref>), for sufficiently large m ∈ ℕ,

|S(C^{n,m}, DU^{n,m})| ≤ C|DU^{n,m}|^{r(C^{n,m})−1} + C ≤ C(1 + |DU^n|)^{r(C^{n,m})−1} + C ≤ C(1 + |DU^n|)^{r^+−1} + C,

and C(1 + |DU^n|)^{r^+−1} + C ∈ L^{(r^+)'}(Ω). Therefore, by the Dominated Convergence Theorem,

S^{n,m} → S^n := S(C^n, DU^n) strongly in L^{(r^+)'}(Ω)^{3×3}.

Furthermore, by (<ref>) and (<ref>), together with the Dominated Convergence Theorem,

K(C^{n,m}, |DU^{n,m}|) → K(C^n, |DU^n|) strongly in L^q(Ω) for all q ∈ (1,∞).

Therefore, together with (<ref>), we have

q_c^{n,m} ⇀ q_c^n := q_c(C^n, ∇C^n, DU^n) weakly in L^2(Ω)^3.
Now we are ready to pass to the limit m → ∞ in the Galerkin approximation (<ref>)–(<ref>). First, by (<ref>) and (<ref>),

B_u[U^{n,m}, U^{n,m}, V] → B_u[U^n, U^n, V] for all V ∈ V^n,
∫_Ω (1/k)|U^{n,m}|^{t−2}U^{n,m}·V → ∫_Ω (1/k)|U^n|^{t−2}U^n·V for all V ∈ V^n.

Furthermore, from (<ref>) and (<ref>),

∫_Ω S^{n,m}·DV → ∫_Ω S^n·DV for all V ∈ V^n, ⟨div V, P^{n,m}⟩ → ⟨div V, P^n⟩ for all V ∈ V^n.

Therefore, we have

∫_Ω S^n·DV + (1/k)|U^n|^{t−2}U^n·V + B_u[U^n, U^n, V] − ⟨div V, P^n⟩ = ⟨f,V⟩ for all V ∈ V^n.

Moreover, from (<ref>) and (<ref>),

∫_Ω Q div U^n = 0 for all Q ∈ M^n.

Next, let us investigate the limit of the concentration equation (<ref>). We fix an arbitrary Z ∈ W^{1,2}_0(Ω) and define Z^m := Π^m_Y Z ∈ Y^m. Thanks to (<ref>) and (<ref>),

‖C^{n,m}U^{n,m} − C^nU^n‖_2 ≤ ‖(U^{n,m} − U^n)C^{n,m}‖_2 + ‖U^n(C^{n,m} − C^n)‖_2 ≤ ‖U^{n,m} − U^n‖_∞‖C^{n,m}‖_2 + ‖U^n‖_∞‖C^{n,m} − C^n‖_2 → 0.

Also, thanks to (<ref>) and (<ref>),

‖Z^mU^{n,m} − ZU^n‖_2 ≤ ‖(U^{n,m} − U^n)Z^m‖_2 + ‖U^n(Z^m − Z)‖_2 ≤ ‖U^{n,m} − U^n‖_∞‖Z^m‖_2 + ‖U^n‖_∞‖Z^m − Z‖_2 → 0.

In other words,

C^{n,m}U^{n,m} → C^nU^n strongly in L^2(Ω)^3, Z^mU^{n,m} → ZU^n strongly in L^2(Ω)^3.

By (<ref>) and (<ref>),

|∫_Ω Z^mU^{n,m}·∇C^{n,m} − ∫_Ω ZU^n·∇C^n| ≤ ∫_Ω |Z^mU^{n,m} − ZU^n||∇C^{n,m}| + |∫_Ω ZU^n·(∇C^{n,m} − ∇C^n)| → 0.

Moreover, from (<ref>) and (<ref>),

|∫_Ω C^{n,m}U^{n,m}·∇Z^m − ∫_Ω C^nU^n·∇Z| ≤ ‖C^{n,m}U^{n,m}‖_2‖Z^m − Z‖_{1,2} + ‖Z‖_{1,2}‖C^{n,m}U^{n,m} − C^nU^n‖_2 → 0.

Therefore,

lim_{m→∞} B_c[C^{n,m}, U^{n,m}, Z^m] = B_c[C^n, U^n, Z].

Finally, from (<ref>),

∫_Ω q_c^{n,m}·∇Z^m → ∫_Ω q_c^n·∇Z as m → ∞.

Altogether, we have

∫_Ω q_c^n·∇Z + B_c[C^n, U^n, Z] = 0 for all Z ∈ W^{1,2}_0(Ω).

The limit n→∞

Now we derive further uniform estimates and let n tend to infinity. First, we test with U^n in (<ref>). Then, by (<ref>) and (<ref>), we have

∫_Ω S^n·DU^n + (1/k)|U^n|^t = ⟨f, U^n⟩.

By using (<ref>) and Young's inequality, we have

∫_Ω |DU^n|^{r(C^n)} + |S^n|^{r'(C^n)} + (1/k)|U^n|^t ≤ C_1,

where C_1 is independent of n, which yields

‖U^n‖^{r^-}_{1,r^-} + ‖S^n‖^{(r^+)'}_{(r^+)'} + (1/k)‖U^n‖^t_t ≤ C_1.

Next, we test with C^n − c_d in (<ref>), and by (<ref>) we obtain

∫_Ω q_c^n·∇C^n = ∫_Ω q_c^n·∇c_d + B_c[C^n, U^n, c_d].

From (<ref>), (<ref>), Hölder's inequality and Young's inequality we have

‖∇C^n‖²_2 ≤ C∫_Ω |∇C^n||∇c_d| + B_c[C^n, U^n, c_d] ≤ ε‖∇C^n‖²_2 + C(ε)‖∇c_d‖²_2 + B_c[C^n, U^n, c_d].

Furthermore, by Sobolev embedding,

B_c[C^n, U^n, c_d] = (1/2)∫_Ω c_dU^n·∇C^n − (1/2)∫_Ω C^nU^n·∇c_d = ∫_Ω c_dU^n·∇C^n + (1/2)∫_Ω C^n(div U^n)c_d
≤ ‖c_d‖_∞‖U^n‖_2‖∇C^n‖_2 + (1/2)‖c_d‖_∞‖C^n‖_{(r^-)'}‖div U^n‖_{r^-}
≤ C‖U^n‖_{1,r^-}‖∇C^n‖_2 + C‖U^n‖_{1,r^-}‖∇C^n‖_{3r^-/(4r^-−3)} ≤ C(ε)‖U^n‖²_{1,r^-} + ε‖∇C^n‖²_2.

Hence, from (<ref>) and (<ref>),

∫_Ω |∇C^n|² + |q_c^n|² ≤ C_2,

where C_2 is independent of n; thus ‖C^n‖²_{1,2} + ‖q_c^n‖²_2 ≤ C_2.

Now, since 3/2 > t/(t−2) (i.e., t > 6), by Sobolev embedding and the uniform estimates (<ref>) and (<ref>), for s > 3 sufficiently close to 3,

‖C^nU^n‖_s ≤ ‖C^n‖_6‖U^n‖_{6s/(6−s)} ≤ C‖C^n‖_{1,2}‖U^n‖_t ≤ C,

where C is independent of n. Also, for s > 3 sufficiently close to 3,

‖∇C^n·U^n‖_{3s/(s+3)} ≤ ‖∇C^n‖_2‖U^n‖_{6s/(6−s)} ≤ C‖C^n‖_{1,2}‖U^n‖_t ≤ C,

where C is independent of n. Therefore, we can apply Theorem <ref> with F = C^nU^n and g = ∇C^n·U^n. Hence, there exists an α_1 ∈ (0,1) such that

‖C^n‖_{C^{0,α_1}(Ω̄)} ≤ C_3.

Since C^{0,α_1}(Ω̄) is compactly embedded in C^{0,α̃_1}(Ω̄) for all α̃_1 ∈ (0,α_1), we have

C^n → c strongly in C^{0,α̃_1}(Ω̄),

which implies that

r∘C^n → r∘c strongly in C^{0,β_1}(Ω̄),

for some β_1 ∈ (0,1).

We now apply Proposition <ref>. For a given r^+ > 0, choose j > max{r^+, 2}. Then, since r^- > 3/2, we have W^{1,j}_0(Ω)^3 ↪ L^{2(r^-)'}(Ω)^3 by Sobolev embedding. Furthermore, since t/(t−2) < r^-, we have 2(r^-)' < t.
Now, from (<ref>) and (<ref>),

β_r‖P^n‖_{j'} ≤ sup_{0≠V∈V^n} ⟨div V, P^n⟩/‖V‖_{1,j}
≤ sup_{0≠V∈V^n} |∫_Ω S^n·DV + (1/k)|U^n|^{t−2}U^n·V + B_u[U^n,U^n,V] − ⟨f,V⟩|/‖V‖_{1,j}
≤ C‖S^n‖_{(r^+)'} + C(‖U^n‖²_t + ‖f‖_{−1}) + C‖U^n‖_{2(r^-)'}‖U^n‖_{1,r^-},

where we used ‖V‖_{1,r^+} ≤ C‖V‖_{1,j}, the embedding W^{1,j}_0(Ω)^3 ↪ L^{2(r^-)'}(Ω)^3, and the bound ‖U^n‖_{2(r^-)'} ≤ C‖U^n‖_t (recall that 2(r^-)' < t); the regularization term is bounded analogously, using (<ref>) for the fixed value of k. Hence, by (<ref>),

‖P^n‖_{j'} ≤ C_4,

where C_4 is independent of n.

Now, by (<ref>)–(<ref>), thanks to the reflexivity of the relevant spaces and compact embedding, we can extract (not relabelled) subsequences such that

U^n ⇀ u weakly in W^{1,r^-}_0(Ω)^3 ∩ L^t(Ω)^3,
U^n → u strongly in L^σ(Ω)^3 for all σ ∈ [1,t),
|U^n|^{t−2}U^n ⇀ |u|^{t−2}u weakly in L^{t/(t−1)}(Ω)^3,
C^n ⇀ c weakly in W^{1,2}(Ω),
C^n → c strongly in C^{0,α̃_1}(Ω̄),
P^n ⇀ p weakly in L^{j'}(Ω) for all j > max{r^+,2},
S^n ⇀ S̄ weakly in L^{(r^+)'}(Ω)^{3×3},
q_c^n ⇀ q̄_c weakly in L^2(Ω)^3.

Before proceeding further, we note that these limits, together with weak lower semicontinuity and (<ref>), in conjunction with Korn's inequality, imply that

∫_Ω |∇u|^{r(c)} + |S̄|^{r'(c)} ≤ C,

hence the limit function u is, in fact, contained in the space W^{1,r(c)}_0(Ω)^3; see <cit.> for the details of the proof of this.

Next, we show that the limit function u is pointwise divergence-free. For an arbitrary q ∈ C^∞_0(Ω), by (<ref>),

0 = ∫_Ω (Π^n_M q) div U^n = ∫_Ω (Π^n_M q − q) div U^n + ∫_Ω q(div U^n − div u) + ∫_Ω q div u.

The first term tends to zero by (<ref>), (<ref>), and the second term converges to zero by (<ref>). Therefore ∫_Ω q div u = 0 for all q ∈ C^∞_0(Ω), which implies that div u = 0 a.e. in Ω.

Now we identify the limit of the convective term B_u[·,·,·] as follows. For an arbitrary v ∈ W^{1,∞}_0(Ω)^3, define v^n := Π^n_div v ∈ V^n. Then, by (<ref>),

v^n → v strongly in W^{1,σ}_0(Ω)^3 for σ ∈ (1,∞).

By (<ref>),

U^n ⊗ U^n → u ⊗ u strongly in L^{1+ε}(Ω)^{3×3}, for some ε > 0.

Hence we can identify the second part of the convective term:

−∫_Ω (U^n⊗U^n)·∇v^n → −∫_Ω (u⊗u)·∇v as n → ∞.

Also, we assert that U^n·v^n → u·v strongly in L^{(r^-)'}(Ω). Indeed,

‖U^n·v^n − u·v‖_{(r^-)'} ≤ ‖(U^n − u)·v^n‖_{(r^-)'} + ‖u·(v^n − v)‖_{(r^-)'} ≤ ‖U^n − u‖_σ‖v^n‖_{t−ε} + ‖v^n − v‖_{t−ε}‖u‖_σ,

for some σ ∈ (1,∞); the first term tends to zero thanks to (<ref>), (<ref>), and the second term tends to zero by (<ref>). Therefore, since div u = 0, we have

∫_Ω (v^n⊗U^n)·∇U^n = −∫_Ω (U^n⊗U^n)·∇v^n − ∫_Ω (div U^n) U^n·v^n → −∫_Ω (u⊗u)·∇v as n → ∞.

Altogether, recalling the definition of B_u, we deduce that

lim_{n→∞} B_u[U^n, U^n, v^n] = −∫_Ω (u⊗u)·∇v.

Now we are ready to pass to the limit n → ∞ in the Navier–Stokes equations. Since Π^n_div is linear, by noting (<ref>), we have

⟨div v, P^n⟩ = ⟨div v^n, P^n⟩ + ⟨div(v − v^n), P^n⟩
= ∫_Ω S^n·Dv^n + (1/k)|U^n|^{t−2}U^n·v^n − ⟨f,v^n⟩ + B_u[U^n, U^n, v^n] + ⟨div(v − v^n), P^n⟩
→ ∫_Ω S̄·Dv + (1/k)|u|^{t−2}u·v + div(u⊗u)·v − ⟨f,v⟩,

where we have used (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>). Also, by (<ref>) again, ⟨div v, P^n⟩ → ⟨div v, p⟩. Collecting all the limits gives

∫_Ω S̄·Dψ + (1/k)|u|^{t−2}u·ψ + div(u⊗u)·ψ − ⟨div ψ, p⟩ = ⟨f,ψ⟩ for all ψ ∈ W^{1,∞}_0(Ω)^3.

With the same argument as above, we also have

∫_Ω S̄·Dψ + (1/k)|u|^{t−2}u·ψ + div(u⊗u)·ψ = ⟨f,ψ⟩ for all ψ ∈ W^{1,∞}_{0,div}(Ω)^3.

Note that, by Proposition <ref> and (<ref>), we have p ∈ L^{r'(c)}_0(Ω).

Now, let us investigate the limit of the convection-diffusion equation (<ref>). For an arbitrary but fixed z ∈ W^{1,2}_0(Ω), let {Z^n} ⊂ W^{1,∞}_0(Ω) be a sequence with Z^n → z strongly in W^{1,2}_0(Ω) (such a sequence exists by density; one may take, for instance, Z^n := Π^n z for a suitable quasi-interpolation operator). Thanks to (<ref>) and (<ref>),

‖C^nU^n − cu‖_2 ≤ ‖(C^n − c)U^n‖_2 + ‖c(U^n − u)‖_2 ≤ ‖C^n − c‖_∞‖U^n‖_2 + ‖c‖_∞‖U^n − u‖_2 → 0.

Moreover, by (<ref>), (<ref>) and Sobolev embedding,

‖Z^nU^n − zu‖_2 ≤ ‖(Z^n − z)U^n‖_2 + ‖z(U^n − u)‖_2 ≤ ‖Z^n − z‖_6‖U^n‖_3 + ‖z‖_6‖U^n − u‖_3 ≤ C‖Z^n − z‖_{1,2}‖U^n‖_3 + C‖z‖_{1,2}‖U^n − u‖_3 → 0.

In other words,

C^nU^n → cu strongly in L^2(Ω)^3, Z^nU^n → zu strongly in L^2(Ω)^3.

From (<ref>) and (<ref>),

|∫_Ω Z^nU^n·∇C^n − ∫_Ω zu·∇c| ≤ ∫_Ω |Z^nU^n − zu||∇C^n| + |∫_Ω zu·(∇C^n − ∇c)| → 0.

Therefore, as div u = 0 a.e. in Ω, we obtain

∫_Ω Z^nU^n·∇C^n → ∫_Ω zu·∇c = −∫_Ω cu·∇z as n → ∞.

Moreover, by (<ref>) and (<ref>),

|∫_Ω C^nU^n·∇Z^n − ∫_Ω cu·∇z| ≤ ‖C^nU^n‖_2‖Z^n − z‖_{1,2} + ‖C^nU^n − cu‖_2‖z‖_{1,2} → 0.

Altogether,

lim_{n→∞} B_c[C^n, U^n, Z^n] = −∫_Ω cu·∇z.

Finally, by (<ref>) and (<ref>),

∫_Ω q_c(C^n, ∇C^n, DU^n)·∇Z^n → ∫_Ω q̄_c·∇z as n → ∞.

Collecting the limits, we obtain

∫_Ω q̄_c·∇z − cu·∇z = 0 for all z ∈ W^{1,2}_0(Ω).

As we can see from (<ref>) and (<ref>), it remains to identify the limits: S̄ = S(c,Du) and q̄_c = q_c(c,∇c,Du). To this end, we require the following lemma.

Lemma. The sequences {U^n}_{n∈ℕ} and {C^n}_{n∈ℕ} satisfy

lim_{n→∞} ∫_Ω ((S(C^n,DU^n) − S(C^n,Du))·(DU^n − Du))^{1/4} = 0.

The detailed proof of Lemma <ref> is presented in Section 4.2 of <cit.>. Here, we briefly summarize its key steps, since we shall require a similar, but more involved, argument in the next section. The strategy is to decompose the integral into several terms and to estimate them separately. To this end, for arbitrary but fixed χ > 0, we introduce the matrix-truncation function T_χ: ℝ^{3×3} → ℝ^{3×3} by

T_χ(M) := M for |M| ≤ χ, T_χ(M) := χM/|M| for |M| > χ.

The essential step of the proof relies on a discrete Lipschitz truncation technique; in <cit.>, a version of the discrete Lipschitz truncation method in variable-exponent norms was presented (see Theorem 3.15 in <cit.>). Let U^n_j denote the discrete Lipschitz truncation of the function U^n. The most important and difficult part of the proof is to establish the estimate

lim_{χ→∞} lim_{j→∞} lim_{n→∞} ∫_Ω (S(C^n,DU^n) − S(C^n,T_χ(Du)))·(DU^n_j − T_χ(Du)) ≤ 0;

(see eq. (4.23) in <cit.>). The other terms arising from the decomposition can be easily estimated by using the uniform bound (<ref>), Hölder's inequality and the discrete Lipschitz truncation theorem, Theorem 3.15 in <cit.>.

To estimate (<ref>), we introduce the following discretely divergence-free approximations with zero trace on ∂Ω:

Ψ^n_j := ℬ^n(div U^n_j), Φ^n_j := U^n_j − Ψ^n_j,

where ℬ^n is a discrete Bogovskiĭ operator, defined in Section 3.4 of <cit.>. It is then clear that Φ^n_j has zero trace on ∂Ω and, by construction, Φ^n_j ∈ V^n_div. Moreover, it can easily be verified, using basic properties of the discrete Lipschitz truncation and the discrete Bogovskiĭ operator, that

Φ^n_j ⇀ u_j − ℬ(div u_j) =: Φ_j weakly in W^{1,σ}_0(Ω)^3, Φ^n_j → Φ_j strongly in L^σ(Ω)^3,

as n → ∞, where σ ∈ (1,∞) is arbitrary and u_j denotes the Lipschitz truncation of u. We can then rewrite the integrand in (<ref>) in terms of this approximation to obtain

∫_Ω (S(C^n,DU^n) − S(C^n,T_χ(Du)))·(DU^n_j − T_χ(Du))
= ∫_Ω S(C^n,DU^n)·(DΦ^n_j + DΨ^n_j) − ∫_Ω S(C^n,DU^n)·T_χ(Du) − ∫_Ω S(C^n,T_χ(Du))·(DU^n_j − T_χ(Du))
=: B^{n,1}_{χ,j} − B^{n,2}_{χ,j} − B^{n,3}_{χ,j}.

Now we use (<ref>) with V = Φ^n_j ∈ V^n_div and pass to the limit; thus we have, by (<ref>),

lim_{n→∞} ∫_Ω S^n·DΦ^n_j = −lim_{n→∞} B_u[U^n,U^n,Φ^n_j] − lim_{n→∞} ∫_Ω (1/k)|U^n|^{t−2}U^n·Φ^n_j + lim_{n→∞} ⟨f,Φ^n_j⟩
= ∫_Ω (u⊗u)·∇Φ_j − (1/k)|u|^{t−2}u·Φ_j + ⟨f,Φ_j⟩ = ∫_Ω S̄·DΦ_j.

Furthermore, with the help of the Lipschitz truncation, one can show that

lim_{n→∞} ∫_Ω S^n·DΨ^n_j ≤ (C/2^{j/r^+})^{γ(r^-,r^+)}, ∫_Ω S̄·Dℬ(div u_j) ≤ (C/2^{j/r^+})^{γ(r^-,r^+)}.

Altogether, we have

lim_{χ→∞} lim_{j→∞} lim_{n→∞} (B^{n,1}_{χ,j} − B^{n,2}_{χ,j} − B^{n,3}_{χ,j}) ≤ lim_{χ→∞} ∫_Ω (S̄ − S(c,T_χ(Du)))·(Du_j − T_χ(Du)).

The last limit is equal to zero by the Dominated Convergence Theorem. This completes the proof of (<ref>), and thereby of the most technical step in the proof of the lemma. Now we are ready to identify the limits.
on Ω, we obtain∫_ΩZ^n^n·∇ C^n→∫_Ωz·∇ c=-∫_Ωc·∇ zas n→∞.Moreover, by (<ref>) and (<ref>),|∫_ΩC^n^n·∇ Z^n-∫_Ωc·∇ z|≤C^n^n_2Z^n-z_1,2+C^n^n-c_2z_1,2→0.Altogether, we havelim_n→∞ B_c[C^n,^n,Z^n]=-∫_Ωc·∇ z.Finally, by (<ref>) and (<ref>), we have∫_Ω_c(C^n,∇ C^n,^n)·∇ Z^n→∫_Ω_c·∇ zas n→∞.By collecting all the limits, we obtain that∫_Ω_c·∇ z-c·∇ z=0∀z∈ W^1,2_0(Ω).As we can see from (<ref>) and (<ref>), what we now need to prove is the identification of the limits:=(c,) and _c=_c(c,∇ c, ).To this end, we require the following lemma.The sequences {^n}_n ∈ℕ and {C^n}_n ∈ℕ satisfy the following equality:lim_n→∞∫_Ω(((C^n,^n)-(C^n,))·(^n-))^1/4=0.The detailed proof of Lemma <ref> is presented in Section 4.2 in <cit.>. Here, we shall briefly summarize the key steps of the proof as we shall require a similar, but more involved, argument in the next section. The strategy is to decompose the integral into several terms and to estimate them separately. To this end, for arbitrary but fixed χ>0, we introduce the matrix-truncation function T_χ:^3× 3→^3× 3 by T_χ(M)={ [M for |M|≤χ,; χM/|M| for |M|>χ. ]. The essential step in the proof relies on using a discrete Lipschitz truncation technique. In<cit.> a version of the discrete Lipschitz truncation method in variable-exponent norms was presented: see Theorem 3.15 in <cit.> and let ^n_j denote the discrete Lipschitz truncation of the function of ^n.The most important and difficult part of the proof is to estimate the following term:lim_χ→ 0lim_j →∞lim_n →∞∫_Ω((C^n,^n)-(C^n,T_χ()))·(^n_j-T_χ())≤ 0;(see eq. (4.23) in <cit.>). The other terms arising from the decomposition can be easily estimated by using the uniform bound (<ref>), Hölder's inequality and the discrete Lipschitz truncation theorem, Theorem 3.15 in <cit.>.To estimate (<ref>), we introduce the following discretely divergence-free approximations with zero trace on ∂Ω:Ψ^n_j ℬ^n(div ^n_j), Φ^n_j ^n_j-Ψ^n_j.Here ℬ^n is a discrete Bogovskiĭ operator defined in Section 3.4 of <cit.>. It is then clear that Φ^n_j has zero trace on ∂Ω and, by construction, Φ^n_j∈^n_div. Moreover, it can be easily verified, by using basic properties of the discrete Lipschitz truncation and the discrete Bogovskiĭ operator, thatΦ^n_j ⇀_j-ℬ(div _j)Φ_jweaklyin W^1,σ_0(Ω)^3, Φ^n_j →Φ_jstronglyinL^σ(Ω)^3,as n→∞, where σ∈(1,∞) is arbitrary. We can then rewrite (<ref>) above in terms of this approximation to obtain∫_Ω((C^n,^n)-(C^n,T_χ()))·(^n_j-T_χ())=∫_Ω(C^n,^n)·(Φ^n_j+Ψ^n_j)-∫_Ω(C^n,^n)· T_χ()-∫_Ω(C^n,T_χ())·(^n_j-T_χ()) B^n,1_χ,j-B^n,2_χ,j-B^n,3_χ,j.Now we use (<ref>) with =Φ^n_j∈^n_div and pass to the limit; thus we have, by (<ref>), thatlim_n→∞∫_Ω^n·Φ^n_j =-lim_n→∞B_u[^n,^n,Φ^n_j]-∫_Ω1/k|^n|^t-2^n·Φ^n_j+lim_n→∞⟨f,Φ^n_j⟩=∫_Ω(⊗)·∇Φ_j-1/k||^t-2·Φ_j+⟨f,Φ_j⟩=∫_Ω·Φ_j. Furthermore, with the help of Lipschitz truncation, we can show thatlim_n→∞∫_Ω^n·Ψ^n_j ≤(C/2^j/r^+)^γ(r^-,r^+), ∫_Ω·ℬ(div _j) ≤(C/2^j/r^+)^γ(r^-,r^+).Altogether, we havelim_χ→∞lim_j→∞lim_n→∞(B^n,1_χ,j-B^n,2_χ,j-B^n,3_χ,j)≤lim_χ→∞∫_Ω(-(c,T_χ()))·(_j-T_χ())The last limit is equal to zero by using the Dominated Convergence Theorem. That completes the proof of (<ref>), and thereby also of the most technical step in the proof of the lemma. Now we are ready to identify the limits. 
In the above lemma, since the integrand is nonnegative, (<ref>) also holds with Ω replaced by the set Q_γ⊂Ω defined byQ_γ{x∈Ω:||≤γ},with a givenγ>0; thus, from the sequence of integrands featuring in(<ref>), we can extract a subsequence (again not relabelled), which converges to zero almost everywhere in Q_γ. Then, by Egoroff's Theorem, for an arbitrary ε>0, there exists a subset Q^ε_γ⊂ Q_γ⊂Ω satisfying |Q_γ∖ Q^ε_γ|<ε, where the convergence ofintegrands is uniform. Note that, thanks to the choice of Q^ε_γ, we have lim_γ→∞lim_ε→0|Ω∖ Q^ε_γ|= lim_γ→∞lim_ε→0[ |Ω∖ Q_γ| + |Q_γ∖ Q^ε_γ| ]= 0. Moreover, we have from the uniform convergence of the integrands thatlim_n→∞∫_Q^ε_γ((C^n,^n)-(C^n,))· (^n-)=0.Sinceis bounded on Q^ε_γ, by the Dominated Convergence Theorem we have (C^n,)→(c,) strongly in L^q(Ω)^3× 3 for any q∈[1,∞). Hence, from the above L^q-convergence, (<ref>), and (<ref>), we obtainlim_n→∞∫_Q^ε_γ(C^n,^n)·(^n-)=0.Thus, by the boundedness ofon Q^ε_γ and(<ref>), we havelim_n→∞∫_Q^ε_γ(C^n,^n)·^n=∫_Q^ε_γ·.Now, let B∈ L^∞(Q^ε_γ)^3× 3 be arbitrary but fixed. From the monotonicity (<ref>), (<ref>), the L^q-convergence of (C^n,B)→(c,B) and the weak convergence (<ref>), we have0 ≤lim_n→∞∫_Q^ε_γ((C^n,^n)-(C^n,B))·(^n-B)=∫_Q^ε_γ·(-B)-∫_Q^ε_γ(c,B)·(-B)=∫_Q^ε_γ(-(c,B))·(-B).Now we are ready to use Minty's trick. First, we choose B=±λA with λ>0 and A∈ L^∞(Q^ε_γ)^3× 3. Then, passing to the limit λ→0, the continuity ofgives us ∫_Q^ε_γ(-(c,))·A=0.Hence, we have that=(c,) a.e. on Q^ε_γ.Now we pass ε→0 and then γ→∞ to conclude that=(c,) a.e. on Ω. Finally, sinceis strictly monotonic and C^n→ c in C^0,α̃_1(Ω), by (<ref>) we deduce that^n→ a.e. on Ω.By the Dominated Convergence Theorem, with (<ref>), (<ref>) and (<ref>), we obtain that_c(C^n,∇ C^n,^n)⇀_c(c,∇ c, )weaklyinL^2(Ω)^3.Therefore, by the uniqueness of the weak limit, we can identify_c=_c(c,∇ c,).Proof of Theorem <ref> Minimum and maximum principles Before we proceed, let us prove minimum and maximum principles for the concentration. Let φ^k_1=(c^k-min_x∈∂Ωc_d)_- and φ^k_2=(c^k-max_x∈∂Ωc_d)_+. Since c^k=c_d on ∂Ω, it is clear that φ^k_1,φ^k_2∈ W^1,2_0(Ω), so we can test with φ^k_1 and φ^k_2 in (<ref>). Therefore, we have-∫_Ω^kc^k·∇φ^k_1+∫_Ω_c∇φ^k_1=0, -∫_Ω^kc^k·∇φ^k_2+∫_Ω_c∇φ^k_2=0.We first consider (<ref>). From (<ref>) with integration by parts we obtain∫_Ω^-^k·∇ c^kφ^k_1+∫_Ω^-C|∇ c^k|^2≤0,where Ω^-={x∈Ω:φ^k_1(x)<0}, since div ^k=0 and ^k=0 on ∂Ω. By using the fact that ∇ c^k=∇φ^k_1 on Ω^- and the extension of ∇ c^k from Ω^- to the whole domain Ω by using the negative part, we have∫_Ω^k·∇φ^k_1 φ^k_1+∫_ΩC|∇φ^k_1|^2≤0.Note that∫_Ω^k·∇φ^k_1φ^k_1=1/2∫_Ω^k·∇|φ^k_1|^2=-1/2∫_Ω(div ^k)|φ^k_1|^2=0,and thus,φ^k_1=(c^k-min_x∈∂Ωc_d)_-=constanta.e.inΩ.In the same way, we can also show thatφ^k_2=(c^k-max_x∈∂Ωc_d)_+=constanta.e.inΩ.By combining the above results we finally obtain thatmin_x∈∂Ωc_d≤ c^k≤max_x∈∂Ωc_da.e. in Ω.The limit k→∞ First, note that by weak lower semicontinuity of the norm-function, and (<ref>), (<ref>) and (<ref>), we obtain the following uniform estimates, independent of k∈ℕ:^k^r^-_1,r^-+(c^k,^k)^(r^+)'_(r^+)'+1/k^k^t_t≤ C_1, c^k^2_1,2+_c(c^k,∇ c^k,^k)^2_2≤ C_2, p^k^j'_j'≤ C_3,for some positive constants C_1, C_2 and C_3, which are independent of k∈ℕ.Now, since r^->3/2, by the min/max principle (<ref>), Sobolev embedding and the uniform estimate (<ref>), for s>3 sufficiently close to 3,c^k^k_s≤c^k_∞^k_s≤ C^k_1,r^-≤ C. Therefore, we can again apply Theorem <ref> with F=c^k^k and g=0. 
Hence, there exists an α_2∈(0,1) such thatc^k_C^0,α_2(Ω)≤ C_4,for some positive constant C_4 independent of k∈ℕ. Since C^0,α_2(Ω)↪↪ C^0,α̃_2(Ω) for all α̃_2∈(0,α_2), we havec^k→ cstrongly in C^0,α̃_2(Ω),which implies thatr∘ c^k→ r∘ cstrongly in C^0,β_2(Ω),for some β_2∈(0,1).Therefore, by the reflexivity of the relevant spaces and compact embedding, there exists a subsequence (not relabelled) such that^k⇀weaklyinW^1,r^-_0,div(Ω)^3,^k→stronglyinL^2(1+ε)(Ω)^3,c^k⇀ c weaklyinW^1,2(Ω),c^k→ c stronglyinC^0,α̃_2(Ω),p^k⇀ p weaklyinL^j'(Ω)∀j>max{r^+,2}, (c^k,^k)⇀weaklyinL^(r^+)'(Ω)^3× 3, _c(c^k,∇ c^k, ^k)⇀_c weaklyinL^2(Ω)^3. Again, by the weak lower semicontinuity of norms, (<ref>) and (<ref>) together with Korn's inequality, we have that∫_Ω|∇|^r(c)+||^r'(c)≤ C,and thus the weak solutionis in the desired space W^1,r(c)_0(Ω)^3.Now we shall let k→∞ in (<ref>), with ∈ W^1,∞_0(Ω)^3 chosen arbitrarily. By (<ref>),^k⊗^k→⊗strongly in L^1+ε(Ω)^3×3.Thus, we can identify the limit of the convective term-∫_Ω(^k⊗^k)·∇→-∫_Ω(⊗)·∇as k→∞,∀ ∈ W^1,∞_0(Ω)^3. Next, by (<ref>), we have that1/k^k^t-1_t→0as k→∞.Therefore, we have1/k|∫_Ω|^k|^t-2^k·|≤1/k^k^t-1_t_t →0as k→∞,∀ ∈ W^1,∞_0(Ω)^3. We recall from the identification asserted in (<ref>) that =(c,) a.e. on Ω; more precisely, with the index k reinstated in our notation, ^k=(c^k,^k) a.e. on Ω. Hence, from (<ref>) and (<ref>), we obtain⟨div ,p^k⟩→⟨div ,p⟩and∫_Ω^k·→∫_Ω·as k→∞,∀ ∈ W^1,∞_0(Ω)^3. Altogether, we have∫_Ω·+(⊗)·∇-⟨div ,p⟩=⟨f,⟩∀ ∈ W^1,∞_0(Ω)^3. Furthermore, it is clear that∫_Ω·+(⊗)·∇=⟨f,⟩∀ ∈ W^1,∞_0,div(Ω)^3. Note that by Proposition <ref> and (<ref>) we havep∈ L^r'(c)_0(Ω). Now, let us investigate the limit of the concentration equation (<ref>). Let us choose an arbitrary, but fixed, z∈ W^1,2_0(Ω). By (<ref>) and (<ref>),c^k^k-c_2 ≤(c^k-c)^k_2+c(^k-)_2 ≤c^k-c_∞^k_2+c_∞^k-_2→0.In other words,c^k^k→ cstrongly in L^2(Ω)^3.Hence we have∫_Ωc^k^k·∇ z→∫_Ωc·∇ z. Recalling the identification (<ref>) and reinstating the index k, we have _c^k:=_c(c^k,∇ c^k, ^k); hence, by (<ref>), we get ∫_Ω_c^k·∇ z→∫_Ω_c·∇ zas k→∞. By collecting the above limits, we deduce that∫_Ω_c·∇ z-c·∇ z=0∀z∈ W^1,2_0(Ω). As a final step, we need to identify the limits:=(c,) and _c=_c(c,∇ c, ).To this end, analogously as before, we need to prove the following equality:lim_k→∞∫_Ω(((c^k,^k)-(c^k,))·(^k-))^1/4=0.The proof is similar to the one presented in the previous section. The only part of the argument that we shall give here in detail is the proof of the analogue of (<ref>)–(<ref>) since we now have a different weak formulation at this level. The other parts of the proof proceed as Section 4.2 in <cit.>.First we define a divergence-free approximation with zero trace as follows:Φ^k_j^k_j-ℬ(div ^k_j),where ℬ is the Bogovskiĭ operator introduced in Theorem <ref>. Then, as before, we haveΦ^k_j ⇀_j-ℬ(div _j)Φ_jweaklyin W^1,σ_0(Ω)^3, Φ^k_j →Φ_jstronglyinL^σ(Ω)^3,as k→∞, where σ∈(1,∞) is arbitrary.Let us further define χ^n,k_1,jΠ^n_divΦ^k_j. Then, by (<ref>),χ^n,k_1,j→Φ^k_jstrongly in W^1,σ_0(Ω)^3, ∀ σ∈(1,∞).Now, by (<ref>),∫_Ω^n·χ^n,k_1,j=-B_u[^n,^n,χ^n,k_1,j]-∫_Ω1/k|^n|^t-2^n·χ^n,k_1,j+⟨f,χ^n,k_1,j⟩.If we take n→∞ in the above equality, we have∫_Ω(c^k,^k)·Φ^k_j=∫_Ω(^k⊗^k)·∇Φ^k_j-1/k|^k|^t-2^k·Φ^k_j+⟨f,Φ^k_j⟩. 
Next, we define χ^n,k_2,jΠ^n_divΦ_j, and then we have χ^n,k_2,j→Φ_j strongly in W^1,σ_0(Ω)^3, ∀ σ∈(1,∞). Again, by (<ref>), ∫_Ω^n·χ^n,k_2,j=-B_u[^n,^n,χ^n,k_2,j]-∫_Ω1/k|^n|^t-2^n·χ^n,k_2,j+⟨f,χ^n,k_2,j⟩. If we take n→∞, we have ∫_Ω(c^k,^k)·Φ_j=∫_Ω(^k⊗^k)·∇Φ_j-1/k|^k|^t-2^k·Φ_j+⟨f,Φ_j⟩. Subsequently, if we pass k to infinity, we obtain ∫_Ω·=∫_Ω(⊗)·∇Φ_j+⟨f,Φ_j⟩. Therefore, from (<ref>) and (<ref>), we deduce that lim_k→∞∫_Ω(c^k,^k)·Φ^k_j =lim_k→∞∫_Ω(^k⊗^k)·∇Φ^k_j-1/k|^k|^t-2^k·Φ^k_j+lim_k→∞⟨f,Φ^k_j⟩=∫_Ω(⊗)·∇Φ_j+⟨f,Φ_j⟩=∫_Ω·Φ_j, which is the desired analogue of (<ref>)–(<ref>) corresponding to the limit k→∞, and thereby the proof of (<ref>) has been completed. We can then use the same argument as the one we employed in the previous section to identify = ^k=(c^k,^k) and _c=_c^k = _c(c^k,∇ c^k,^k) (cf. (<ref>) and (<ref>), with the index k reinstated), and thus we can again identify =(c,), _c=_c(c,∇ c,). That completes the proof of the convergence theorem.§ CONCLUSIONS We have considered a system of nonlinear partial differential equations modelling the motion of an incompressible chemically reacting generalized Newtonian fluid in three space dimensions. The governing system consists of a steady convection-diffusion equation for the concentration and a generalized steady power-law-type fluid flow model for the velocity and the pressure, where the viscosity depends on both the shear-rate and the concentration through a concentration-dependent power-law index. We performed a rigorous convergence analysis of a finite element approximation of a regularized counterpart of the model; specifically, we showed the convergence of the finite element method to a weak solution of the regularized model. We then proved that weak solutions of the regularized problem converge to a weak solution of the original problem.§ ACKNOWLEDGEMENTS Seungchan Ko's work was supported by the UK Engineering and Physical Sciences Research Council [EP/L015811/1].
Aggregation and Resource Scheduling in Machine-type Communication Networks: A Stochastic Geometry Approach Onel L. Alcaraz López, Student Member, IEEE, Hirley Alves, Member, IEEE, Pedro H. J. Nardelli, Matti Latva-aho, Senior Member, IEEE Onel L. Alcaraz López, Hirley Alves and Matti Latva-aho are with the Centre for Wireless Communications (CWC), University of Oulu, Finland. {onel.alcarazlopez,hirley.alves,pedro.nardelli,matti.latva-aho}@oulu.fi Pedro H. J. Nardelli is with the Laboratory of Control Engineering and Digital Systems, Lappeenranta University of Technology, Finland. [email protected] This work is partially supported by the Academy of Finland (Aka) (Grants n.303532, n.307492), SRC/Aka (n. 292854), and by the Finnish Funding Agency for Technology and Innovation (Tekes), Bittium Wireless, Keysight Technologies Finland, Kyynel, MediaTek Wireless, Nokia Solutions and Networks. ===================================================================================================================================== Data aggregation is a promising approach to enable massive machine-type communication (mMTC). This paper focuses on the aggregation phase, where a massive number of machine-type devices (MTDs) transmit to aggregators. By using non-orthogonal multiple access (NOMA) principles, we allow several MTDs to share the same orthogonal channel in our proposed hybrid access scheme. We develop an analytical framework based on stochastic geometry to investigate the system performance in terms of average success probability and average number of simultaneously served MTDs, under imperfect successive interference cancellation (SIC) at the aggregators, for two scheduling schemes: random resource scheduling (RRS) and channel-aware resource scheduling (CRS). We identify the power constraints on the MTDs sharing the same channel to attain a fair coexistence with purely orthogonal multiple access (OMA) setups. Then, power control coefficients are found so that these MTDs perform with similar reliability. We show that under high access demand, the hybrid scheme with CRS outperforms the OMA setup by simultaneously serving more MTDs with reduced power consumption. § INTRODUCTION Machine-type Communication (MTC) is typically cited as an integral use case for fifth generation (5G) cellular networks, where fully automatic data generation, exchange, processing and actuation among intelligent machines are on the agenda. With the rapid penetration of embedded devices, MTC is becoming the dominant communication paradigm for a wide range of emerging smart services, including healthcare, manufacturing, utilities, consumer goods and transportation.
Specifically, massive MTC (mMTC), as the name suggests, is about massive access by a large number of devices, that is, about providing wireless connectivity to an enormous number of often low-complexity, low-power machine-type devices (MTDs). Of course, the access is a major concern and a flourishing research area. Different methods have been proposed and addressed to provide more efficient access, e.g., access class barring <cit.>, prioritized random access <cit.>, backoff adjustment schemes <cit.>, delay-estimation based random access <cit.>, and distributed queuing <cit.>. Another promising way to deal with the massive connection problem comes from the concept of data aggregation. Instead of being directly connected to the access network, e.g., by equipping MTDs with their own Subscriber Identity Module (SIM) cards to have cellular connectivity, the idea is that MTDs may organize themselves locally, creating MTC area networks and exploiting short-range technologies. These MTC area networks may then connect to the core networks through MTC gateways or data aggregators. This is a key solution strategy to collect, process, and communicate data in MTC use cases with static devices, especially if the locations of the devices are known, such as smart utility meters <cit.> or video surveillance cameras <cit.>. The traffic from MTDs is first transmitted to the designated data aggregator, while the aggregator then relays the collected packets to the core network <cit.>, e.g., the base station (BS), thus reducing the congestion and the power consumption at the MTD side <cit.>. When aggregating such a huge number of MTDs, the density of the aggregators, although considerably smaller than the density of the MTDs, will obviously still be large, and the interference coming from the devices sharing the same resource could be significant. There is limited literature that characterizes the interference in mMTC with data aggregation. The analysis in <cit.> shows how the wireless channel, transmit power, and random deployment of data collectors affect the signal-to-interference-plus-noise ratio (SINR) distribution in random access networks with randomly deployed sensor nodes, and for some special cases, a simple form of the signal-to-noise ratio (SNR) or SINR distribution is found. An energy-efficient data aggregation scheme for a hierarchical MTC network is proposed in <cit.>, where the authors develop a coverage probability-based optimal data aggregation scheme for MTDs to minimize the energy density of the network. In <cit.>, a theoretical and numerical framework, which aims to assess, model and characterize the network energy consumption profile, is presented by exploiting stochastic geometry tools. By incorporating some intelligence in the aggregator, the network performance improves, as shown in <cit.> for resource scheduling strategies. Among them, only <cit.> considers a more realistic scenario with a multi-cell network, hence the inter-cell interference, which is a critical issue, is taken into account. The authors introduce a tractable two-phase network model for mMTC, where MTDs first transmit to their serving aggregators (aggregation phase) and then the aggregated data is delivered to BSs (relaying phase).
Key metrics such as the MTD success probability, average number of successful MTDs and probability of successful channel utilization are investigated for two resource scheduling schemes: RRS, where aggregators randomly allocate the limited resources to MTDs, and CRS, where aggregators allocate resources to the MTDs having better channel conditions. The authors show that, compared to the CRS scheme, the RRS scheme can achieve similar performance as long as the resources in the aggregation phase are not very limited. In turn, non-orthogonal multiple access (NOMA) has recently attracted a lot of attention as a promising technology for the coming 5G networks to significantly improve the spectral efficiency of mobile communication networks and/or to meet the demand for massive connectivity. The authors in <cit.> present NOMA as a promising solution to support a massive number of MTDs in cellular networks, while tackling the main practical challenges and future research directions. Actually, NOMA is considered in <cit.> when serving a massive number of devices in cellular-based MTC networks. Energy-efficient clustering and medium access control are investigated in <cit.> to minimize device energy consumption and prolong network battery lifetime, while the authors in <cit.> study the throughput and energy efficiency of the NOMA scenario with a random packet arrival model and derive the stability condition for the system to guarantee the performance. The key idea of NOMA is to exploit the power domain for multiple access, such that multiple users can be multiplexed at different power levels but at the same time/frequency/code. SIC is utilized to separate the superimposed messages at the receiver side <cit.>. In general, NOMA can be applied in both downlink and uplink. A downlink NOMA transmission system is studied in <cit.>, where the authors show that the outage performance of NOMA depends critically on the choices of the targeted data rates and allocated power in scenarios with randomly deployed users, while a dynamic power allocation scheme is proposed in <cit.> for downlink and uplink NOMA scenarios with two users with various quality of service requirements. A NOMA scheme for the uplink that allows more than one user to share the same subcarrier without any coding/spreading redundancy is proposed in <cit.>, where the authors establish an upper limit on the number of users per subcarrier to control the receiver complexity. Also in the uplink, the authors in <cit.> consider a massive uncoordinated NOMA scheme where devices have strict latency requirements and no retransmission opportunities are available. Finally, a good overview of NOMA, from its combination with multiple-input multiple-output (MIMO) technologies to cooperative NOMA, as well as the interplay between NOMA and cognitive radio, is provided in <cit.>. Despite all the recent advances, only a few papers focus on evaluating the performance of NOMA by using stochastic geometry, except for <cit.>.
However, the inter-cell interference, which is a pervasive problem in most of the existing wireless networks, is not explicitly considered in <cit.>, nor in many other works on NOMA. In contrast, the authors in <cit.> do consider the inter-cell interference when evaluating the performance of downlink NOMA in terms of coverage probability and average achievable rate, as well as in <cit.> for downlink and uplink NOMA scenarios. In fact, and to the best of our knowledge, <cit.> is the only work analyzing NOMA by means of stochastic geometry in uplink mMTC scenarios for large-scale networks. However, even though the authors deal with dense networks, aggregation architectures are not considered, and SIC is assumed perfect in the uplink. This paper aims at filling that gap by characterizing the interference, average success probability and average number of simultaneously served MTDs for uplink mMTC in a large-scale cellular network system overlaid with data aggregators. We focus on the aggregation phase, where the MTDs are allowed to share the same orthogonal channel, while the resource scheduling is implemented at the aggregator side as in <cit.> for OMA setups. The main contributions of this work can be listed as follows: * We introduce a hybrid access protocol, OMA-NOMA, for the aggregation phase of mMTC systems, and we develop a general analytical framework to investigate its performance in terms of average success probability and average number of simultaneously served MTDs. * We extend the resource scheduling schemes RRS and CRS, proposed in <cit.> to deal with the limited resources, to scenarios where several MTDs are allowed to share the same orthogonal channel. As expected, CRS fits our setup better since it easily allows performing adequate power control, while improving the system performance. Also, we find the power constraints on the MTDs sharing the same channel in order to attain a fair coexistence of our scheme with purely OMA setups, while power control coefficients are found too, so that these MTDs can perform with similar reliability. * We show that, even though the hybrid scheme could lead to a less reliable system with greater chances of outages per MTD, e.g., due to the additional intra-cluster interference, the number of simultaneously active MTDs could be significantly improved in high access demand scenarios, as long as the success probability does not decrease too much. In that sense, our scheme aims at providing massive connectivity in scenarios with high access demand, which is not covered by traditional OMA setups. Additionally, the CRS scheme requires an even lower average power consumption per orthogonal channel and per MTD than the OMA setup. * We attain approximate, yet accurate, expressions when analyzing the CRS scheme. Compared to the time-consuming Monte-Carlo simulations, which are even heavier for our hybrid scheme than for the purely OMA setup, our analytical derivations allow for fast computation. Also, imperfections when implementing SIC are incorporated into our proposed analytical framework. The remainder of the paper is organized as follows. Section <ref> presents the system model and assumptions. Sections <ref> and <ref> discuss the RRS and CRS scheduling schemes for our hybrid access protocol, respectively.
Section <ref> studies the power consumption of both schemes for OMA and the hybrid protocol. Section <ref> presents the numerical results, and finally Section <ref> concludes the paper. Notation: 𝔼[·] denotes expectation, while (A) and (A|B) are the probability of event A, and of A conditioned on B, respectively. |S| is the cardinality of set S, ||x|| is the Euclidean norm of vector x and mod(a,b) is the modulo operation. _xF_y is the generalized hyper-geometric function <cit.>, ψ(x) is the digamma function <cit.>, Γ(x) is the gamma function, while Q(a,x) and Beta(x,a,b) are the regularized incomplete gamma function <cit.> and the incomplete beta function <cit.>, respectively. ⌊·⌋ and ⌈·⌉ are the floor and ceiling functions, respectively. 𝐢=√(-1) and Im{z} is the imaginary part of z∈ℂ. f_X(x) and F_X(x) are the Probability Density Function (PDF) and Cumulative Distribution Function (CDF) of random variable (RV) X, respectively. X∼Exp(1) is an exponentially distributed RV with unit mean, e.g., f_X(x)=exp(-x) and F_X(x)=1-exp(-x); while Y∼Poiss(m̅) is a Poisson distributed RV with mean m̅, e.g., (Y=y)=(1/y!)m̅^yexp(-m̅) and F_Y(y)=Q(y+1,m̅).§ SYSTEM MODEL AND ASSUMPTIONS Consider a cellular network overlaid with data aggregators, which are spatially distributed according to a homogeneous Poisson point process (PPP), denoted Φ_a, with density λ_a. Each aggregator, which could function as an ordinary cellular user for a certain BS, serves multiple MTDs located nearby. Thus, the result is a cluster point process uniquely defined as: Φ⋃_𝐰∈Φ_a𝐰+ℬ^𝐰, where Φ_a is the parent PPP and ℬ^𝐰 denotes the offspring point process where each point at 𝐬∈ℬ^𝐰 is i.i.d. around the cluster center 𝐰∈Φ_a with distance distribution f_r_a(r_a)=2r_a/R_a^2, e.g., uniformly distributed in the disk region of radius R_a. K∼Poiss(m̅) is the instantaneous number of MTDs requiring service in each aggregator, e.g., the number of points in ℬ^𝐰; thus, the process is a Matérn cluster point process[A Matérn cluster point process is appropriate when considering either static or low-mobility MTDs being served by the aggregators. Some use cases for mMTC over cellular are: smart utility metering and industry automation <cit.>.]. Notice that each MTD is associated with a single aggregator even though it could be located within the aggregation areas of several aggregators. We focus on the uplink, where the MTDs across the entire network are served through the same set of orthogonal channels, 𝒩, available at each aggregator, with |𝒩|=N. Differently from <cit.>, here the same orthogonal channel could be used for more than one MTD. Some aggregators could be allocating one MTD per channel because the access demand is not so high, while some other aggregators could be allocating more MTDs per channel to face the increasing access demand; thus, we propose and assess a hybrid OMA-NOMA multiple access scenario. Notice that the same aggregator could have channels operating with only one MTD, while others are operating with more. The maximum number of users per orthogonal channel is L, where L=1 reduces to the OMA scenario analyzed in <cit.>, which is used here as a benchmark. We focus on the L=2 setup[This is for analytical tractability, but notice that even though some existing results show that NOMA with more devices may provide a better performance gain <cit.>, this may not be practical.
The reason is that, considering the processing complexity of SIC receivers, especially when SIC error propagation is taken into account, 2-user NOMA is actually more practical <cit.>.], although some of our results hold for any L≥1. Notice that for L>1 there are both: inter-cluster interference, e.g., interference from MTDs in the serving zone of other aggregators; and intra-cluster interference, e.g., interference from MTDs within the serving area of the same aggregator. After aggregating the MTDs' data, each aggregator relays the entire information to its associated BS. However, the focus of our current work is on the aggregation phase and we assume that this phase occurs synchronously in all aggregators. Fig. <ref> shows a snapshot of the considered network model. The silent MTDs are those left out of the N·L available resources being used by the active MTDs. We assume the quasi-static fading channel model, where the channel power coefficients, q∈{h,g}, are exponentially distributed with unit mean, e.g., Rayleigh fading. q=h denotes the fading experienced by the signals coming from within the same serving zone, while q=g denotes the fading experienced by the inter-cluster interfering signals. The instantaneous received power at the receiver side is thus given by p_tqr^-α, where p_t is the transmit power from the transmitter, r is the distance between the receiver and transmitter, and α represents the path-loss exponent. Full channel state information (CSI) is assumed at the receiver side, as in <cit.>, in order to obtain benchmark results, and all MTDs are assumed to use statistical full inversion power control <cit.>, which guarantees a uniform user experience while saving valuable energy, with receiver sensitivity ρ. Since here we analyze an interference-limited scenario, ρ does not impact the performance of the network. The aggregators implement the resource scheduling according to one of the schemes presented in the following sections, and the MTDs being considered are those with access granted to the aggregators, since the random access in the network is assumed to be already performed[There are some solutions using NOMA for the random access stage as well, e.g., <cit.>. In fact, the work in <cit.> proposes a NOMA scheme where the devices transmit their messages over randomly selected channels, while the random access and data transmission phases are combined. Readers will realize that our resource scheduling schemes can be easily incorporated into that strategy to improve the overall system performance; however, the details of such an implementation are outside the scope of this work.], as in <cit.>.§ RRS FOR THE HYBRID ACCESS Herein we explore the RRS scheduling scheme, for which the CSI is only required at the aggregators when decoding the information and not for resource scheduling. Due to its simplicity, this scheme is used as a benchmark when compared to a more evolved strategy discussed in Section <ref>, where the benefits of the CSI acquisition are exploited also for resource allocation. In Subsection <ref> we attain the success probability for each MTD in a given channel, which depends on the Laplace transform of the inter-cluster interference. An accurate approximation of the latter is given, which is fundamental to efficiently evaluate the success probability. In Subsection <ref> we characterize the overall system performance. Under the RRS scheme, N out of the K instantaneous MTDs requiring transmissions are independently and randomly chosen and matched, one-to-one, with the channels in 𝒩.
If K≤N, all MTDs get channel resources, and N-K channels will even remain unused. Otherwise, if K>N, the channel allocation is executed again by allowing the remaining MTDs to share channels with the already served MTDs. This process is executed repeatedly until all the MTDs are allocated or the maximum number of MTDs per channel, L, is reached for all the channels. The Probability Mass Function (PMF) of the number of MTDs allocated to the same channel is given in (<ref>) at the top of the page. See Appendix <ref>. The point process of the active MTDs on a certain channel n∈𝒩 is obviously a subset of Φ, defined in (<ref>), and can be defined as Φ_n⋃_𝐰∈Φ_a𝐰+ℬ^𝐰_n, where ℬ^𝐰_n⊆ℬ^𝐰 denotes the offspring point process with instantaneous number of points around the cluster center 𝐰∈Φ_a obeying (<ref>). Thus, the generating function of the number of active MTDs on a certain channel in one cluster is G_0(z)=∑_u=0^Lc_uz^u, where c_u=(U=u), while c̅=∑_u=1^Luc_u is the mean number of active MTDs in each channel of a certain cluster. §.§ MTD success probability for L=2 The intra-cluster interference coming from the MTDs sharing the same channel is handled with SIC[SIC, as in <cit.>, is a common assumption when dealing with NOMA.]. Thus, the SIR[Other than the real SIR at the receiving antenna of an aggregator, we are more interested in the SIR after SIC that can be used to calculate the success probability. Notice that in the L=2 case, the first decoded MTD does not need to perform interference cancellation and directly treats the signal from the second MTD as interference, while the second MTD has to first decode the signal from the other MTD to remove it, which is performed with an efficiency of 100×(1-μ)%.], SIR^r_j,u, of the jth MTD being decoded on a typical channel of a certain cluster, given the number of MTDs u sharing the same channel and the RRS scheme, is SIR^r_1,1 =h/I_r, SIR^r_1,2 =max(h',h”)/(I_r+min(h',h”)), SIR^r_2,2 =min(h',h”)/(I_r+μmax(h',h”)), where I_r=∑_x∈Φ_n'gr_a^αx^-α is the inter-cluster interference for the RRS scheme, with x∈Φ_n'⊂Φ_n denoting both the location and the interfering MTD which occupies a certain channel n in other clusters, and r_a is the distance between the MTD and its serving aggregator. Also, μ∈[0,1] is used to model the impact caused by imperfect SIC <cit.>, while h' and h” are the channel power coefficients of both MTDs sharing the channel when u=2. Notice that we cannot weight the power of the coexistent nodes on the same channel since the CSI is not exploited for resource scheduling when using the RRS scheme. Also, lim_I_r→ 0SIR_1,2^r is unbounded, but lim_I_r→ 0SIR_2,2^r≤1/μ since min(h',h”)≤max(h',h”); thus, the performance of the second MTD being decoded on the channel is strongly limited by the SIC imperfection parameter. Assuming a fixed-rate coding scheme where the receiver decodes successfully whenever SIR≥θ, where θ is the SIR threshold, e.g., an information rate of log_2(1+θ) (bits/symbol), we state the following theorem. The RRS success probability, p^r_j,u, of the jth MTD sharing a typical channel conditioned on u MTDs, is given by p^r_1,1 =ℒ_I_r(θ), p^r_1,2 ={[ 2/(1+θ)ℒ_I_r(θ)-(1-θ)/(1+θ)ℒ_I_r(2θ/(1-θ)),if 0≤θ<1; 2/(1+θ)ℒ_I_r(θ), if θ≥ 1 ]., p^r_2,2 ={[ (1-θμ)/(1+θμ)ℒ_I_r(2θ/(1-θμ)),if 0≤θμ<1; 0, if θμ≥ 1 ]., where ℒ_I_r(s) =exp(2πλ_a∫_0^∞r_𝐰(∑_u=0^Lc_uΥ(r_𝐰,s)^u-1)dr_𝐰), is the Laplace transform of RV I_r and Υ(r_𝐰,s) =1/π R_a^2∫_0^R_a∫_0^2πr_adωdr_a/1+sr_a^α(r_𝐰^2+r_a^2+2r_𝐰r_acos(ω))^-α/2. See Appendix <ref>.
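To make the channel-occupancy statistics behind c_u concrete, the following minimal Python sketch estimates c_u=(U=u) by Monte Carlo, directly simulating the round-based RRS allocation described above; the function name and the parameter values (m̅=60, N=30, L=2) are our own illustrative choices, not prescribed by the analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def occupancy_pmf(m_bar, N, L=2, runs=100_000):
    """Monte Carlo estimate of c_u = P(U = u), the PMF of the number of MTDs
    sharing a typical channel under RRS. After the repeated allocation rounds,
    every channel holds floor(k/N) MTDs and (k mod N) randomly chosen channels
    hold one extra MTD, with at most k = N*L MTDs served in total."""
    counts = np.zeros(L + 1)
    for _ in range(runs):
        k = min(rng.poisson(m_bar), N * L)   # K ~ Poiss(m_bar), at most N*L served
        base, extra = divmod(k, N)
        occ = np.full(N, base)
        occ[rng.choice(N, size=extra, replace=False)] += 1
        counts[occ[0]] += 1                  # channel 0 acts as the typical channel
    return counts / runs

c = occupancy_pmf(m_bar=60, N=30, L=2)
print("c_u:", c, "| mean occupancy c_bar:", np.dot(np.arange(c.size), c))
```

The empirical PMF can then be checked against the closed form of Lemma <ref> (see Appendix <ref>); as m̅ grows beyond 2N, the estimate concentrates on c_2≈1, the high access demand regime discussed later.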
Although for some pairs (θ,μ) it is possible to achieve a similar reliability for both MTDs sharing the same channel, e.g., p^r_1,2≈ p^r_2,2, this is not attained in general, and it could be a main drawback of the RRS scheme when certain homogeneity in the Quality of Service (QoS) is expected. Another interesting fact is that p^r_1,1=p^r_1,2=ℒ_I_r(θ) for θ=1, while p^r_2,2 would be smaller even when μ→ 0, e.g., ℒ_I_r(2θ). Additionally, notice that μ<1/θ is required such that the second user being decoded on a certain channel has chances to succeed. Thus, the greater the SIR threshold, θ, the greater the impact of imperfect SIC, μ. Expression (<ref>) for L≥ 2 is upper (almost surely) and lower bounded by ℒ_I_r(s) a.s.≤ℒ_I_r^up(s)=exp(-χ s^2/α∑_u=1^Lc_uu^2/α), ℒ_I_r(s) ≥ℒ_I_r^lo(s)=exp(-χc̅ s^2/α), where χ=(1/2)λ_aπ R_a^2Γ(1+2/α)Γ(1-2/α), while ℒ_I_r(s) ≈β_0ℒ_I_r^up(s)+β_1ℒ_I_r^lo(s) provides an approximation with β_0,β_1∈[0,1] and β_0+β_1=1. See Appendix <ref> for the derivation of (<ref>) and (<ref>), while (<ref>) is a weighted average of both bounds, thus attaining an approximation that will be more accurate than at least one of the bounds. The weighted average becomes relevant if we know a priori which of the bounds in (<ref>) and (<ref>) is more accurate for the setup being analyzed and we give it a greater weight, e.g., β_0≶β_1; otherwise the trivial choice β_0=β_1=0.5 is advisable, as we adopt here. Fig. <ref> shows the bounds and approximation attained in Theorem <ref> while comparing them with the exact value given in (<ref>). Specifically, Fig. <ref> shows the performance for α∈{3,5}, while we set c_2=1, which leads to scenarios where the bounds in (<ref>) and (<ref>) are the least tight since T_1 in (<ref>) has no impact on (<ref>), e.g., T_1=1. As noticed, the larger the α, the looser the bounds; however, the approximation performs very well whatever the setup, which is also apparent in Fig. <ref>. Notice that for relatively small N, the approximation in (<ref>) is accurate even when λ_a increases and the bounds are not tight. When N increases, c_0 and c_1 increase, which increases the contribution of T_1 in (<ref>), thus increasing the tightness of the bounds and the accuracy of the approximation. §.§ Overall Performance for L=2 The following two lemmas characterize the system performance in terms of the average over all MTD success probability and the average number of simultaneously served MTDs, since one of the main benefits of NOMA techniques is the possibility of offering service to a great number of devices simultaneously. Conditioned on using a certain channel, the average over all MTD success probability is p^r_succ=c_1/(1-c_0)p^r_1,1+c_2/(2(1-c_0))(p^r_1,2+p^r_2,2). Knowing that c_u=(U=u), it is straightforward to attain (<ref>). The average number of simultaneously served MTDs is given in (<ref>) at the top of the page, where A_1 =m̅Q(N+1,m̅)-exp(-m̅)m̅^N+1/N!, A_2 =exp(-m̅)(m̅^2N/(2N-1)!-m̅^N+1/N!)-(m̅-2N)(Q(2N,m̅)-Q(N+1,m̅)), A_3 =N(Q(N+1,m̅)+1).
Thus, when comparing with an L=1 setup under the same circumstances, e.g., with average number of simultaneously served MTDs being Np_succ^OMA, the condition required for the hybrid access scheme[In general, the required condition can be extended for any L, thus p_succ^HYB>1Lp_succ^OMA.] to overcome the OMA system (L=1) is that p^r_succ>12p_succ^OMA.Notice that the hybrid scheme with RRS could lead to a less reliable system with greater chances of outages per MTD, e.g., due to a larger inter-cluster interferenceand additional intra-cluster interference. However, the number of simultaneous active MTDs could be improved for high access demand scenarios, e.g., m̅∼ 2N, as long as the success probability does not decrease so much. § CRS FOR THE HYBRID ACCESS Herein we explore the CRS scheduling scheme, which contrary to the RRS scheme, strongly relies on the CSI for resource scheduling. Thus, the CRS scheme is more adjusted to NOMA scenarios, where CSI is a keystone for efficiently decoding multiple user data over the same orthogonal channel with SIC <cit.>. Similar than in the previous section, we attain first in Subsection <ref> the success probability for each MTD in a given channel along with necessary approximations. Later, in Subsection <ref>, we find the power constraints on the MTDs sharing the same channel in order to attain a fair coexistence of our scheme with purely OMA setups, while power control coefficients are found too, so these MTDs can perform with similar reliability. Finally, in Subsection <ref> we characterize the overall system performance.Under the CRS scheme, the MTD with better fading (equivalently, better SIR) will be preferentially assigned with the available channel resources. An aggregator with K instantaneousMTDs requiring transmission has the knowledge of their fading gains. Let {h_1,...,h_i,...,h_K} denote the decreasing ordered channel gains, where h_i-1>h_i. If K≤ N all the MTDs will be chosen, but if K>N the aggregator will pick the N MTDs with better channel gains, i.e., h_1,...,h_N, and then will assign the channel set 𝒩 to them <cit.>. As a continuation, the remaining MTDs canbe still allocated sharing those same resources, i.e., users N+1,...,K go to the second round for allocation. This process is executed repeatedly until all the MTDs are allocated or the maximum number of MTDs per channel, L, is reached.Both Lemma <ref> and the point process of the active MTDs on certain channel n∈𝒩 characterized through (<ref>) and (<ref>) hold here. §.§ MTD success probability for L=2 Under the CRS scheme and using SIC to face the intra-cluster interference, the SIR, SIR^c (i)_j,u, of the jth MTD being decoded on a typical channel, given the first MTD allocated there has the ith larger channel coefficient, h_i, and there are u MTDs sharing that same channel, is given by SIR^c (i)_1,1 =h_i/I_c, SIR^c (i)_1,2 =a_ih_i/I_c+b_ih_i+N, SIR^c (i)_2,2 =b_ih_i+N/I_c+μ a_ih_i. Notice that differently from the RRS scheme, here we can weight the power of coexistent nodes on the same channel through a_i and b_i since CSI is used for resource allocation. Of course, some kind of feedback from the aggregators would be required. Once again, the performance of the first MTD being dedoced in the channel is somewhat unbounded, e.g., lim_I_c→ 0SIR_1,2^c (i) is unbounded, while the performance of the second one is not,but this time we can relax that situation by choosing a_i<b_i since lim_I_c→ 0SIR_2,2^c (i)<b_i/μ a_i. 
However, weighting the transmit power of the MTDs sharing the same channel leads to a marked point process with marks m_x=1 when the MTD at x is alone in the channel, which happens with probability c_1, and m_x∈{a_i,b_i} for those MTDs sharing the same channel, which happens with probability c_2. Thus, the inter-cluster interference for the CRS scheme is I_c=∑_x∈Φ_n'gm_xr_a^αx^-α. By letting a_i+b_i=δ be a fixed value, we impose some kind of total transmission power constraint <cit.>. This is crucial for NOMA scenarios, and here it is particularly important in order to control the interference from the inter/intra-cluster MTDs sharing the same channel. Now, assuming that the receiver can decode successfully (SIR exceeds a threshold θ), we state the following theorem. The CRS success probability, p^c (i)_j,u, of the jth MTD being decoded on a typical channel, given that the first MTD allocated there has the ith largest channel coefficient, h_i, and that there are u MTDs sharing that same channel, is approximately given by p^c (i)_j,u ≈1/2-1/π∫_0^∞1/φIm{ℒ_I_c(-𝐢φ)exp(-𝐢φ B_j,u^(i,K))}dφ, where ℒ_I_c(s) =exp(2πλ_a∫_0^δ∫_0^∞r_𝐰(c_0+c_1Υ(r_𝐰,s)+c_2Υ(r_𝐰,a_is)Υ(r_𝐰,(δ-a_i)s)-1)f_a_i(a_i)dr_𝐰da_i),≈∑_t∈{1,2}β_t-1exp(-χ(c_1+c_2t^(α-2)/αδ^2/α)s^2/α), is the Laplace transform of RV I_c for L=2, Υ(r_𝐰,s) is defined in (<ref>), and β_0,β_1∈[0,1], β_0+β_1=1, such that (<ref>) with β_0=1 and β_1=1 provides, almost surely, an upper and a lower bound for (<ref>), respectively. Finally, B_1,1^(i,K) =(ψ(K+1)-ψ(i))/θ, B_1,2^(i,K) =(a_i/θ-b_i)ψ(K+1)+b_iψ(i+N)-(a_i/θ)ψ(i), B_2,2^(i,K) =(b_i/θ-μ a_i)ψ(K+1)+μ a_iψ(i)-(b_i/θ)ψ(i+N). See Appendix <ref>. Notice that even in the case when all MTDs operating on the same channel in the network are using the same a_i and b_i coefficients, e.g., a_i and b_i with deterministic values, evaluating (<ref>) using (<ref>) is not an easy task. A closer look at that expression casts doubt on its efficiency, because it requires evaluating two inner triple integrals. In fact, several numerical tests we ran corroborated that evaluating (<ref>), using (<ref>) with fixed a_i,b_i values, is highly inefficient and computationally too heavy. Thus, the approximation given in (<ref>) becomes necessary, which holds also under the premise that the pair (a_i,b_i) does not have to be simultaneously the same for all MTDs operating on the same channel. This would be required, for instance, in order to optimize the system performance in some way. Additionally, notice that B_1,2^(i,K) and B_2,2^(i,K), defined in (<ref>) and (<ref>), respectively, are functions of the power control coefficients a_i and b_i of the typical links, which are assumed fixed for the entire period of evaluation. Fig. <ref> shows the accuracy of (<ref>) by plotting the exact values using (<ref>) for fixed a_i,b_i and δ=1, and also comparing with simulations where a_i, b_i are chosen uniformly at random. The values for a_i, b_i are interchangeable since the perceived interference is the same. The remaining values of the system parameters are the same as those previously used when discussing Fig. <ref>. Both Fig. <ref> and Fig. <ref>, showing ℒ_I_c as a function of s and N, respectively, corroborate the idea behind (<ref>). That is, the system performance depends heavily on δ rather than on the individual a_i, b_i, or their distribution. Notice that the exact values for two completely different pairs (a_i,b_i) are very similar, and even when a_i, b_i were chosen randomly, the performance remains alike.
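Since the two-term approximation of ℒ_I_c(s) is what makes the CRS analysis tractable, a short numerical sketch may help; the routine below evaluates it assuming χ=(1/2)λ_aπR_a²Γ(1+2/α)Γ(1-2/α) as in Theorem <ref>, with the default parameter values of the numerical section and β_0=β_1=0.5. The function name and defaults are our own choices.

```python
import numpy as np
from scipy.special import gamma

def chi(lam_a, R_a, alpha):
    # chi = (1/2)*lam_a*pi*R_a^2*Gamma(1 + 2/alpha)*Gamma(1 - 2/alpha); needs alpha > 2
    return 0.5 * lam_a * np.pi * R_a**2 * gamma(1 + 2/alpha) * gamma(1 - 2/alpha)

def laplace_Ic(s, c1, c2, delta, lam_a=10**-4.4, R_a=40.0, alpha=3.6, beta0=0.5):
    """Weighted two-term approximation of the Laplace transform of the CRS
    inter-cluster interference I_c: the t=1 term is the (almost sure) upper
    bound and the t=2 term the lower bound."""
    x = chi(lam_a, R_a, alpha)
    up = np.exp(-x * (c1 + c2 * 1**((alpha - 2) / alpha) * delta**(2 / alpha)) * s**(2 / alpha))
    lo = np.exp(-x * (c1 + c2 * 2**((alpha - 2) / alpha) * delta**(2 / alpha)) * s**(2 / alpha))
    return beta0 * up + (1 - beta0) * lo

# delta = 2 recovers the RRS interference statistics, as noted next
print(laplace_Ic(s=1.0, c1=0.0, c2=1.0, delta=2.0))
```

Note that setting δ=2 reproduces the RRS bounds of Theorem <ref>, which is a convenient sanity check of the implementation.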
We know that the interference under the CRS scheme with a_i=b_i=1 is characterized in the same way as in the RRS case, ℒ_I_c=ℒ_I_r, since no weights for the transmit power, e.g., no marks, are assigned; and notice that (<ref>) captures this phenomenon well since with δ=2 we attain (<ref>). This means that, whatever the values of a_i, b_i, as long as δ=2 the interference is distributed approximately the same as in the RRS scenario. An alternative, and easy-to-evaluate, expression for the CRS success probability, p_j,u^c (i), for a given δ, is p_j,u^c (i)≈1/2-∑_t∈{1,2}β_t-1/π∫_0^∞exp(-ς_tφ^2/α)sin(ϱ_tφ^2/α-φ B_j,u^(i,K))/φdφ, where ς_t=ν_tcos(π/α), ϱ_t=ν_tsin(π/α) and ν_t=χ(c_1+c_2t^(α-2)/αδ^2/α). The idea here is to substitute the approximation for ℒ_I_c(s) given in (<ref>) into (<ref>). Using (-𝐢)^2/α=cos(π/α)-𝐢sin(π/α) and Im{pexp(-q𝐢)}=-psin(q), along with simple algebraic transformations, we attain (<ref>). The summation comes from both sum terms in (<ref>). The CRS success probability, p_j,u^c, of the jth MTD sharing a typical channel conditioned on u MTDs, is given in (<ref>) and (<ref>) at the top of the next page, where j∈{1,2}. Note that (a) comes directly from the total probability theorem, and also from averaging p_j,u^c (i) over all possible values of i given k and N. On the other hand, (b) comes from using the expressions for (K≤ N) and (K=k), easily obtained through the PDF and CDF of K. Also, (k,N)=k-N if N+1 ≤ k≤ 2N-1, and the approximation is because we substituted the infinite sum for a finite sum up to k_max to reach (<ref>). Notice that a proper value for k_max is such that the sum over the remaining k>k_max does not contribute significantly to the success value in (<ref>). Thus, we could choose k_max such that ∑_k=k_max+1^∞(K=k)<τ→ Q(k_max+1,m̅)>1-τ holds, where τ is the maximum allowable error we admit when approximating with (<ref>), e.g., τ=10^-5 and m̅=30→ k_max=56. §.§ Practical issues If we look closely at (<ref>) and (<ref>), we can notice that there must exist some choice for a_i and b_i in order to attain a similar reliability for both MTDs sharing the orthogonal channel. Thus, we resort to SIR_1,2^c (i)=SIR_2,2^c (i) with a_i+b_i=δ. However, the knowledge of the inter-cluster interference is required, which is a major drawback of this method. Alternatively, we state the following theorem avoiding that problem. A proper approximate choice for a_i and b_i in order to attain a similar reliability for both MTDs sharing the channel when u=2 is given by a_i =δ(1+1/θ)(ψ(K+1)-ψ(i+N))/((1+μ+2/θ)ψ(K+1)-(μ+1/θ)ψ(i)-(1+1/θ)ψ(i+N)), b_i =δ-a_i. Notice that p_1,2^c (i) and p_2,2^c (i) in (<ref>) only differ in the terms B_1,2^(i,K) and B_2,2^(i,K). Thus, solving B_1,2^(i,K)=B_2,2^(i,K) with a_i+b_i=δ leads to an approximate choice for these parameters. The values of a_i and b_i, aside from the index i, strongly rely on the instantaneous number of MTDs contending for transmission resources, K, the number of available channels, N, the SIR threshold, θ, and the SIC imperfection parameter, μ. These values for a_i, b_i do not guarantee the same instantaneous SIR for both MTDs sharing the channel, but when averaging over a long period[Assuming that each of them will be occupying the same ith channel and the same order when transmitting in future rounds.] it guarantees a similar reliability for them.
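As a quick illustration of this power split, the sketch below evaluates (a_i,b_i) with SciPy's digamma; the inputs (K=60, N=30, θ=1, μ=0, δ=1) are example values only, and the function name is ours.

```python
from scipy.special import digamma

def power_split(i, K, N, theta, mu, delta):
    """Approximate power weights (a_i, b_i) equalizing the average reliability
    of the two MTDs sharing the i-th ordered channel (second MTD has gain
    h_{i+N}); valid when K > i + N so that two MTDs actually share channel i."""
    num = delta * (1 + 1 / theta) * (digamma(K + 1) - digamma(i + N))
    den = ((1 + mu + 2 / theta) * digamma(K + 1)
           - (mu + 1 / theta) * digamma(i)
           - (1 + 1 / theta) * digamma(i + N))
    a_i = num / den
    return a_i, delta - a_i

a1, b1 = power_split(i=1, K=60, N=30, theta=1.0, mu=0.0, delta=1.0)
print(f"a_1 = {a1:.3f}, b_1 = {b1:.3f}")
```

For these inputs a_1≈0.23 and b_1≈0.77, i.e., a_i<b_i: the MTD with the larger channel gain is assigned the smaller power share, as the discussion above anticipates.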
Also, all MTDs have the same chances of occupying any of the channels in 𝒩; thus, for highly loaded systems where m̅>N, e.g., where the probabilities of all the channels being occupied by two MTDs are high, a similar average reliability will be attained. On the other hand, neither the RRS scheme nor the CRS scheme with relatively large δ favors the coexistence with purely OMA setups. The reason is that these NOMA setups would increase the interference caused to the OMA setups, e.g., up to twice and approximately δ times larger for the RRS and CRS schemes, respectively, and since this is not compensated by multiplexing several users, the performance of the OMA clusters is affected. On the other hand, even when only our hybrid OMA-NOMA scheme is utilized, it is expected that not all aggregators will be under the same average access demand, e.g., the same m̅. Therefore, those with low demand could be operating with OMA almost all the time, and the larger interference of the previous schemes would be impractical. Thus, limiting the interference is crucial, and that can be done by properly selecting a relatively small δ. The required δ, δ^*, for a fair coexistence[We refer to “fair coexistence” when, for any aggregator, the interference coming from the outside topology (the inter-cluster interference) remains the same regardless of the alternative (OMA protocol or our hybrid approach) the outside clusters utilize.] between OMA and our hybrid setup is approximated by the solution of ξ^(δ^2/α-1)+ξ^(2^(α-2)/αδ^2/α-1)=2, where ξ=exp(-χ c_2 s^2/α), and it is bounded by 2^(2-α)/2≤δ^*≤ 1. See Appendix <ref>. Interestingly, even though δ^* depends completely on the system parameters, e.g., λ_a, R_a, c_u, θ and α, we were able to limit its range only as a function of α. The greater the α, the smaller δ^* and, consequently, the stricter the power limitation on the NOMA-operated nodes. Also, the fact that δ^*<1, which is expected, means that the NOMA setup has to operate with an overall power consumption inferior to that of the OMA setup. §.§ Overall Performance for L=2 Differently from the RRS case, where we were able to attain the average over all MTD success probability based only on a linear combination of p_j,u^r as stated in (<ref>), here we cannot do the same since the weights of p_1,1^c and p_j,2^c when averaging are different for each k and dependent on the index of the ordered channels, i. Instead, the average over all MTD success probability is given in (<ref>) as a function of p_j,u^c (i) at the top of the page, where (a) comes from using the expressions for (K≤ N), (K=0) and (K=k). On the other hand, the average number of simultaneously served MTDs is given in (<ref>), shown below (<ref>) at the top of the page, where A_1 was given in (<ref>) and the approximation holds for the same reasons as previously discussed. Notice that Remark <ref> holds here for the CRS scheme as well.§ POWER CONSUMPTION ANALYSIS The average transmit power per orthogonal channel for the OMA setup (L=1) and for the RRS and CRS schemes of our hybrid approach (L=2) is given by p̅_t ={[(1-c_0)Ψ,for OMA; (c_1+δ c_2)Ψ for our hybrid scheme with L=2 ]., where Ψ=2ρ R_a^α/(α+2). For OMA, and for RRS and CRS with L=2, the average transmit power is 𝔼[(1-c_0)ρ r^α]=(1-c_0)ρ𝔼[r^α], 𝔼[c_1ρ r^α+2c_2ρ r^α]=c̅ρ𝔼[r^α] and 𝔼[c_1ρ r^α+c_2(a_i+b_i)ρ r^α]=(c_1+δ c_2)ρ𝔼[r^α], respectively. For RRS, c̅=c_1+δ c_2 since δ=2, and now it is only necessary to compute 𝔼[r^α], 𝔼[r^α]=∫_0^R_ar^αf_r(r)dr=2/R_a^2∫_0^R_ar^α+1dr=2R_a^α/(α+2)=Ψ/ρ, and (<ref>) is attained.
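Both results above are straightforward to evaluate numerically; the following sketch solves the fair-coexistence equation for δ^* by bracketed root finding, evaluating ξ at s=θ (an assumption on our part, since the corollary leaves s generic), and then compares the average per-channel transmit power of the OMA and hybrid setups.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import gamma

def delta_star(theta, c2, lam_a=10**-4.4, R_a=40.0, alpha=3.6):
    """Solve xi^(d^(2/a)-1) + xi^(2^((a-2)/a)*d^(2/a)-1) = 2 for the
    fair-coexistence power budget delta*."""
    chi = 0.5 * lam_a * np.pi * R_a**2 * gamma(1 + 2 / alpha) * gamma(1 - 2 / alpha)
    xi = np.exp(-chi * c2 * theta**(2 / alpha))
    f = lambda d: (xi**(d**(2 / alpha) - 1)
                   + xi**(2**((alpha - 2) / alpha) * d**(2 / alpha) - 1) - 2)
    return brentq(f, 1e-3, 1.0)   # a sign change exists in (0, 1] since xi < 1

def avg_power(c0, c1, c2, delta, rho=1.0, R_a=40.0, alpha=3.6):
    """Average transmit power per channel: (OMA, hybrid L=2)."""
    psi = 2 * rho * R_a**alpha / (alpha + 2)
    return (1 - c0) * psi, (c1 + delta * c2) * psi

d = delta_star(theta=1.0, c2=1.0)
print("delta* =", round(d, 3), "| (OMA, hybrid) power:", avg_power(0.0, 0.0, 1.0, d))
```

With the default parameters this yields δ^*≈0.74, inside the stated range [2^(2-α)/2, 1], and a hybrid per-channel power below the OMA one, which is exactly the behavior illustrated next.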
Obviously, the RRS scheme with L=2 will always require a greater power consumption than an OMA setup, since whenever two MTDs are transmitting on the same channel of a cluster, the power consumption doubles. On the other hand, the CRS scheme allows reducing the power consumption by adopting a relatively small δ, e.g., δ^*. This behavior is illustrated in Fig. <ref> for c_0=0. All the schemes have a common starting point at Ψ since all of them are equivalent when no MTDs require sharing the same orthogonal channel, e.g., c_2=0→ c_1=1. When c_2 increases, the gap between the power consumption of the RRS and CRS schemes widens, as long as δ≠2, reaching the maximum difference at c_2=1. Notice that by choosing δ^* we guarantee operating with the same interference as an OMA setup while reducing the power consumption, since δ^*<1. § NUMERICAL RESULTS Both simulation and analytical results are presented in this section in order to investigate the performance of our hybrid scheme as a function of the system parameters, while comparing it with an OMA setup. The analytical results for the RRS scheme come from using the exact expressions, while for the CRS scheme we use the approximations. Unless stated otherwise, results are obtained by setting m̅=60, λ_a=10^-4.4/m^2 (39.81/km^2), R_a=40m, α=3.6, μ=0 and θ=1. The value of δ^* is found by numerically solving (<ref>) whenever required. Simulation results are generated using 50000 Monte Carlo runs and a sufficiently large area such that 400 aggregators are placed on average. Fig. <ref> shows the success probability of the MTDs sharing the same channel conditioned on u=2, e.g., (<ref>) and (<ref>) for RRS, and (<ref>) for CRS. The idea is to show how fair the schemes are when allocating the transmission resources. Notice that for the RRS scheme, the gap between both MTDs' performance remains quite constant, independently of N. This is because that gap relies strongly on the gap between max(h',h”) and min(h',h”) (see (<ref>) and (<ref>))[Notice that the gap also relies strongly on the value of μ.], which only depends on the number of MTDs requiring transmissions and not on the available channels. For the CRS schemes with fixed a_i, b_i, the gap tends to widen when increasing N because h_i becomes larger with respect to h_{i+N} in (<ref>) and (<ref>). Of course, a given system is designed to work with one value of N, and by properly choosing some fixed a_i and b_i we can reduce the performance gap between the MTDs sharing the channel, which is not possible for the RRS scheme since weighting the transmit powers is not available. Notice also that by choosing a_i and b_i according to (<ref>) and (<ref>), both MTDs reach similar performance, hence this is the fairest scheme. Fig. <ref>a shows the average over all MTD success probability for the different schemes and L={1,2}. Each scheme with L=1 performs better than with L=2. Also, as expected and in general, the CRS setups outperform the RRS scheme, except when choosing a_i, b_i according to (<ref>) and (<ref>), for which a greater fairness is attained instead. Notice that when N is small, CRS outperforms RRS, while as N increases, their curves tend to overlap.
This is due to the fact that when N is large, i) e.g., greater than m̅ for L=1, most of the time the number of MTDs is less than the available resources, such that the implementation of the CRS is almost the same as the RRS; and ii) e.g., greater than m̅/2 for L=2, the impact of ordering the channels becomes less significant since most of the time all the MTDs will be allocated. As N increases further, the system tends to behave as when L=1. The change in the CRS curves from decreasing to increasing, occurring close to m̅/2, is somewhat explained by that latter argument. Even though achieving a higher reliability with an L=2 setup is not possible, we are able to enhance the average number of simultaneously served MTDs, as shown in Fig. <ref>b. The success probability does not deteriorate and a significant improvement in K̅ is attained, fundamentally when N is not large. Notice that this advantage is evinced when L=2, especially with the CRS scheme, so as to cover a high instantaneous access demand m̅>N, which is even more favorable than predicted in Remark <ref>. By setting L=2, the number of efficiently served MTDs could be up to twice the number in a setup with a single MTD per orthogonal channel. Particularly attractive are the configurations operating with δ^*, since no change in the perceived interference occurs when switching transmission schemes between the L=1 and L=2 setups and vice versa. Fig. <ref> shows the average number of simultaneously served MTDs as a function of the density of aggregators for μ∈{0,10%}, Fig. <ref>a, and of the SIC imperfection coefficient, Fig. <ref>b. When both the network is sparse and the SIC imperfection coefficient is not so restrictive, the performance of the L=2 setup increases. Since N=30, each channel per cluster is operating with two MTDs almost all the time, which are more sensitive to the interference; hence, their performance will be affected if either λ_a, θ, or even R_a is larger. If c_2 were smaller, e.g., for larger N, this situation would become less critical, although the gap between the L=1 and L=2 setups would be smaller, as shown previously in Fig. <ref>. Notice that the imperfect SIC degrades the system performance when L=2, but even for a high imperfection such as 10%, the advantage over the OMA setup remains evident for a wide range of values of the system parameters. For instance, the CRS scheme with a_i=b_i=δ^*/2 overcomes the CRS scheme with L=1 when λ_a≲ 1.3· 10^-4 for μ=0, while λ_a≲ 1· 10^-4 would be required for μ=10%. Since SIC is only related to the L=2 setup, the OMA setup appears as a straight line for both the RRS and CRS schemes in Fig. <ref>b. Notice that the setups with fixed power coefficients, e.g., RRS and CRS with fixed a_i, b_i, are the most affected when μ increases, since those coefficients work well for certain system parameters but others will be required if they change, e.g., a different μ in this case. It is expected that a smaller a_i, hence a larger b_i, works better as μ increases (see (<ref>) and (<ref>)). The setup with a_i and b_i given by (<ref>) and (<ref>), respectively, adapts better to the different μs since this parameter is taken into account when performing their calculation. That is, the values of a_i and b_i that allow a similar reliability between both MTDs sharing the same channel in a given cluster depend on μ; thus, no additional adjustment on them is required.
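For reproducibility, a minimal sketch of the deployment underlying the simulations of this section is given below: aggregators drawn as a PPP on a square window sized so that 400 of them appear on average, each serving K∼Poiss(m̅) MTDs dropped uniformly in a disc of radius R_a. The window sizing, seed, and function name are our own choices.

```python
import numpy as np

rng = np.random.default_rng(7)

def network_snapshot(lam_a=10**-4.4, R_a=40.0, m_bar=60):
    """One realization of the Matern cluster deployment of the system model."""
    side = np.sqrt(400 / lam_a)                  # ~400 aggregators on average
    n_agg = rng.poisson(lam_a * side**2)
    agg = rng.uniform(0, side, size=(n_agg, 2))  # parent PPP of aggregators
    mtds = []
    for w in agg:
        k = rng.poisson(m_bar)                   # MTDs requesting service
        r = R_a * np.sqrt(rng.uniform(size=k))   # radial distance, PDF 2r/R_a^2
        phi = rng.uniform(0, 2 * np.pi, size=k)
        mtds.append(w + np.column_stack((r * np.cos(phi), r * np.sin(phi))))
    return agg, np.vstack(mtds) if mtds else np.empty((0, 2))

agg, mtds = network_snapshot()
print(len(agg), "aggregators,", len(mtds), "MTDs")
```

Per-run SIR statistics then follow by applying the RRS or CRS scheduling rules to each cluster and averaging over many such snapshots, which is how the simulation curves in this section are obtained.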
It is clear that failing to efficiently eliminate the intra-cluster interference could significantly reduce the benefits of NOMA, and this can be a challenging issue for implementing NOMA in practice. Finally, notice that simulations and analytical expressions, even when several of the latter are approximations, fit well in all the cases, i.e., Figs. <ref>-<ref>, which validates our findings[Analytical expressions, even when some of them are numerically evaluated, have an additional value since simulations of these scenarios require a huge amount of computational resources, especially for L>1.]. § CONCLUSION In this paper, we analyzed the uplink mMTC in a large-scale cellular network overlaid with data aggregators. We proposed a hybrid access scheme, OMA-NOMA, and developed a general analytical framework to investigate its performance in terms of average success probability and average number of simultaneously served MTDs for two scheduling schemes, RRS and CRS. Also, we found the power constraints on the MTDs sharing the same channel that attain a fair coexistence with purely OMA setups, as well as power control coefficients such that both MTDs can perform with similar reliability. Our analytical derivations allow for fast computation compared to the time-consuming Monte Carlo simulations, which are even heavier for our hybrid scheme than for a purely OMA setup. The numerical results show that * our hybrid access scheme aims at providing massive connectivity in scenarios with high access demand, which is not covered by traditional OMA setups, and even with a lower average power consumption per orthogonal channel and per MTD, the hybrid scheme with CRS outperforms the OMA setup; * failing to efficiently eliminate the intra-cluster interference could significantly reduce the benefits of NOMA, and can be a challenging issue for implementing NOMA in practice; * our mathematical derivations, besides being easy to evaluate, are accurate. Future work could focus on the relaying phase, including strategies to cope with the larger aggregated data. Additionally, we intend to investigate in depth strategies to optimally decide when to switch from a purely OMA setup to our hybrid scheme. § PROOF OF LEMMA <REF> The PMF of the number of MTDs sharing the same channel, u, conditioned on the number of MTDs requiring transmissions, k, is given by (U=u|k)= {[ 1, for u=L if k≥ N L; 1-k/N+⌊ k/N⌋, for u=⌊ k/N⌋ if k<N L; k/N-⌊ k/N⌋, for u=⌊ k/N⌋+1=⌈ k/N⌉ if k<N L; 0, otherwise ]. . Notice that if k is equal to or greater than the number of available resources, NL, there will surely be u=L MTDs per channel in the representative cluster. Otherwise, if k<NL, only two consecutive values of u are possible. For instance, if N=10, L=4 and k=26, then 2 MTDs will be allocated in each of 4 channels (2× 4=8 MTDs), and 3 MTDs in the remaining 6 channels (3× 6=18 MTDs). Therefore, the probability of one channel being occupied by 2 MTDs is 4/10=0.4=1-26/10+⌊ 26/10 ⌋, while the probability of it being occupied by 3 is the complement, 0.6=26/10-⌊ 26/10 ⌋. Now, the required PMF, (U=u)=∑_k(U=u|k)(K=k), can be calculated as follows (U=u) ={[ ∑_k=NL^∞(K=k), for u=L; ∑_k=0^NL-1(1-k/N+⌊ k/N⌋)(K=k), for u=⌊ k/N⌋; ∑_k=0^NL-1(k/N-⌊ k/N⌋)(K=k), for u=⌊ k/N⌋+1=⌈ k/N⌉; 0, otherwise ]. , ={[ ∑_k=0^N-1(1-k/N)(K=k), u=0; ∑_k=Nu^N(u+1)-1(1-k/N+u)(K=k)+∑_k=N(u-1)^Nu-1(k/N-u+1)(K=k), u=1,...,L-1; ∑_k=NL^∞(K=k)+∑_k=N(L-1)^NL-1(k/N-L+1)(K=k), u=L; 0, otherwise ]. .
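Since the lemma's PMF is easy to mis-index, a short numerical sketch may help; it reproduces the worked example above (N=10, L=4, k=26 gives probabilities 0.4 and 0.6) and marginalises over a Poisson-distributed K. The function names are ours, not from any reference implementation.

```python
from scipy.stats import poisson

def pmf_u_given_k(u, k, N, L):
    """P(U=u | K=k) for the representative channel, following the lemma."""
    if k >= N * L:
        return 1.0 if u == L else 0.0
    lo = k // N                       # floor(k/N)
    if u == lo:
        return 1.0 - k / N + lo
    if u == lo + 1:
        return k / N - lo
    return 0.0

# Worked example from the text: N=10, L=4, k=26.
assert abs(pmf_u_given_k(2, 26, 10, 4) - 0.4) < 1e-12
assert abs(pmf_u_given_k(3, 26, 10, 4) - 0.6) < 1e-12

def pmf_u(u, N, L, mbar, kmax=2000):
    """P(U=u), marginalising over K ~ Poisson(mbar)."""
    return sum(pmf_u_given_k(u, k, N, L) * poisson.pmf(k, mbar)
               for k in range(kmax))

N, L, mbar = 30, 2, 60
probs = [pmf_u(u, N, L, mbar) for u in range(L + 1)]
print(probs, sum(probs))   # the probabilities c_0..c_L; the sum should be ~1
```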
Using the PDF of K, (K=k), its CDF, (K≤ k), and the expected K on one interval,∑_k=a^k=bk(K=k)=m̅(Q(a,m̅)+Q(1+b,m̅))+exp(-m̅)(m̅^a/(a-1)!-m̅^1+b/b!) a=0=m̅Q(1+b,m̅)-exp(-m̅)m̅^1+b/b!b→∞=m̅(1-Q(a,m̅))+exp(-m̅)m̅^a/(a-1)!,and regrouping similar terms, we attain (<ref>).§ PROOF OF THEOREM <REF> We proceed as follows p^r_1,1 =(SIR_1,1>θ)=𝔼_I[(h>θ I|I)]=𝔼_I[exp(-θ I)|I],p^r_1,2 =(SIR_1,2>θ) =𝔼_I[(max(h',h”)-θmin(h',h”)>θ I|I)] =𝔼_I[(v_1>θ I|I)](a)={[ 𝔼[2/1+θexp(-θ I)-1-θ/1+θexp(-2θ/1-θ I)|I], if 0≤θ<1; 𝔼[2/1+θexp(-θ I)|I],if θ≥ 1 ].,p^r_2,2 =(SIR_2,2>θ) =𝔼_I[(min(h',h”)-θμmax(h',h”)>θ I|I)] =𝔼_I[(v_2>θ I|I)](b)={[ 𝔼[1-θμ/1+θμexp(-2θ/1-θμI)|I], if 0≤θμ<1;0,if θμ≥ 1 ]., where v_1=max(h',h”)-θmin(h',h”) and v_2=min(h',h”)-θμmax(h',h”), while (a) and (b) come from using their CDF expressions, which are given by F_V_1(v_1) ={[ 1- 2/1+θexp(-v_1)+1-θ/1+θexp(-2v_1/1-θ), if 0≤θ<1;1-2/1+θexp(-v_1),if θ≥ 1;].,F_V_2(v_2) ={[ 1- 1-θμ/1+θμexp(-2v_2/1-θμ),if 0≤θμ<1; 1, if θμ≥ 1;].,for v_1,v_2>0. Notice that the last equalities in (<ref>), (<ref>) and (<ref>) are equivalent to (<ref>), (<ref>) and (<ref>), respectively.For the derivation of (<ref>) we have to use the fact that the Poisson cluster process defined in (<ref>) is a Neyman-Scott process <cit.> with Probability Generating Functional (PGFL) <cit.> G[υ]=exp(λ_a∫_ℝ^2(G_0(∫_ℝ^2v(𝐰+y)f_y(y)dy)-1)d𝐰). Based on g∼Exp(1) which allows to state v(𝐰+y) =𝔼_g[exp(-sgr_a^α||𝐰+y||^-α)]=1/1+sr_a^α||𝐰+y||^-α,and according to the cosine law, e.g., ||𝐰+y||=(r_𝐰^2+r_a^2-2r_𝐰r_acos(ω))^1/2, while substituting (<ref>) and PDFs expressions f_r_a(r_a)=2r_aR_a^2 and f_ω(ω)=12π into (<ref>), we reach ℒ_I_r(s) in (<ref>). § PROOF OF THEOREM <REF> Performing some algebraic transformations on (<ref>) we have ℒ_I_r(s) =exp(2πλ_a∫_0^∞r_𝐰(∑_u=0^Lc_uΥ(r_𝐰,s)^u-1)dr_𝐰) (a)=exp(2πλ_a(∫_0^∞r_𝐰(∑_u=0^L(c_uΥ(r_𝐰,s)^u-c_u))dr_𝐰))(b)=exp(2πλ_a∑_u=1^Lc_u(∫_0^∞r_𝐰(Υ(r_𝐰,s)^u-1)dr_𝐰))(c)=exp(2π c_1λ_a∫_0^∞r_𝐰(Υ(r_𝐰,s)-1)dr_𝐰)_T_1exp(2πλ_a∑_u=2^Lc_u(∫_0^∞r_𝐰(Υ(r_𝐰,s)^u-1)dr_𝐰))_T_2, where (a) comes from ∑_u=0^Lc_u=1, (b) is attained by regrouping terms, and (c) by pulling out the term associated with u=1. Notice that T_1 matches the Laplace transform of an HPPP with density c_1λ_a and one active MTD per channel per cluster, which is given by <cit.> T_1=exp(-χ c_1s^2/α) On the other hand, T_2 includes the contribution of the clustered MTDs, e.g., u≥ 2. For each u, the related term in T_2 matches the Laplace transform of an HPPP with density c_uλ_a and u active MTDs per channel per cluster. It has been observed in <cit.> that the SIR complementary CDFs, e.g., Laplace transform of the interference, for different point processes appear to be merely horizontally shifted versions of each other (in dB), as long as their diversity gain is the same. Thus, scaling the threshold s by this SIR gain factor (or shift in dB) G, we have ℒ_I(s)≈ℒ_I,ref(s/G). However, G is also a function of s but for many setups it keeps approximately constant and consequently it can be determined by finding its value for an arbitrary value of s <cit.>. Using the PPP as the reference model, the limit of G as s→ 0, G_0, is relatively easy to calculate <cit.> G_0=MISR_PPP/MISR,where the MISR is the mean of the interference-to-(average)-signal ratio IS̅R=I𝔼_h[S]. Since S=h here and 𝔼[h]=1, we have that MISR=𝔼[I]. 
Now, considering the contribution of each u in T_2 separately we have G_0,u=𝔼[I_PPP]/𝔼[I_r]=1/u.Notice that 𝔼[I_PPP] and 𝔼[I_r] are divergent measures for our system model because the resultant point process would no be locally finite since we assumed a path loss model ||x||^-α <cit.>. However, the quotient depends merely on the density of both process and since λ_PPP=c_uλ_a and λ=uc_uλ_a we attain the last equality in (<ref>). Obviously, ℒ_I,PPP(us) works almost surely[Except for relatively large s (<ref>) might serve as an upper bound but it becomes an accurate approximation for (<ref>).] as an upper bound for ℒ_I_r(s) in (<ref>) if G_0<1 (see <cit.> for a geometrical perspective), which holds here since u≥ 2.Now, using <cit.> as the HPPP of reference with one fixed MTD per orthogonal channel[The clustered process with one MTD per orthogonal channel (per cluster) is indeed a HPPP with density λ_a because the displacement theorem in stochastic geometry<cit.>.], we attainT_2a.s≤exp(-χ∑_u=2^Lc_u (us)^2/α) with asymptotic equality as s→ 0. Substituting (<ref>) and (<ref>) into (<ref>) yields (<ref>). On the other hand, the lower bound comes from the corresponding HPPP with same intensity that for each u in T_2, e.g., uc_uλ_a. This is because in our system model the performance in terms of success probability of an HPPP with the same intensity that any clustered process will be always worse since we are expecting a greater number of close interfering nodes. Once again using <cit.>, we have T_2≥exp(-χ∑_u=2^Luc_us^2/α). Now, substituting (<ref>) and (<ref>) into (<ref>) yields (<ref>). For both (<ref>) and (<ref>) the equality fits to the asymptotic cases λ_a→∞,0, R_a→∞,0. § PROOF OF THEOREM <REF> Let's assume the case where K>N since for K≤ N the system behaves exactly as in the RRS scheme and the problem is already solved, e.g., p^c (i)_j,u=p^r_j,u. According to high order statistic theory <cit.>, the PDF of the ith best channel power gain is f_H_i,K(x) =K!exp(-ix)(1-exp(-x))^K-i/(i-1)!(K-i)!. Now we have p^c (i)_j,u=(SNR^c (i)_j,u>θ)=(Θ^i_j,u>θ I_c), where Θ^i_1,1=h_i, Θ^i_1,2=a_ih_i-θ b_ih_i+N and Θ^i_2,2=b_ih_i+N-θμ a_ih_i. Unfortunately, the distribution of Θ^i_j,u, even for the simplest case Θ^i_1,1, conduces to very complicated expression for the CDF, preventing us to follow the same path we used for the RRS scheme. Instead we proceed as follows p^c (i)_j,u =(I_c<Θ^i_j,u/θ)(a)=1/2-1/π∫_0^∞1/φIm{ℒ_I_c(-𝐢φ)𝔼_Θ^i_j,u[exp(-𝐢φΘ^i_j,u/θ)]}dφ(b)≈1/2-1/π∫_0^∞1/φIm{ℒ_I_c(-𝐢φ)exp(-𝐢φ𝔼[Θ^i_j,u]/θ)}dφ, where (a) comes from the Gil-Pelaez inversion theorem <cit.> and the approximation in (b) comes from the Jensen inequality. Also, 𝔼[Θ^i_1,1] =𝔼[h_i]=∫_0^∞xf_H_i,K(x)dx =K!/(i-1)!(K-i)!∫_0^∞xexp(-ix)(1-exp(-x))^K-idx=(-1)^K-iK!/K^2(i-1)!(K-i)!(K^2xBeta(exp(x),-K,K-i+1) -exp(-Kx) _3F_2(-K,-K,i-K,1-K,1-K,exp(x)))|_x=0^x=∞=Γ(i)Γ(K-i+1)(ψ(K+1)-ψ(i))/(i-1)!(K-i)!=ψ(K+1)-ψ(i), while it is straightforward obtaining 𝔼[Θ^i_1,2] and 𝔼[Θ^i_2,2] from (<ref>), yielding (<ref>)-(<ref>). Substituting them into (<ref>) we attain (<ref>). To attain (<ref>) we require to use the same arguments than previously discussed when deriving the result in (<ref>). However, now the point process has marks 1,a_i,b_i, with b_i=δ-a_i, and we require to include the marks along with their probabilities when evaluating (<ref>) <cit.>. 
Notice that it is intractable to evaluate (<ref>) efficiently when a_i,b_i are random, since it requires an additional integration[For instance, the uniform distribution, which is very simple and probably unrealistic for the scenario discussed, with PDF f_a_i(a_i)=1/δ, is already cumbersome when evaluating (<ref>).] in the exponent, hence we propose using some approximations as follows ℒ_I_c(s) (a)≈exp(2πλ_a∫_0^∞r_𝐰(c_0+c_1Υ(r_𝐰,s)+c_2/2(Υ(r_𝐰,a_is)^2+Υ(r_𝐰,b_is)^2)-1)dr_𝐰) (b)=exp(2π c_1λ_a∫_0^∞r_𝐰(Υ(r_𝐰,s)-1)dr_𝐰)exp(2πc_2/2λ_a∫_0^∞r_𝐰(Υ(r_𝐰,a_is)^2-1)dr_𝐰)exp(2πc_2/2λ_a∫_0^∞r_𝐰(Υ(r_𝐰,b_is)^2-1)dr_𝐰), where (a) comes from using fixed values of a_i,b_i, thus avoiding the additional integration, and using the relation between the geometric and arithmetic means, (x+y)/2≥√(xy) → xy≤(x^2+y^2)/2, as an approximation, with equality when a_i=b_i; while (b) comes from regrouping terms and ∑_u=0^2c_u=1. Now, by using the same procedure as previously discussed in the proof of Theorem <ref>, i.e., finding upper and lower bounds for (<ref>) and averaging both, we attain ℒ_I_c(s)≈∑_t∈{1,2}β_t-1exp(-χ(c_1+c_2(t/2)^(α-2)/α(a_i^2/α+b_i^2/α))s^2/α), where β_0,β_1∈[0,1] and β_0+β_1=1. Since 2/α<1, by the relation between the generalized means (or power means) with exponents 2/α and 1, the following result holds: ((a_i^2/α+b_i^2/α)/2)^α/2≤(a_i+b_i)/2 → a_i^2/α+b_i^2/α≤ 2(δ/2)^2/α. By substituting (<ref>) into (<ref>) we attain (<ref>). See Fig. <ref> for more insights on the accuracy. § PROOF OF THEOREM <REF> In order to attain a fair coexistence between the OMA and NOMA setups, the interference must be kept the same. Thus, we need to match (<ref>) with the Laplace transform of the interference for an equivalent OMA setup, as shown next: ∑_t∈{1,2}β_t-1exp(-χ(c_1+c_2t^(α-2)/αδ^2/α)s^2/α) =exp(-χ(1-c_0)s^2/α), exp(-χ c_1s^2/α)(exp(-χ c_2 (sδ)^2/α)+exp(-χ c_2 2^(α-2)/α(sδ)^2/α)) (a)=2exp(-χ(1-c_0)s^2/α), exp(-χ c_2 (sδ)^2/α)+exp(-χ c_2 2^(α-2)/α(sδ)^2/α) (b)=2exp(-χ(1-c_0-c_1)s^2/α), ξ^δ^2/α+ξ^2^(α-2)/αδ^2/α (c)=2ξ, where (a) comes from setting β_0=β_1=0.5 and evaluating the left-hand sum, (b) comes from dividing both sides by exp(-χ c_1 s^2/α), and (c) from 1-c_0-c_1=c_2 and setting ξ=exp(-χ c_2 s^2/α). Dividing both sides by ξ we attain (<ref>). The solution is an approximation since (<ref>) is an approximation. Now, bounding the solution is simple since (<ref>) is a combination of the upper, t=1, and lower, t=2, bounds. Thus, exp(-χ (c_1+c_2δ^2/α)s^2/α) ≥exp(-χ(1-c_0)s^2/α) → c_1+c_2δ^2/α ≤1-c_0 → c_2δ^2/α ≤ c_2 → δ ≤ 1, and exp(-χ (c_1+c_2 2^(α-2)/αδ^2/α)s^2/α) ≤exp(-χ(1-c_0)s^2/α) → c_1+c_2 2^(α-2)/αδ^2/α ≥ 1-c_0 → 2^(α-2)/αδ^2/α ≥ 1 → δ ≥ 2^(2-α)/2, completing the proof.
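As a practical note, the fairness condition just derived, ξ^δ^2/α + ξ^2^(α-2)/α δ^2/α = 2ξ, can be solved for δ^* with any bracketing root finder, using the bounds 2^(2-α)/2 ≤ δ^* ≤ 1 established above. The sketch below is one way to do it; the values of ξ (which stands for exp(-χ c_2 s^2/α)) are purely illustrative.

```python
from scipy.optimize import brentq

def delta_star(xi, alpha):
    """Solve xi**(d**(2/a)) + xi**(2**((a-2)/a) * d**(2/a)) = 2*xi for d = delta*."""
    f = lambda d: (xi**(d**(2/alpha))
                   + xi**(2**((alpha - 2)/alpha) * d**(2/alpha))
                   - 2*xi)
    lo, hi = 2**((2 - alpha)/2), 1.0     # bounds from the end of the proof
    return brentq(f, lo, hi)             # f(lo) > 0 > f(hi) for xi in (0,1)

alpha = 3.6
for xi in (0.2, 0.5, 0.8):
    print(f"xi={xi}: delta* = {delta_star(xi, alpha):.4f} "
          f"(bounds {2**((2-alpha)/2):.4f} .. 1)")
```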
{ "authors": [ "Onel L. Alcaraz López", "Hirley Alves", "Pedro Nardelli", "Matti Latva-aho" ], "categories": [ "cs.IT", "cs.NI", "math.IT", "stat.AP" ], "primary_category": "cs.IT", "published": "20170825112416", "title": "Aggregation and Resource Scheduling in Machine-type Communication Networks: A Stochastic Geometry Approach" }
§ INTRODUCTION The strong coupling constant of Quantum Chromodynamics (QCD), α_s, is, together with the quark masses, the main free parameter of the QCD Lagrangian. It enters into every process that involves the strong interaction and is the fundamental parameter of the perturbative expansion used in calculating cross sections for processes with large momentum transfers. The strong coupling is a function of a renormalisation scale μ. Its dependence on μ is governed by renormalisation group equations <cit.>, however its value at a given reference scale must be determined from experimental data. The current world average value for the coupling evaluated at the Z-boson mass scale, α_s(m_Z), as determined by the Particle Data Group (PDG), is 0.1181 ± 0.0011 <cit.>. The world average incorporates information from a wide variety of experimental data and of methods to deduce α_s from that data. It requires at least next-to-next-to-leading order (NNLO) accuracy in the perturbative expansions that are used. Even with the 1% precision that is quoted by the PDG, the uncertainty on α_s contributes significantly to uncertainties on physical predictions for colliders. For example, it leads to about 2% uncertainty on the gluon-fusion Higgs cross section, comparable with the largest of any of the other individual uncertainties <cit.>. Furthermore, while the bulk of the evidence points to values of the strong coupling that are compatible with α_s ≃ 0.118, including precise lattice-QCD based determinations, e.g. <cit.>, there is a handful of determinations with small quoted uncertainties that suggest α_s values that are several standard deviations below the world average. Notable cases are those from the Thrust and C-parameter distributions in e^+e^- collisions, which yield 0.1135 ± 0.0011 and 0.1123 ± 0.0015 respectively <cit.>,[An alternative analysis of the Thrust quotes a significantly larger uncertainty, 0.1137^+0.0034_-0.0027 <cit.>.] or the ABMP PDF fit <cit.>, 0.1147±0.0008. Of the various NNLO determinations of the strong coupling, so far only one is based on hadron collider data, using a measurement of the top-quark pair production cross section (σ_tt̅) performed by the CMS Collaboration at a centre-of-mass energy √(s)=7TeV <cit.>. It yields α_s = 0.1151^+0.0028_-0.0027. This extraction is intriguingly placed between the world average and the outlying low α_s extractions, albeit compatible with both. However, it is based on a single, early and now outdated measurement of σ_tt̅. It is of interest, therefore, to examine how it is affected by more recent precise measurements by the ATLAS and CMS Collaborations at CERN's Large Hadron Collider (LHC) <cit.> as well as by a combination of measurements from the D0 and CDF collaborations at the Tevatron <cit.>. In the course of our discussion, we will encounter issues related to the treatment of theoretical uncertainties and the choice of the parton distribution function (PDF) set that are of relevance more generally in the determination of the strong coupling and other fundamental constants (e.g. the top-quark mass) from collider data. Such studies may become increasingly widespread in the coming years, given the recent rapid progress in NNLO calculations, e.g. for vector-boson (e.g. Refs. <cit.>) and inclusive jet p_t distributions <cit.> at hadron colliders and jet p_t distributions in Deep Inelastic Scattering (DIS) <cit.>.
§ DETERMINATION OFFROMCROSS SECTION MEASUREMENTS§.§ Theory prediction for the top pair production cross section Theory predictions for the dependence ofonare calculated using the program  <cit.>. It provides the computation of the total cross section up to NNLO <cit.>, with possible inclusion of soft-gluon resummation at next-to-next-to-leading logarithmic order (NNLL), as described in Refs. <cit.>.The predicted cross section is evaluated setting both the renormalisation scale μ_R and factorisation scale μ_F equal to the top-quark pole mass. The theoretical uncertainty associated with missing higher-order contributions is evaluated by independently varying μ_R and μ_F up and down by a factor of 2, under the constraint that 1/2≤μ_R / μ_F ≤ 2.The scale uncertainties are modelled as corresponding to a 68% confidence interval with a Gaussian-shaped uncertainty profile.This choice is more conservative than the (flat) 100% confidence interval that is sometimes taken for scale variations and used, notably, in Ref. <cit.>. The latter choice leads to a scale uncertainty contribution that is smaller by a factor √(3) (the ratio of the standard deviations of the two uncertainty profiles).Note that a 100% confidence interval for scale uncertainties is known to be inconsistent with the observation that a significant fraction of NNLO calculations is outside the scale uncertainty interval of the corresponding NLO calculation.[As discussed in <cit.> and also <cit.>. Note that the experience with NLO scale uncertainties may not apply to NNLO scale uncertainties.In particular, for the two cases of hadron-collider calculations available at N3LO accuracy, Higgs production in the gluon-fusion <cit.> and vector-boson-fusion <cit.> channels, while the central NNLO results are outside the NLO scale uncertainty bands, the N3LO results are well within the corresponding NNLO bands. ]A further choice that needs to be made is whether to include the NNLL threshold resummation for the cross section.This is a procedure that resums terms whose leading-logarithmic (LL) structure is (ln^2 N)^n, where N ∼ dlnσ_/dln s and s is the squared centre-of-mass energy.When m_^2/s approaches one, i.e. when one approaches the threshold for tt̅ production, N is proportional to 1/(1-m_^2/s) and the threshold resummation is a necessity.However, at the LHC and even at the Tevatron, top-pair production is far from threshold and N is not especially large: for the dominant gluon-gluon production channel at LHC, N ≃ 1.4 for m_tt̅ = 2 m_t and √(s) = 7; while for the dominant qq̅ production channel at the Tevatron, N ≃ 1.8.Accordingly, there is debate within the community as to whether threshold resummation is called for.On one hand, one may argue that it brings terms that have a certain physical meaning.On the other, one may argue that there is no reason why the terms brought by threshold resummation should dominate over other, neglected terms, and therefore it is more consistent to include just the fixed-order contributions, which are known exactly.We will take an agnostic approach to this question, carry out fits with and without NNLL resummation, and then average both the central values and the uncertainties in the two cases in order to obtain our final result.The theory prediction foralso depends on a choice of PDF set.Since that choice needs to be related to the data that we fit, we postpone our discussion of the PDF choice to section <ref>. 
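As a quick check of the factor quoted above for the two scale-uncertainty treatments: the standard deviation of a flat distribution on [-Δ, +Δ] is Δ/√3, while a Gaussian whose 68.27% interval is [-Δ, +Δ] has standard deviation Δ, so the Gaussian-profile choice is more conservative by √3. A two-line numerical confirmation:

```python
import numpy as np

Delta = 1.0
std_flat = Delta / np.sqrt(3.0)   # uniform on [-Delta, +Delta]
std_gauss = Delta                 # Gaussian with 68.27% interval [-Delta, +Delta]
print(std_gauss / std_flat)       # ~1.732 = sqrt(3)
```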
§.§ Measurements of the top pair production cross sectionOurdetermination is performed using seveninputs, listed in Table <ref>.The six measurements at the LHC include three updated measurements by the CMS Collaboration at centre-of-mass energies of 7 TeV, 8 TeV <cit.> and 13 TeV <cit.>.These measurements were performed in the e μ decay channel, [Themeasurement by CMS at 13 TeV using events with one lepton and at least one jet in the final state <cit.> has a slightly better precision than the CMS result used in our analysis. However, the effect on the final result is marginal, and using measurements from the same decay channel yields a clearer correlation structure for the combination. ]where the W-bosons from the top quark decays each themselves decay into a charged lepton and a neutrino, one of the W decays producing an electron, the other producing a muon.The measurements are based on data collected in the years of 2011, 2012 and 2015 respectively, with integrated luminosities of 5.0 fb^-1, 19.7 fb^-1, and 2.2 fb^-1.From the ATLAS Collaboration, three similar measurements performed in the eμ decay channel are included, based on datasets with integrated luminosities of 4.6 fb^-1, 20.3 fb^-1 and 3.2 fb^-1 for the 7 TeV, 8 TeV <cit.> and 13 TeV <cit.> centre-of-mass energies respectively.A seventh input from the Tevatron collider <cit.> at a centre-of-mass energy of 1.96 TeV is included, which comprises a combination of measurements performed in multiple decay channels from both the CDF Collaboration and the D0 Collaboration.§.§ Choice of PDFSeveral considerations arise in our choice of PDF. Firstly, we restrict our attention to recent global fits that are available through the LHAPDF interface <cit.>.Secondly, we require that the PDFs should be available for at least threevalues, so that we can correctly determine thedependence of the cross section in the context of that PDF.These two conditions limit us to the CT14 <cit.>, MMHT2014 <cit.> and the NNPDF3.0 <cit.> series.Thirdly, we impose a requirement that the PDF should not have includeddata in its fitting procedure.As should be obvious qualitatively, and as we will discuss quantitatively elsewhere <cit.>, using a PDF with top-data included would bias our fits.Table <ref> summarises what data has been included in each of these PDF sets, including both the default NNPDF30 set and NNPDF30_nolhc, obtained without LHC data.One sees that the two options that are available to us are CT14 and NNPDF30_nolhc.[As this article was being completed the NNPDF31 series <cit.> of PDF sets became available. It includes a set fitted without top data, however only for a single value of the strong coupling, and accordingly is not suitable for use in a strong coupling determination.] We use PDF uncertainties calculated at the 68% confidence level, following the error propagation prescription from the individual PDF groups.The uncertainties from the CT14 PDF set, which are provided at a 90% confidence level by default, are scaled by a factor of 1/(√(2) erf^-1( 0.90) )≃ 0.608.The predicted cross sections for both PDF sets, with NNLO and NNLO+NNLL calculations, are listed in Table <ref>.The cross sections are 1-3% higher when including NNLL contributions.The scale uncertainties are in the 4-6% range for the NNLO results and get reduced by between one third and one half when including NNLL terms.At LHC energies, the cross sections with NNPDF30_nolhc are about 1% larger than those with CT14, however the opposite pattern is seen at Tevatron. 
Finally, the PDF uncertainties are somewhat larger with NNPDF_nolhc than with CT14.To understand the final errors on thedetermination it is important also to examine how the predicted cross sections depend on , a result of thedependence both of the hard cross section and of the PDFs.This is shown in Fig. <ref>: points correspond to the values offor which the given PDF is available, and lines correspond to a fit for ln using a polynomial of ln. We use polynomials of degree 3 and 1 respectively for the CT14 and NNPDF30_nolhc PDFs, chosen based on the available number ofpoints and requirements of stability of the extrapolation beyond the availablepoints.A steeper slope of thedependence (also quoted at =0.118 in the last column of Table <ref>) leads to a smaller final error onfor any given source of uncertainty on . For LHC energies, CT14 is generally steeper, while at the Tevatron it is NNPDF_nolhc that is steeper.Note also that CT14 curves have substantial curvature, and this will induce asymmetric uncertainties for , even in the case of uncertainties on the cross section that are symmetric.§.§ Top-mass dependenceThe top-quark pole mass is taken to be 173.2 ± 0.87GeV, which is consistent with the world average value computed by the Particle Data Group <cit.>.The experimentally measured cross section, ^exp (), depends onthrough the acceptance corrections, whose parametrization is given together with the individual measurements.The uncertainty on the experimentally measured cross section due to the top-quark pole mass is given in Tab. <ref>, where the uncertainty was calculated by shifting the top mass up and down by its uncertainty.An increase in the top mass leads to a decrease in the measured total cross section.This is because the experiments effectively measure a fiducial cross section (which is independent of m_t) and then extrapolate it to a total cross section by dividing by the acceptance for the fiducial cross section.For larger values of m_t the acceptance is larger, since decay products are more likely to pass transverse momentum cuts, and so the resulting total cross section is lower.The theoretically predicted cross section, ^pred (), also depends on , because of the structure of the underlying hard cross section and the x-dependence of the PDFs, cf. Tab. <ref>.It too decreases for an increase in the cross section, and this effect is larger than for the measured cross section.To define a single error contribution associated with the top-mass uncertainty, it is convenient to absorb these different sources of m_t dependence into an effective predicted cross section,σ_tt̅^eff(m_t) =^pred () ·^exp (^) /^exp (),where m_t^ = 173.2 is the central value of the world average top mass.For m_t = m_t^, this effective predicted cross section coincides with the actual predicted one.The final uncertainty on the effective predicted cross section associated with the error of Δ m_t = 0.87 on the world average top mass is then given byσ_tt̅^eff(m_t^±Δ m_t) -σ_tt̅^eff(m_t^) .This can be used in ourdetermination in a manner similar to any of the theoretical and PDF uncertainties on the predicted cross section.To a good approximation, the final top-mass uncertainty on the effective cross section is equal to the difference between the percentage uncertainties in Tabs. <ref> and <ref>. 
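To illustrate the parametrization used for Fig. <ref>, i.e. fitting ln σ_tt̅ as a polynomial in ln α_s and reading off the logarithmic slope at α_s = 0.118, here is a sketch on synthetic cross sections generated from an assumed power law with slope 3; the numbers are not Top++ output and only demonstrate the mechanics.

```python
import numpy as np

# Synthetic stand-in for Top++ cross sections on a PDF set's alpha_s grid.
a_s = np.array([0.111, 0.113, 0.115, 0.117, 0.118, 0.119, 0.121, 0.123])
sigma = 250.0 * (a_s / 0.118)**3.0            # assumed power law, slope 3

deg = 3                                       # degree 3 for CT14, 1 for NNPDF30_nolhc
coef = np.polyfit(np.log(a_s), np.log(sigma), deg)
slope = np.poly1d(coef).deriv()(np.log(0.118))
print(f"d ln(sigma) / d ln(alpha_s) at 0.118: {slope:.3f}")   # recovers ~3
```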
§.§ Strong coupling determination procedureIn the determination offrom , the theory prediction is treated as a Bayesian prior (one prior for any given value of ) and the experimental result as a likelihood function.The multiplication of these is the joint posterior probability function from whichand its uncertainties are determined after marginalisation of .The procedure is mostly analogous to that used by the CMS Collaboration in Ref. <cit.>.The construction of the Bayesian prior from the theory dependence necessitates a single probability distribution function given all individual theory uncertainties. The three theory uncertainties are each interpreted as corresponding to an asymmetric Gaussian function: f^Unc. source( | ) ={[1/√(2π)Δ_- e ^-1/2(- ^pred() /Δ_- )^2if ≤^pred; 1/√(2π)Δ_+ e ^ -1/2(- ^pred() /Δ_+ )^2if> ^pred ]. , where ^pred() is the predicted central value at a given value of , and Δ_+(-) is the positive (negative) uncertainty from a given theory uncertainty source. This function has the advantage that the integral normalizes naturally to one, and that the integral from (^pred-Δ_-) to (^pred+Δ_+) corresponds to a 68%confidence interval.On average there is a 20% difference between Δ_+ and Δ_-, and up to a difference of about 85% for the most asymmetric uncertainty.The central value forcorresponds to the median of the distribution.The combined probability distribution function of the predicted cross section, f^pred( | ), is computed by taking the numerical convolution of the individual asymmetric Gaussian functions: f^pred( | ) =f^PDF( | )⊗ f^( | ) ⊗ f^Scale( | ) , where the convolution is performed such that the probability distribution functions are centred around ^pred. While the individual uncertainty distributions contain a discontinuity at = ^pred(), the convolution is a smooth function.The dependence onof the width of the uncertainty band is neglected.[ With this approach of fixed absolute uncertainties on , theory uncertainties onwill turn out relatively smaller for determinations with a higher centralvalue.One concern is that this might affect the relative weights of different determinations in the combination that is described later in Sect. <ref>.To address this concern, a cross-check was performed in which the individual theory errors from our procedure were scaled relative to the default approach by a factor ^determination/^ref.That is equivalent to taking fixed relative (rather than fixed absolute) theory uncertainties on .With the combination procedure of section <ref>, the difference induced by this change was below the per mille level.For the alternative combination procedure in Appendix <ref>, the effect is less than half apercent on , which remains much smaller than the difference between the two combination procedures.]The probability distribution function of the predicted cross section is multiplied by the probability distribution function of the measured cross section f^exp( | ), yielding the joint Bayesian posterior in terms ofand . The Bayesian confidence interval ofcan be computed through marginalisation of the posterior by integrating over : L() = ∫ f^pred( | )·f^exp( | )d . Here, f^exp( | ) is taken to be independent of . 
Technically a small dependence onis introduced in f^exp( | ) through the acceptance corrections; however, in the region of relevance around ^ref = 0.118, the effect of this on the uncertainty of the cross section is below the percent level <cit.>, and can thus be safely neglected.The marginalised joint posterior L() can be treated as a probability distribution function. The central value for thedetermination is taken to be the location of the peak of L(), and theuncertainty is extracted by computing the 68% confidence interval whose left and right bounds are at equal height.[This is somewhat different from the prescription to define an asymmetric probability distribution in Eq. (<ref>), but coincides with widespread practice in ATLAS and CMS likelihood fits.]The procedure is illustrated in Fig. <ref>, showing the experimental and theory probability distribution functions and the unmarginalised posterior (Fig. <ref>(a)) as well as the marginalised posterior with extracted central value and uncertainties (Fig. <ref>(b)).The combination of determinations from different experiments necessitates a breakdown of the total uncertainty into components that can be assigned to the individual uncertainty sources.To this end, the determination is repeated each time omitting a different uncertainty source, and the squared difference of the resulting uncertainty with respect to the total uncertainty is computed.A relative contribution to the total uncertainty is then computed per uncertainty source. §.§ Individual results forper σ_tt̅ measurementThe results of ourdetermination are listed for the CT14nnlo PDF set in Tables <ref> and <ref> and for the NNPDF30_nolhc PDF set in Tables <ref> and <ref>.The individualdeterminations are all compatible with the world average to within uncertainties.The central values are rather similar with the CT14 and NNPDF sets.The largest individual sources of uncertainty onare the PDF uncertainties and the scale uncertainties.For the LHC determinations, the PDF uncertainties tend to be larger with NNPDF, in part a consequence of the larger uncertainties in the cross section in Table <ref>. However the other uncertainties are also larger with NNPDF, because of its weaker dependence on .The NNLO+NNLL determinations all have smallerresults, consistent with the larger cross sections in Table <ref>.The scale uncertainties are also noticeably smaller.Other uncertainties are largely unchanged.A final comment concerns the somewhat larger scale, m_t and PDF uncertainties with the CT14 PDF for the CMS 7 TeV case as compared to the ATLAS 7 TeV case, or also ATLAS 8 TeV as compared to ATLAS 7 TeV.In general with the CT14 PDF, a smaller value ofcorresponds to larger uncertainties, because thedependence of the cross section is weaker for smallvalues, cf. Fig. <ref>. 
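To make the extraction procedure of Sect. 2.5 concrete, the sketch below implements the asymmetric-Gaussian pdfs of Eq. (<ref>), their convolution on a deviation grid centred around the predicted cross section, the multiplication with the experimental likelihood, and the marginalisation with an equal-height 68% interval. All numerical inputs (cross sections, uncertainties, the σ(α_s) curve) are placeholders, not the paper's values.

```python
import numpy as np

def asym_gauss(x, mu, dm, dp):
    """Asymmetric Gaussian of Eq. (<ref>): width dm below mu, dp above; integrates to 1."""
    s = np.where(x <= mu, dm, dp)
    return np.exp(-0.5 * ((x - mu) / s)**2) / (np.sqrt(2 * np.pi) * s)

# Theory pdf built on a deviation grid (centred on sigma_pred), placeholder widths [pb].
dev = np.linspace(-60, 60, 2401); hd = dev[1] - dev[0]
f_th = asym_gauss(dev, 0.0, 7.0, 5.0)                  # e.g. scale
for dm, dp in [(6.0, 6.0), (4.0, 4.5)]:                # e.g. PDF, m_t
    f_th = np.convolve(f_th, asym_gauss(dev, 0.0, dm, dp), mode="same") * hd

sig = np.linspace(100.0, 300.0, 4001); hs = sig[1] - sig[0]
a_grid = np.linspace(0.108, 0.128, 401); ha = a_grid[1] - a_grid[0]

def sigma_pred(a):                                     # placeholder sigma(alpha_s)
    return 180.0 * (a / 0.118)**3.0

f_exp = asym_gauss(sig, 182.0, 8.0, 8.0)               # placeholder measurement

def L(a):   # marginalise the joint posterior over sigma
    f_pred = np.interp(sig - sigma_pred(a), dev, f_th, left=0.0, right=0.0)
    return np.sum(f_pred * f_exp) * hs

post = np.array([L(a) for a in a_grid])
post /= post.sum() * ha
a_hat = a_grid[post.argmax()]

# Equal-height 68% interval: lower the height until the coverage reaches 0.68.
for h0 in np.linspace(post.max(), 0.0, 2000):
    mask = post >= h0
    if post[mask].sum() * ha >= 0.68:
        break
lo, hi = a_grid[mask].min(), a_grid[mask].max()
print(f"alpha_s = {a_hat:.4f} +{hi - a_hat:.4f} -{a_hat - lo:.4f}")
```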
Note however, that the scale and other uncertainties on the cross section predictions have been evaluated only for the reference value of =0.118, and in general the question of how one should correlate uncertainties with the central value is a delicate one.[As an example, imagine that we had used scale uncertainties that depended on : then for an experimental measurement with cross section that fluctuates low, one would deduce a smaller scale uncertainty than for a cross section that fluctuates high; when combining them, depending on the procedure, this might then lead to a larger weight for the smaller value of .]Accordingly one should be wary of reading too much into the variation of uncertainties with the centralvalue.§ COMBINATION OFDETERMINATIONS §.§ Correlation coefficientsA combination of measurements can strongly depend on the assumed or calculated correlations <cit.>.It is therefore necessary to carefully evaluate the correlation coefficients used for the combination. In the case ofdeterminations many correlations can be reasonably motivated or computed. The correlation coefficients between individual measurements are motivated per uncertainty source. * Statistical uncertainties are considered uncorrelated for all experimental inputs. * Systematic uncertainties are considered fully correlated only for measurements obtained with the same detector. This concerns the measurements performed by CMS and ATLAS at different centre-of-mass energies. * Uncertainties due to beam energy are fully correlated between ATLAS and CMS and are taken to be correlated across energies.The beam-energy uncertainty at the Tevatron was tiny and is neglected, as outlined in the caption of Table <ref>. * Uncertainties due to luminosity are partially correlated between ATLAS and CMS. The correlated component of the luminosity uncertainty stems from the uncertainty on the bunch current density and similarities in the Van der Meer scan fit model.The correlated and uncorrelated uncertainties are estimated using the same principles as used for the top-quark pair production cross section combinations between ATLAS and CMS at 7 and 8 TeV <cit.>, updated with the latest luminosity determinations <cit.>.The luminosity uncertainty (as a percentage of the top-quark pair production cross section) is displayed in Table <ref>. The luminosity uncertainties onare taken to have the same correlation coefficient. The uncertainties on the predicted cross sections (due to the PDF, the top-quark mass and the renormalisation and factorisation scale)are generally strongly correlated. The combination result strongly depends on the assumed correlation structure of these theoretical uncertainties if included in the combination, which is usually not known precisely in particular for the scale uncertainty.We therefore adopt a different procedure: The individual results are simultaneously shifted up and down by their respective total theory uncertainties, and the combination is re-evaluated.The difference between the upper and lower bounds and the original combination is taken to be the (asymmetric) theoretical uncertainty.The impact of the alternative procedure of including also the theory uncertainties within a single combination is discussed in Appendix <ref>.§.§ Combining correlated measurements: Likelihood-based approach In order to combine the individual results, we opted for a likelihood-based approach <cit.>.[As a cross-check, we also used the Best Linear Unbiased Estimate procedure (BLUE) <cit.>. 
This is only suitable for symmetric errors and in that case we found essentially identical results.]In this approach a global likelihood function is constructed from the probability distribution functions of individual determinations.Let us suppose we have n_m measurements of the top cross section and associated determinations of . For each determination i, _,i, we have n_u uncorrelated error components, each specific to that determination.The magnitude of the k^th uncorrelated error for determination i is labelled Δ^k_i.We additionally have n_c error components that are correlated across all determinations.For each of the correlated components, j, we introduce a nuisance parameter θ_j that is common across all measurements.Its impact on measurement i is governed by a coefficient δ_i^j. The full set of θ_j will be denoted θ.The likelihood will be composed of a product of probability distribution functions (pdf)[ pdf, for probability density function, is not to be confused with PDF, for parton distribution function. ]. For each nuisance parameter we will have one pdf, a Gaussian distribution with a standard deviation of one: pdf_θ_j =1/√(2π) e^-θ_j^2/2 . There will also be a pdf for each combination of measurement i and associated uncorrelated error Δ^k_i. It is given by pdf_i, k(, θ) = 1/√(2π)Δ_i^k exp[ - (_,i+ ∑_j θ_j ·δ_i^j - )^2 / 2(Δ_i^k)^2 ]. To address the issue of errors that are not symmetric, we adopt the following prescription for the Δ_i^k and δ_i^j: Δ_i^k = {[ Δ_i^k, - if≤_,i; Δ_i^k, + if > _,i ]. ,δ_i^j = {[ δ_i^j, - if≤_,i; δ_i^j, + if > _,i ]. . An overview of the values used for δ_i^j, ± and Δ_i^k, ± is given in Appendix <ref>.The probability distribution function of determination i including all uncorrelated uncertainties is then constructed by convolution: pdf__,i(, θ) = pdf_i, 1(, θ) ⊗pdf_i, 2(, θ) ⊗⋯⊗pdf_i, n_u(, θ) where the convolution is performed such that the probability distribution functions are centred around _,i.The global likelihood function L(, θ ) is constructed by multiplication of the probability distribution functions of the determinations and the nuisance parameters: L(, θ ) =∏_i=1^n_mpdf__,i(, θ)× ∏_j=1^n_cpdf_θ_j . In order to complete the formalism of a statistical test the test statistic q is introduced: q() = -2 log L(, θ̂' ) / L( α̂_s , θ̂ ). Here L is maximized for variables that carry a hat and in general θ̂' will take on different values from θ̂.The quantity L( α̂_s , θ̂ ) is therefore the global maximum likelihood, and the ratio cannot be larger than one. The normalisation is such that q can be treated as χ^2-distributed with one degree of freedom.The test statistic q is scanned over a range ofvalues. The minimum of the scan, by construction at q=0, is the maximum likelihood value for , and the 1σ confidence interval is extracted from the interval between the intersection points of the scan with q=1. Any skewness of the parabola of the scan is due to the inclusion of asymmetric uncertainties.Figure <ref> shows the scan and the corresponding combination results for each of the PDF sets. § RESULTS AND DISCUSSIONThe combination procedure is performed for each of the two PDF sets taken into consideration at NNLO and at NNLO+NNLL separately. The combination results and their unweighted average are displayed numerically in Table <ref>, and graphically in Fig. <ref>.There is no unique way to quote a final best estimate ofbased on the results obtained from the different PDF sets and QCD calculation choices (NNLO v. 
NNLO+NNLL).An unbiased approach for combining results from different PDFs, in line with the PDF4LHC recommendations <cit.>, is to average without applying any further weighting.In accordance with that approach we take the straight average of the mean values and the uncertainties of the individual combinations.This coincides with the procedure for combiningresults from a single class of observables in Ref. <cit.>.The final result is=^+_- which can be compared to the result of Ref. <cit.>, (m_Z) = 0.1151^+0.0028_-0.0027.Our central value is larger mainly because recent measurements of the cross sections are higher than that used in Ref. <cit.>, but also in part because of our choice to take the average of results from NNLO and NNLO+NNLL cross sections (a 0.6% increase relative to just NNLO+NNLL).Our symmetrised uncertainty of % is somewhat increased with respect to that of Ref. <cit.>, 2.4% (symmetrised).The difference in uncertainty is due to several choices.On one hand we have taken a smaller uncertainty on the top-quark mass, in line with the PDG determination.One the other hand, we have been somewhat more conservative in our treatment of theoretical and PDF uncertainties.Firstly, the choice of treating the scale uncertainties onas a 68% confidence interval instead of a (flat) 100% confidence interval increases the scale uncertainty component by roughly a factor of √(3).Secondly, we have used an average of the uncertainties from NNLO and NNLO+NNLL cross section determinations, which also yields a larger uncertainty than using NNLO+NNLL cross section determinations only.Finally, the PDF sets used for the determination were chosen with minimization of potential biases in mind, rather than the ones with smallest uncertainty.§ CONCLUSIONS We have used seven measurements of the top-antitop quark production cross section at the LHC and the Tevatron in order to determine the strong coupling constant , using the CT14 PDF set and the NNPDF30_nolhc PDF set at NNLO and NNLO+NNLL.Overall, our determination ofyields a value that is compatible with the world average value and uncertainties that are somewhat larger than the best individual determinations, though comparable to that from the electroweak precision data <cit.>.The largest uncertainties are associated with unknown higher-order contributions and PDF uncertainties. § ACKNOWLEDGEMENTS TK is supported by grant Nr. 200020_162665 of the Swiss National Science Foundation.GPS is supported in part by ERC Advanced Grant Higgs@LHC (No. 321133) and also wishes to thank the Munich Institute for Astro- and Particle Physics for hospitality while this work was being completed.We are grateful to Andreas Jung regarding discussions of the Tevatron beam-energy uncertainty. § INCLUDING STRONGLY CORRELATED UNCERTAINTY SOURCES IN THE COMBINATIONOur approach of excluding strongly correlated uncertainties from the combination is generally recommended when using the covariance matrix to fit strongly correlated data <cit.>.To illustrate the effect of strong correlations, the combination is here performed again with the PDF, scale anduncertainties included in the combination one by one.The PDF and scale uncertainties are considered fully correlated between measurements made with the LHC, and partially correlated between measurements made with the LHC and Tevatron.In the case of the PDF uncertainties the degree of correlation between LHC and Tevatron measurements was determined using the procedure described in Ref. 
<cit.>.Theuncertainties are considered fully correlated for all measurements. Tables <ref> and <ref> show the results for the CT14 PDF set, using NNLO and NNLO+NNLL respectively, and Tables <ref> and <ref> for the NNPDF30_nolhc PDF set.As expected, the total uncertainty decreases as more sources are included in the combination.As the sensitivity tois stronger for a larger cross section, determinations that deviate up can have a smaller uncertainty, and therefore obtain a larger weight in the combination.This is the case for the determination from the ATLAS measurement at 7 when using the CT14 PDF set.A larger weight may also be obtained for determinations that are more independent with respect to the others. This is primarily the case for the Tevatron determination, for both PDF sets.These effects are enhanced if the overall correlation is increased by including strongly correlated uncertainty sources, which explains why the combination yields increasing values ofas more sources are included.The results found this way are larger than both the straight average and the median of the individual determinations, though the difference is well within one standard deviation. Taking them as our final results would imply a high degree of trust in the assumed correlations.Due to the inherent difficulty of determining correlations, notably as concerns the scale variations, and the importance of the subtle interplay between an individual determination's α_s result and its error, the conservative approach is to exclude the strongly correlated sources from the combination.§ OVERVIEW OF ASYMMETRIC UNCERTAINTIES USED IN THE COMBINATIONTables <ref> and <ref> show the numerical values for the uncertainty coefficients used in the combination procedure for the CT14 PDF set, using NNLO and NNLO+NNLL cross sections respectively.Only experimental uncertainties are listed. Theoretical uncertainties, which are taken into account after the combination procedure, can be found in Tables <ref>-<ref>.The correlations for the correlated uncertainties (with a δ symbol) are described in Section <ref>.99Baikov:2016tgj P. A. Baikov, K. G. Chetyrkin and J. H. Kühn, “Five-Loop Running of the QCD coupling constant,” Phys. Rev. Lett.118 (2017) no.8,082002[1606.08659 [hep-ph]].Herzog:2017ohr F. Herzog, B. Ruijl, T. Ueda, J. A. M. Vermaseren and A. Vogt, “The five-loop beta function of Yang-Mills theory with fermions,” JHEP 1702 (2017) 090[1701.01404 [hep-ph]].PDG C. Patrignani et al. [Particle Data Group Collaboration], “Review of Particle Physics,” Chin. Phys. C 40 (2016) 100001. Anastasiou:2016cez C. Anastasiou, C. Duhr, F. Dulat, E. Furlan, T. Gehrmann, F. Herzog, A. Lazopoulos and B. Mistlberger, “High precision determination of the gluon fusion Higgs boson cross-section at the LHC,” JHEP 1605 (2016) 058[1602.00695 [hep-ph]]. Aoki:2016frl S. Aoki et al., “Review of lattice results concerning low-energy particle physics,” Eur. Phys. J. C 77 (2017) no.2,112[1607.00299 [hep-lat]].McNeile:2010ji C. McNeile, C. T. H. Davies, E. Follana, K. Hornbostel and G. P. Lepage, “High-Precision c and b Masses, and QCD Coupling from Current-Current Correlators in Lattice and Continuum QCD,” Phys. Rev. D 82 (2010) 034512[1004.4285 [hep-lat]]. Bruno:2017gxd M. Bruno et al., “The strong coupling from a nonperturbative determination of the Λ parameter in three-flavor QCD,” [1706.03821 [hep-lat]].Abbate:2010xh R. Abbate, M. Fickinger, A. H. Hoang, V. Mateu and I. W. 
Stewart, “Thrust at N^3LL with Power Corrections and a Precision Global Fit for (m_Z),” Phys. Rev. D 83 (2011) 074021[1006.3080 [hep-ph]].Hoang:2015hka A. H. Hoang, D. W. Kolodrubetz, V. Mateu and I. W. Stewart, “Precise determination of α_s from the C-parameter distribution,” Phys. Rev. D 91 (2015) no.9,094018[1501.04111 [hep-ph]]. Gehrmann:2012sc T. Gehrmann, G. Luisoni and P. F. Monni, “Power corrections in the dispersive model for a determination of the strong coupling constant from the thrust distribution,” Eur. Phys. J. C 73 (2013) no.1,2265[1210.6945 [hep-ph]].Alekhin:2017kpj S. Alekhin, J. Blümlein, S. Moch and R. Placakyte,Phys. Rev. D 96 (2017) no.1,014011 doi:10.1103/PhysRevD.96.014011 [arXiv:1701.05838 [hep-ph]].CMS-ttbar-alphasS. Chatrchyan et al. [CMS Collaboration], “Determination of the top-quark pole mass and strong coupling constant from the t t-bar production cross section in pp collisions at √(s) = 7 TeV,” Phys. Lett. B 728 (2014) 496 [Corrigendum Phys. Lett. B 728 (2014) 526][1307.1907 [hep-ex]].CMS78TeV V. Khachatryan et al. [CMS Collaboration], “Measurement of the tt̅ production cross section in the e-μ channel in proton-proton collisions at √(s) = 7 and 8 TeV,” JHEP 1608 (2016) 029[1603.02303 [hep-ex]].CMS13TeV V. Khachatryan et al. [CMS Collaboration], “Measurement of the top quark pair production cross section in proton-proton collisions at √(s) = 13 TeV,” Phys. Rev. Lett.116 (2016) no.5,052002[1510.05302 [hep-ex]].CMS13TeVoneleptononejet A. M. Sirunyan et al. [CMS Collaboration], “Measurement of the tt̅ production cross section using events with one lepton and at least one jet in pp collisions at √(s)=13 TeV,” CMS-TOP-16-006, CERN-EP-2016-321 [1701.06228 [hep-ex]].ATLAS78TeV G. Aad et al. [ATLAS Collaboration], “Measurement of the tt̅ production cross-section using eμ events with b-tagged jets in pp collisions at √(s) = 7 and 8TeV with the ATLAS detector,” Eur. Phys. J. C 74 (2014) no.10,3109 Addendum: [Eur. Phys. J. C 76 (2016) no.11,642][1406.5375 [hep-ex]].ATLAS13TeV M. Aaboud et al. [ATLAS Collaboration], “Measurement of the tt̅ production cross-section using eμ events with b-tagged jets in pp collisions at √(s)=13 TeV with the ATLAS detector,” Phys. Lett. B 761 (2016) 136[1606.02699 [hep-ex]].D0CDFcombinationT. Aaltonen et al. [CDF Collaboration, D0 Collaboration], “Combination of measurements of the top-quark pair production cross section from the Tevatron Collider,” Phys. Rev. D 89 (2014), 072001. Boughezal:2015ded R. Boughezal, J. M. Campbell, R. K. Ellis, C. Focke, W. T. Giele, X. Liu and F. Petriello, “Z-boson production in association with a jet at next-to-next-to-leading order in perturbative QCD,” Phys. Rev. Lett.116 (2016) no.15,152001[1512.01291 [hep-ph]].Ridder:2016nkl A. Gehrmann-De Ridder, T. Gehrmann, E. W. N. Glover, A. Huss and T. A. Morgan, “The NNLO QCD corrections to Z boson production at large transverse momentum,” [1605.04295 [hep-ph]].Currie:2017tfd J. Currie, A. Gehrmann-De Ridder, T. Gehrmann, E. W. N. Glover, A. Huss and J. Pires, “Differential single jet inclusive production at Next-to-Next-to-Leading Order in QCD,” [1705.08205 [hep-ph]].Currie:2017tpe J. Currie, T. Gehrmann, A. Huss and J. Niehues, “NNLO QCD corrections to jet production in deep inelastic scattering,” [1703.05977 [hep-ph]].topplusplusM. Czakon and A. Mitov, “Top++: A Program for the Calculation of the Top-Pair Cross-Section at Hadron Colliders,” Comput. Phys. Commun.185 (2014) 2930[1112.5675 [hep-ph]].topplusplus2 M. Czakon, P. Fiedler and A. 
Mitov, “Total Top-Quark Pair-Production Cross Section at Hadron Colliders Through O(α_s^4),” Phys. Rev. Lett.110 (2013) 252004[1303.6254 [hep-ph]]. Beneke:2009rj M. Beneke, P. Falgari and C. Schwinn, “Soft radiation in heavy-particle pair production: All-order colour structure and two-loop anomalous dimension,” Nucl. Phys. B 828 (2010) 69[0907.1443 [hep-ph]].Czakon:2009zw M. Czakon, A. Mitov and G. F. Sterman, “Threshold Resummation for Top-Pair Hadroproduction to Next-to-Next-to-Leading Log,” Phys. Rev. D 80 (2009) 074017[0907.1790 [hep-ph]].Bagnaschi:2014wea E. Bagnaschi, M. Cacciari, A. Guffanti and L. Jenniches, “An extensive survey of the estimation of uncertainties from missing higher orders in perturbative calculations,” JHEP 1502 (2015) 133[1409.5036 [hep-ph]].GavinLHCPtalk G. P. Salam, p. 14 of “QCD Theory Overview: Towards Precision at LHC,” talk at fourth Annual Large Hadron Collider Physics Conference, June 2016, Lund, Sweden, <https://indico.cern.ch/event/442390/contributions/1095992/attachments/1290565/1921904/LHCP-QCD-43.pdf>Dreyer:2016oyx F. A. Dreyer and A. Karlberg, “Vector-Boson Fusion Higgs Production at Three Loops in QCD,” Phys. Rev. Lett.117 (2016) no.7,072001[1606.00840 [hep-ph]].LHCBeamMomentumCalibration E. Todesco and J. Wenninger, “Large Hadron Collider momentum calibration and accuracy,” Phys. Rev. Accel. Beams 20 (2017) no.8, 081003 OldBeamEnergy J. Wenninger,“Energy Calibration of the LHC Beams at 4 TeV”, Accelerators & Technology Sector Reports http://cds.cern.ch/record/1546734CERN-ATS-2013-040,CERN, 2013. Johnson:1988bx R. Johnson, “Tevatron energy calibration for the '87 collider run,” FERMILAB-EXP-156.LHAPDF A. Buckley, J. Ferrando, S. Lloyd, K. Nordström, B. Page, M. Rüfenacht, M. Schönherr and G. Watt, “LHAPDF6: parton density access in the LHC precision era,” Eur. Phys. J. C 75 (2015) 132[1412.7420 [hep-ph]]. CT14 S. Dulat et al., “New parton distribution functions from a global analysis of quantum chromodynamics,” Phys. Rev. D 93 (2016) no.3,033006[1506.07443 [hep-ph]].MMHT2014 L. A. Harland-Lang, A. D. Martin, P. Motylinski and R. S. Thorne, “Parton distributions in the LHC era: MMHT 2014 PDFs,” Eur. Phys. J. C 75 (2015) no.5,204[1412.3989 [hep-ph]].NNPDF30 R. D. Ball et al. [NNPDF Collaboration], “Parton distributions for the LHC Run II,” JHEP 1504 (2015) 040[1410.8849 [hep-ph]].InPrep S. Bethke, G. Dissertori, T. Klijnsma and G. P. Salam, in preparation. Ball:2017nwa R. D. Ball et al. [NNPDF Collaboration], “Parton distributions from high-precision collider data,” [1706.00428 [hep-ph]]. BLUE1L. Lyons, D. Gibaut and P. Clifford, “How to combine correlated estimates of a single physical quantity,” Nucl. Instrum. Meth. A 270 (1988) 110. topcombination_7TeV ATLAS Collaboration, CMS Collaboration, “Combination of ATLAS and CMS top-quark pair cross section measurements using proton-proton collisions at sqrt(s) = 7 TeV”, CMS-PAS-TOP-12-003 (2013). topcombination_8TeV ATLAS Collaboration, CMS Collaboration, “Combination of ATLAS and CMS top quark pair cross section measurements in the eμ final state using proton-proton collisions at √(s)= 8 TeV”, ATLAS-CONF-2014-054, CMS-PAS-TOP-14-016,(2014). Aad:2013ucp G. Aad et al. [ATLAS Collaboration], “Improved luminosity determination in pp collisions at sqrt(s) = 7 TeV using the ATLAS detector at the LHC,” Eur. Phys. J. C 73 (2013) no.8,2518[1302.4393 [hep-ex]]. Aaboud:2016hhf M. Aaboud et al. 
[ATLAS Collaboration], “Luminosity determination in pp collisions at √(s) = 8 TeV using the ATLAS detector at the LHC,” Eur. Phys. J. C 76 (2016) no.12,653[1608.03953 [hep-ex]]. CMS:2012rua CMS Collaboration [CMS Collaboration], “Absolute Calibration of the Luminosity Measurement at CMS: Winter 2012 Update,” CMS-PAS-SMP-12-008. CMS:2013gfa CMS Collaboration [CMS Collaboration], “CMS Luminosity Based on Pixel Cluster Counting - Summer 2013 Update,” CMS-PAS-LUM-13-001. CMS:2016eto CMS Collaboration [CMS Collaboration], “CMS Luminosity Measurement for the 2015 Data Taking Period,” CMS-PAS-LUM-15-001. likelihoodapproach G. Cowan, K. Cranmer, E. Gross and O. Vitells, “Asymptotic formulae for likelihood-based tests of new physics,” Eur. Phys. J. C 71 (2011) 1554, Erratum: [Eur. Phys. J. C 73 (2013) 2501][1007.1727 [physics.data-an]]. BLUE2A. Valassi, “Combining correlated measurements of several different physical quantities,” Nucl. Instrum. Meth. A 500 (2003) 391. PDF4LHC J. Butterworth et al., “PDF4LHC recommendations for LHC Run II,” J. Phys. G 43 (2016) 023001[1510.03865 [hep-ph]].Baak:2014ora M. Baak et al. [Gfitter Group], “The global electroweak fit at NNLO and prospects for the LHC and ILC,” Eur. Phys. J. C 74 (2014) 3046[1407.3792 [hep-ph]].DAgostini:1993arp G. D'Agostini, “On the use of the covariance matrix to fit correlated data,” Nucl. Instrum. Meth. A 346 (1994) 306.
{ "authors": [ "Thomas Klijnsma", "Siegfried Bethke", "Günther Dissertori", "Gavin P. Salam" ], "categories": [ "hep-ph", "hep-ex" ], "primary_category": "hep-ph", "published": "20170824170522", "title": "Determination of the strong coupling constant $α_s(m_Z)$ from measurements of the total cross section for top-antitop quark production" }
Seismic waves and earthquakes in a global monolithic model. Tomáš Roubíček^1,2 ^1 Mathematical Institute, Charles University, Sokolovská 83, CZ-18675 Praha 8, Czech Republic ^2 Institute of Thermomechanics, CAS, Dolejškova 5, CZ-18200 Praha 8, Czech Republic Abstract: The philosophy that a single “monolithic” model can “asymptotically” replace and couple in a simple elegant way several specialized models relevant on various Earth layers is presented and, in special situations, also rigorously justified. In particular, global seismicity and tectonics are coupled to capture e.g. (here by a simplified model) ruptures of lithospheric faults generating seismic waves which then propagate through the solid-like mantle and inner core both as shear (S) or pressure (P) waves, while S-waves are suppressed in the fluidic outer core and also in the oceans. The “monolithic-type” models have the capacity to describe all the mentioned features globally in a unified way, together with the corresponding interfacial conditions implicitly involved, only when scaling their parameters appropriately in the different Earth layers. Coupling of seismic waves with seismic sources due to tectonic events is thus an automatic side effect. The global ansatz is here based, rather for illustration, only on a relatively simple Jeffreys-type viscoelastic damageable material at small strains, whose various scaling limits can lead to a Boger viscoelastic fluid or even to a purely elastic (inviscid) fluid. Self-induced gravity field, Coriolis, centrifugal, and tidal forces are accounted for in our global model as well. The rigorous mathematical analysis concerning the existence of solutions, convergence of the mentioned scalings, and energy conservation is briefly presented. AMS Classification: 35K51, 35L20, 35Q86, 74J10, 74R20, 74F10, 76N17, 86A15, 86A17. Key words: waves; global seismicity; tectonic earthquakes; mathematical models; energy conservation; scaling; convergence proofs, existence of weak solutions. § INTRODUCTION Global geophysical models typically (have to) deal with several very different phenomena and couple various models due to the layered character of our planet Earth as well as of our Moon and many other terrestrial planets or other moons. This paper wants to demonstrate (and, in special situations, also rigorously justify) the philosophy that a single “monolithic” model can replace several specialized models coupled together. Such a single model can be easier to implement on computers algorithmically. Of course, such a monolithic-type model may not always make it computationally easier to produce really relevant simulations on the computers we have at our disposal nowadays. As purely seismic global 3D models are already treatable, as well as their coupling with the earthquake source at least locally, cf. e.g. <cit.> as well as <cit.>, respectively, there is hope that the monolithic coupling approach may become more amenable in the future with ever-increasing computer efficiency. The phenomena we have in mind in this paper involve global seismicity and tectonics. In particular, the latter involves e.g. ruptures of lithospheric faults generating seismic waves which then propagate through the solid-like silicate mantle and iron-nickel inner core both in shear (S) or pressure (P) modes. In contrast to the P-waves (also called primary or compressional waves), the S-waves (also called secondary waves) are suppressed in the fluidic iron-nickel outer core and also in the water oceans (where P-waves emitted from earthquakes in the crust may eventually manifest as tsunamis).
Typically, Maxwell-type rheologies are used in geophysical models of the solid mantle to capture long-term creep effects up to 10^5 yrs. Sometimes, more attenuation of Kelvin-Voigt type is also involved, which leads to the Jeffreys rheology <cit.>, cf. also <cit.>. This seems more realistic in particular because it covers (in the limit) also the Kelvin-Voigt model applied to the volumetric strain, whereas the pure Maxwell rheology allows for big creep during long geological periods, which is not a relevant effect in the volumetric part. We therefore take the Jeffreys model as the basic global “monolithic” ansatz and, by various limits in its deviatoric and volumetric parts, we model the different parts of the planet Earth.

Respecting the solid parts of the model, we use the Lagrangian description, i.e. here all equations are formulated in terms of displacements rather than velocities, while the reference and the actual space configurations automatically coincide with each other in our small-strain (and small-displacement) ansatz, which is well relevant on the geophysically short time scales of seismic events.

In the solid-like part, various inelastic processes are considered to model tectonic earthquakes on lithospheric faults, together with the long healing periods in between them, as well as aseismic slips and various other phenomena. To this goal, many internal variables may be involved, such as aging/damage, inelastic strain, porosity, water content, breakage, and temperature, cf. <cit.>. On the other hand, those sophisticated models are focused on rather local events around the tectonic faults without the ambition to be directly coupled with global seismicity. Cf. also <cit.> for models of rupturing lithospheric faults and, in particular, a relation with the popular Dieterich-Ruina rate-and-state friction model. Here, rather for the lucidity of the exposition, we reduce the set of internal variables to only one scalar variable, namely damage/aging, which however has the capacity to trigger a spontaneous rupture (so-called dynamic triggering) with emission of seismic waves and, in a certain simplification, can serve as a seismic source coupled with the overall global model. Also, this simple model will already well illustrate the mathematical difficulties related to nonlinearities in the solid parts coupled with linear but possibly hyperbolic fluidic regions.

Let us emphasize that usual models focus only either on the propagation of seismic waves along the whole globe while their source is considered given, or on the description of seismic sources due to tectonic events, but not on their mutual coupling. If a coupling is considered, then it concerns rather local models not considering the layered structure of the whole planet, cf. e.g. <cit.>. The reality ultimately combines very different mechanical properties of the different layers of the Earth, in particular the mantle and the inner core, which are solid on short time scales, versus the outer core and the oceans, which are fluidic even on short time scales.

The goals of this article are:

(i) to propose a model that might capture simultaneously the propagation of seismic waves over the whole planet and their nonlinearly behaved sources (like ruptures of tectonic faults, here modelled only in a very simplified way for a relatively lucid illustration of the modelling procedure), both mutually coupled;
(ii) by proper scaling, to approximate viscoelastic Boger-type <cit.> fluids, which is relevant in the outer core and in the oceans (with a very low viscosity), where S-waves can then only slightly penetrate and are fast attenuated, while P-waves are only refracted;

(iii) by limiting the viscosity in the outer core or the oceans further to zero, to approximate this viscoelastic fluid towards an elastic (completely inviscid) fluid, respecting the phenomenon that S-waves cannot penetrate into these fluidic regions and are fully reflected on the interfaces between the outer core and the mantle (= Gutenberg's discontinuity) and the inner core, or on the ocean beds, i.e. between the solid region Ω_s and the fluidic region Ω_f, while P-waves propagate through these interfaces, being both refracted and reflected on them;

(iv) to perform the rigorous analysis as far as the existence of solutions, a-priori estimates in specific norms, and the convergence towards other models, which justifies the particular models, their energetics and asymptotics, and can support numerical stability and convergence when discretised and implemented on computers.

The plan of this article is the following: In Section <ref>, the general monolithic model is introduced and also specialized for a viscoelastic linear isotropic material possibly undergoing damage in the elastic shear part. In Sections <ref> and <ref>, we formulate the limits towards viscoelastic and purely elastic fluids, respectively. These limits concern only the subdomain Ω_f, i.e. the outer core and the oceans. To make the presentation more lucid and readable also for geophysical specialists outside the mathematical-analyst community, the mathematical analysis of the models and the necessarily somewhat technical proofs of the statements formulated in Sections <ref>–<ref> are intentionally presented as an Appendix later in Section <ref>. For it, we use a conceptually constructive approximation by the Galerkin method, which is in some variant also used in the geophysical literature, cf. e.g. <cit.>.

§ PHILOSOPHY AND AN EXAMPLE OF A MONOLITHIC MODEL

Rather for illustration of our main focus, the coupling of solid and fluidic models, we ignore most of the above mentioned internal variables and keep only the scalar-valued aging/damage variable, denoted by α, and then we consider the mentioned Jeffreys rheology combined with damage influencing only the elastic but not the viscous part. The differential relation governing this rheology is of the type “σ + σ̇ = ė + ë” with some specific coefficients (here ignored), some of them being possibly subject to damage. Here and in what follows, the superimposed dot (as in u̇) stands for the time derivative ∂/∂t. Alternatively, to make the damage dependence more lucid, one can use a system of first-order equations in time when implementing the concept of internal variables, i.e. here introducing a Maxwellian-type creep strain π and making a Green-Naghdi <cit.> additive decomposition of the total strain e(u) = e_el + π. So, considering also the damage/aging variable, altogether our set of internal variables will be (π, α).

The main ingredients of the model are then, beside the mass density ϱ = ϱ(x), the specific stored energy and the dissipation potential as a property of the material

φ = φ(x, u, e_el, α, ∇α, ϕ, ∇ϕ),  ζ = ζ(x, ė_el, π̇, α̇).

Note that φ and ζ naturally do not explicitly depend on the total strain e(u) but rather on the elastic strain e_el and possibly on the creep strain π and their rates; in fact, φ is independent of π because naturally no hardening-like effects are relevant in geophysical models.
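To see why the first-order system with the creep strain π reproduces the quoted second-order relation, it may help to carry out the elimination in the scalar (1D) setting; the following computation is only our illustrative sketch, with generic moduli E (elastic spring), η_KV and η_MX (Kelvin-Voigt and Maxwell viscosities) that are not notation from the original text. Take

σ = E e_el + η_KV ė_el,   σ = η_MX π̇,   e = e_el + π.

Substituting ė_el = ė − σ/η_MX into the time-differentiated first relation gives

(E/η_MX) σ + (1 + η_KV/η_MX) σ̇ = E ė + η_KV ë,

which is exactly of the announced form “aσ + bσ̇ = cė + dë”, with the coefficients determined by the moduli, some of which may be damage-dependent.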
In what follows, we will often omit the explicit dependence on the x variable. For the reader's convenience, Table 1 summarizes the main nomenclature used in this paper.

To formulate the equations for the dynamics of the displacement u, it is more natural in the Kelvin-Voigt model (which is a part of the Jeffreys rheology) to express the stored-energy and dissipation potentials rather in terms of the total strain e = e(u) and the creep π, decomposed from e as

φ̃(x, u, e, π, α, ∇α, ϕ, ∇ϕ) = φ(x, u, e−π, α, ∇α, ϕ, ∇ϕ),  ζ̃(ė, π̇, α̇) = ζ(ė−π̇, π̇, α̇).

In terms of these potentials, our system in the abstract form is:

ϱü + φ̃'_u(∇ϕ) − div( ζ̃'_ė(e(u̇), π̇) + φ̃'_e(e(u), π, α) ) = f_COR(u̇)  in Ω,
ζ̃'_π̇(e(u̇), π̇) + φ̃'_π(e(u), π, α) = 0  in Ω,
ζ̃'_α̇(α̇) + φ̃'_α(e(u), π, α) − div( φ̃'_∇α(∇α) ) = 0  in Ω,
φ̃'_ϕ(u, ϕ) − div( φ̃'_∇ϕ(∇ϕ) ) = ϱ_ext(t)  in ℝ³,

where (·)' denotes the (partial) derivatives, for which we have already assumed a certain particular form of φ̃ and ζ̃ so that the partial derivatives of φ̃ and ζ̃ do not depend on the full list of arguments; thus, e.g., instead of φ̃'_e(u, e(u), π, α, ∇α, ϕ, ∇ϕ) we wrote only φ̃'_e(e(u), π, α), etc. Note that, in system-u, we also used a bulk force f_COR(u̇) which may not come from the dissipation potential in the classical way f_COR(u̇) = ζ̃'_u̇; this is in particular the case of the Coriolis (pseudo) force, which itself does not vanish but whose dissipation rate nevertheless vanishes, f_COR(u̇)·u̇ = 0.

The rheological responses under volume variation and under shear may (and do) substantially differ from each other. To distinguish these geometrical aspects, the total strain is decomposed into the spherical (also called hydrostatic or volumetric) strain and the deviatoric (also called shear) strain:

e = sph e + dev e  with  sph e = (tr e/3) 𝕀  and  dev e = e − (tr e/3) 𝕀,

where 𝕀 is the identity matrix and “tr” denotes the trace of a matrix. Note that the deviatoric and the spherical strains from decomposition-of-strain are orthogonal to each other; indeed, sph e : dev e = (tr e/3) tr(dev e) = 0. In terms of this decomposition, for example the isotropic elastic (Lamé) material at small strains has the quadratic stored energy

φ_Lamé(e) = ½ λ (tr e)² + G_E |e|² = ½ (λ + ⅔ G_E)(tr e)² + G_E |e − (tr e/3)𝕀|²
 = ½ (3λ + 2G_E) |sph e|² + G_E |dev e|² = (3/2) K_E |sph e|² + G_E |dev e|²  with  K_E = λ + ⅔ G_E,

where we used |sph e|² = (tr e/3)² |𝕀|² = 3 (tr e/3)² = (tr e)²/3 and the mentioned orthogonality of sph e and dev e, and where K_E is called the bulk modulus and λ is the (first) Lamé coefficient while G_E is the shear modulus (= the second Lamé coefficient). Then the corresponding stress reads

φ'_Lamé(e) = λ (tr e) 𝕀 + 2G_E e =: σ_sph + σ_dev  with  σ_dev = 2G_E dev e  and  σ_sph = 3K_E sph e.

We will use this decomposition both for the elastic and for the viscous parts of our model and let the elastic shear modulus G_E, and thus also φ, depend on damage. Yet this dependence would bring mathematical difficulties (like a lack of integrability of the term ∂_α G_E(α)|dev e_el|² that would arise in system-alpha-w-, for example). Here, various modifications of the Lamé-type model can help. One option is to consider higher strain gradients, i.e. the concept of nonsimple continua, cf. also <cit.>. Another (conceptually and technically simpler) option is to modify the stored energy by considering a non-quadratic e ↦ 𝒢_E(α, dev e) instead of e ↦ G_E(α)|dev e|².
A canonical option one may have in mind is

𝒢_E(α, e_el) = 𝒢_E(α, dev e_el) = G_E(α) |dev e_el|² / √(1 + |dev e_el|²/E_M²)

for a (presumably large) regularizing parameter E_M. In accord with Figure <ref>, in KV-limit+ and in what follows, we quite naturally assume that 𝒢_E does not depend on the spherical part of the strain, which itself usually cannot cause any damage in the rock. The meaning of E_M in (<ref>) is clear from realizing that, if the deviatoric strain is substantially smaller in the sense |dev e_el| ≪ E_M, then the difference from the original Lamé-type model is negligible, i.e. the corresponding stress contribution ∂_e 𝒢_E(α, e_el) ∼ 2G_E(α) dev e_el. On the other hand, this modification is effective for large strains and makes e_el ↦ φ(x, u, e_el, α, ∇α, ϕ, ∇ϕ) nonquadratic but still convex, now with at most linear growth, so that the elastic part of the deviatoric stress always stays below G_E(α) E_M. Thus, for E_M large, this is effectively not a substantial (and is a physically well acceptable) modification of the original model. We therefore consider the material stored energy and the material dissipation-force potential from phi-zeta governing the problem as

φ = φ(x, u, e_el, α, ∇α, ϕ, ∇ϕ)
 = (3/2) K_E |sph e_el|² + 𝒢_E(α, dev e_el) + (κ/2)|∇α|² + ϱϕ + ϱ∇ϕ·u + |∇ϕ|²/(8πg) + ½ϱ( |ω·(x+u)|² − |ω|² |x+u|² )  if x ∈ Ω,
 = φ(∇ϕ) = |∇ϕ|²/(8πg)  if x ∈ ℝ³∖Ω,

ζ = ζ(x, ė_el, π̇, α̇) = (3/2) K_KV |sph ė_el|² + G_KV |dev ė_el|² + (3/2) K_MX |sph π̇|² + G_MX |dev π̇|² + η(α̇)

with g ≐ 6.674×10⁻¹¹ m³ kg⁻¹ s⁻² the gravitational constant. For the ϕ-terms in def-of-phi see e.g. <cit.>. All the coefficients K's, G's, and κ are naturally considered defined on Ω and x-dependent (which is not explicitly written in def-of-phi-zeta for brevity). Also the mass density is considered x-dependent and, because of system4, defined on the whole space ℝ³ by putting ϱ(x) = 0 if x ∈ ℝ³∖Ω. Keeping K_E independent of damage in def-of-phi reflects the phenomenon that compression does not cause damage and pure tension is not a relevant mode in geophysical models, so only shear causes damage, through the dependence of the elastic shear modulus G_E in KV-limit+ and def-of-phi on α. The last term in def-of-phi is the potential of the centrifugal force ϱω×(ω×(x+u)) and, noteworthy, it violates coercivity due to the term −½ϱ|ω|²|x+u|², which reflects the real phenomenon that, for a given angular velocity ω, the centrifugal force can indeed inflate the body in an unlimited way in the direction orthogonal to the rotation axis. Of course, in reality this is either not observed in planetary/satellite bodies or the angular velocity cannot be taken a priori fixed in some large bodies.

For shorter notation, we define the 4th-order tensors corresponding to the isotropic material expressed in terms of the (K,G)-moduli in def-of-phi-zeta, namely

[ℂ(α,e)]_ijkl = K_E δ_ij δ_kl + ∂²_ee 𝒢_E(α,e) (δ_ik δ_jl + δ_il δ_jk − ⅔ δ_ij δ_kl),
[ℂ_MX]_ijkl = K_MX δ_ij δ_kl + G_MX (δ_ik δ_jl + δ_il δ_jk − ⅔ δ_ij δ_kl),
[ℂ_KV]_ijkl = K_KV δ_ij δ_kl + G_KV (δ_ik δ_jl + δ_il δ_jk − ⅔ δ_ij δ_kl),

where the δ's denote the Kronecker symbol. In what follows, we will take specifically f_COR(u̇) = 2ϱω×u̇, which is the standard Coriolis force with ω the given vector of angular velocity related to the constant rotation of the Earth with respect to the inertial system.
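As a quick consistency check (our own computation, not contained in the original text), applying e.g. ℂ_KV as just defined to a symmetric strain rate ė recovers exactly the (K,G)-split of the stresses used above: componentwise,

ℂ_KV ė = K_KV (tr ė) 𝕀 + 2G_KV ( ė − (tr ė/3) 𝕀 ) = 3K_KV sph ė + 2G_KV dev ė,

and analogously for ℂ_MX; this is consistent with the decomposition σ = σ_sph + σ_dev introduced above.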
The general system system then takes the more specific form:

ϱü − div σ + f = 0  with σ = ℂ_KV ė_el + ℂ(α, dev e_el) e_el
 and f = ϱ( ω×(ω×(x+u)) + 2ω×u̇ + ∇ϕ )  in Ω,
ℂ_MX π̇ = σ  in Ω,
∂η(α̇) + ∂_α 𝒢_E(α, dev e_el) − div(κ∇α) ∋ 0  in Ω,
div( ∇ϕ/(4πg) + ϱu ) = ϱ + ϱ_ext(t)  in ℝ³.

The bulk force f in system-u involves the centrifugal force, the Coriolis force, and the gravity force due to the self-induced gravity field. This last force plays a certain role in ultra-low-frequency (i.e. very long wavelength) seismic waves, cf. e.g. <cit.>. For such a right-hand side in system-u+ together with system4 see e.g. <cit.>. Actually, this self-gravity interaction results after a certain linearization of the system originally written at large strains, cf. also <cit.>. The external time-varying mass density ϱ_ext = ϱ_ext(x,t) occurring in system4 allows for the involvement of tidal forces arising from other bodies (stars, planets, moons) moving around. The notation ∂η in system-alpha stands for the subdifferential of the convex function η, which standardly generalizes the usual derivative if η is non-smooth; by definition, the inclusion “∋” there actually means, for any v and a.a. (t,x) ∈ Q, the inequality

( v − α̇(t,x) ) ( ∂_α 𝒢_E(x, α(t,x), dev e_el(t,x)) − div(κ∇α(t,x)) ) + η(v) ≥ η(α̇(t,x)).

To facilitate spontaneous rupture on some weak narrow layers (pre-existing faults in the solid crust) that leads to earthquakes, some “sufficiently pronounced” nonconvexity of φ is necessary. One option was devised in <cit.>, introducing non-convexity in terms of e_el if damage develops enough, and was used in a series of papers, see e.g. <cit.> and references therein. A mathematically rigorous setting of this model however requires some higher-order strain gradients (the so-called 2nd-grade nonsimple materials) and is rather technical, cf. <cit.>. An alternative nonconvexity can be implemented through 𝒢_E(·,e), causing possible weakening when damage develops, as opposed to so-called cohesive damage. To avoid the mentioned mathematical technicalities, we have chosen the latter option.

To see the energetics behind the system system, we test the particular equations in system by u̇, π̇, α̇, and ϕ̇, respectively. Thus we obtain (at least formally) the energy balance

d/dt ( ∫_Ω (ϱ/2)|u̇|² dx + ∫_ℝ³ φ̃(u, e(u), π, α, ∇α, ϕ, ∇ϕ) dx ) + ∫_Ω ξ(e(u̇), π̇, α̇) dx = ∫_ℝ³ ϱ_ext(t) ϕ̇ dx,

where ξ = ξ(ė, π̇, α̇) is the dissipation rate related to the dissipation potential ζ̃ = ζ̃(ė, π̇, α̇) by ξ(ė, π̇, α̇) = (ė, π̇, α̇)^⊤ ∂ζ̃(ė, π̇, α̇). Using def-of-phi-zeta and C-D, we can make it more specific for the system system+ when using also the calculus

∫_Ω ϱ∇ϕ̇·u − div(ϱu̇)ϕ dx = ∫_Ω ϱ∇ϕ̇·u + ϱu̇·∇ϕ dx = d/dt ∫_Ω ϱ∇ϕ·u dx

and realizing that, in view of def-of-zeta- and def-of-zeta,

ξ(ė, π̇, α̇) = 3K_KV |sph(ė−π̇)|² + 2G_KV |dev(ė−π̇)|² + 3K_MX |sph π̇|² + 2G_MX |dev π̇|² + α̇ η'(α̇)
 = ℂ_KV (ė−π̇) : (ė−π̇) + ℂ_MX π̇ : π̇ + α̇ η'(α̇)

when using also the notation (<ref>b,c). Thus engr-abstract reads more specifically here as

d/dt ( ∫_Ω (ϱ/2)|u̇|² + (3/2)K_E |sph(e(u)−π)|² + 𝒢_E(α, dev(e(u)−π)) + ϱϕ + ϱ∇ϕ·u + (κ/2)|∇α|² + ½ϱ( |ω·(x+u)|² − |ω|²|x+u|² ) dx + ∫_ℝ³ |∇ϕ|²/(8πg) dx )
 + ∫_Ω ℂ_KV (e(u̇)−π̇) : (e(u̇)−π̇) + ℂ_MX π̇ : π̇ + α̇ ∂η(α̇) dx = ∫_ℝ³ ϱ_ext(t) ϕ̇ dx.

Here one should naturally assume that η is nonsmooth possibly only at α̇ = 0, so that the dissipation rate α̇ ∂η(α̇) is actually a single-valued function. The Coriolis force does not occur in engr because ω×u̇ is always orthogonal to u̇, so that (ω×u̇)·u̇ = 0. This is the well-known effect that the Coriolis (pseudo) force does not do any work.

Denoting by n⃗ the (unit) normal to the boundary ∂Ω of Ω, i.e.
the Earth surface, we complete the system by natural initial and boundary conditions:

u|_{t=0} = u_0,  u̇|_{t=0} = v_0,  π|_{t=0} = π_0,  α|_{t=0} = α_0,
σn⃗ = 0 on ∂Ω,  ϕ(∞) = 0,  and  (κ∇α)·n⃗ = 0 on ∂Ω.

Actually, it would also make good sense to consider periodic conditions in time instead of initial conditions, but the analysis of such a problem would be more difficult.

It is a conventional modelling property that α ranges over the interval [0,1], with α = 0 meaning no damage while α = 1 corresponds to a completely disintegrated rock. To keep the damage model relatively simple without causing unnecessary analytical complications (leading e.g. to implementing the concept of nonsimple materials), the simplest modelling assumption that ensures the mentioned constraint 0 ≤ α ≤ 1 is

∂_α 𝒢_E(0,e) = 0 = ∂_α 𝒢_E(1,e).

This facilitates the analysis of the model, sketched in the Appendix below. Without going into details here, we can summarize the main theoretical result as:

Proposition. The initial-boundary-value problem system+–BC-IC admits a weak solution in the sense of Definition <ref> below. This solution conserves energy in the sense that engr holds when integrated over any time interval [0,t] with 0 < t ≤ T.

Some other modifications of the “solid” model used in the geophysical literature make the evolution of the creep π activated, in the spirit of what is standardly used in plasticity. Then ζ should be augmented by a term like |dev π̇| and, to facilitate the mathematical analysis, φ is then to be augmented by a term like |∇dev π|². Then π is called an inelastic strain rather than a Maxwellian creep. This model may be relevant in particular to provide an additional dissipation of energy, important during big earthquakes, and a damage-dependent yield stress that may facilitate fast rupture.

In the standard presentation of the self-gravitating model, the evolving potential ϕ is rather a perturbation of another gravitational potential (constant in time), resulting from a steady-state equilibrium configuration at a chosen initial time with the Coriolis force zero and the centrifugal force ϱω×(ω×x). The sum of these two is sometimes referred to as a geopotential. Then another force occurs in system-u+, causing the body Ω to be pre-stressed. Cf. e.g. <cit.>.

§ TOWARDS BOGER VISCOELASTIC FLUID IN THE OUTER CORE AND OCEANS

So-called Boger's viscoelastic fluids are standardly specified as constant-viscosity elastic (non-Newtonian) fluids that behave as both liquids and solids. Originally, this concept was devised rather for materials like dilute polymer solutions. Yet, such an (idealized) material may model the fluid outer core of the Earth, which is a 2200 km thick layer between the (rather solid) inner core of radius about 1300 km and the (rather solid) mantle, cf. Fig. <ref>. The elasticity in the volumetric part is essential because it allows for the propagation of P-waves, which is a well documented phenomenon in the outer core, while S-waves practically do not penetrate this region because of its fluidic character as far as the shear response is concerned, cf. Figure <ref>.

Knowing quite reliably the speed v_P of the P-waves and the mass density ϱ depending on the depth, the elastic bulk modulus K_E = K_E(x) in the outer core is easily seen from the formula v_P = √(M/ϱ), where M = λ + 2G_E = K_E + (4/3)G_E is the so-called P-wave modulus; here G_E = G_E(α) as in KV-limit+. In the elastic fluid G_E = 0, so that M = K_E.
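For orientation, this inversion can be carried out with round numbers; the following evaluation is our illustrative check (the depth-dependent values quoted in the next paragraph come from the original text):

K_E = ϱ v_P² ≈ 10⁴ kg/m³ × (8×10³ m/s)² = 6.4×10¹¹ Pa = 640 GPa

for the top of the outer core, in agreement with the range quoted below.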
One can indeed determine it simply as K_E = ϱ v_P² when ϱ and v_P are known. According to the reliably documented speed of the seismic P-waves, varying from 8 km/s (top) to 10 km/s (bottom), and the mass density, varying from 10000 kg/m³ (top) to 12000 kg/m³ (bottom), the elastic bulk modulus ranges from K_E = 640 GPa (top) to 1200 GPa (bottom). For oceanic layers, it is relevant that the bulk modulus of seawater is about K_E = M = 2.3 GPa (increasing with pressure), which is, e.g., about 80× smaller than in steel and about 5× smaller than in the crust; thus water is remarkably elastically compressible. Modelling the oceans as an elastic fluid is thus relevant for the propagation of seismic P-waves, which may manifest themselves on the surface as tsunami waves, cf. e.g. <cit.>. Note that the speed of P-waves close to the sea surface, where ϱ ∼ 10³ kg/m³, is v_P = √(M/ϱ) ∼ 1.5 km/s, which is the sound speed, so underwater acoustics exploits the same mechanism as seismic P-waves.

In addition to the elastic response, there is a (rather small) viscosity (K_KV, G_KV) in the fluidic outer core and in the oceans, too. In the outer core, some data indicate it to be of the order 10⁻² Pa s, cf. <cit.>, while considerable uncertainty and a big variation within depth up to 10⁴ Pa s also seem documented in the literature, see e.g. <cit.>. The viscosity of water is even lower, of the order 10⁻³ Pa s, varying with temperature and pressure.

Here we can arrive at a viscoelastic (Boger's) fluid as a limit from the previous damageable Jeffreys rheology when sending G_E(α) → 0, so that, in particular, the resulting material naturally will not be subject to any damage, and simultaneously K_MX → ∞ and G_MX → ∞. As a result, π → 0, and eventually system-pi is to hold only on Ω_s instead of the whole Ω, while system-alpha still holds on the whole Ω, but ∂_α 𝒢_E(α, e_el) is zero on Ω_f and the values of the damage α are actually irrelevant for system-u+ on Ω_f. More specifically, the system that results from system+ by this limit procedure reads:

ϱü − div(σ_sph + σ_dev) + ϱ( ω×(ω×(x+u)) + 2ω×u̇ + ∇ϕ ) = 0 in Ω with
σ_sph = 3K_KV sph ė_el + 3K_E sph e_el in Ω_s,  3K_KV sph e(u̇) + 3K_E sph e(u) in Ω_f,  and
σ_dev = 2G_KV dev ė_el + ∂_e 𝒢_E(α, dev e_el) in Ω_s,  2G_KV dev e(u̇) in Ω_f,
ℂ_MX π̇ = σ_sph + σ_dev in Ω_s,
∂η(α̇) + ∂_α 𝒢_E(α, dev e_el) − div(κ∇α) ∋ 0 in Ω_s,
div( ∇ϕ/(4πg) + ϱu ) = ϱ + ϱ_ext(t) in ℝ³.

The interface conditions for displacement/stress on the interior boundaries between the mantle and the outer core, between the inner and the outer core, as well as between the mantle (crust) and the oceans are automatically involved, as system-u++ holds on the whole Ω. In particular, the stress-vector equilibrium and the continuity of the displacement across these interfaces are automatically involved and need not be written explicitly. On the other hand, note in particular that both π and α in system++ are now needed and defined only in the solid part Ω_s. Therefore, the condition for the “flux” of α is now to be prescribed on all interior boundaries (i.e. on the mantle/core, mantle/oceans, and inner/outer-core interfaces). More specifically, BC-IC-alpha is to be replaced by

(κ∇α)·n⃗ = 0 on ∂Ω_s.

We can justify this limit-passage scenario rigorously. To this goal, let us choose

G_MX,ε(x) = G_MX(x)/ε for x ∈ Ω_f,  G_MX(x) for x ∈ Ω_s,
K_MX,ε(x) = K_MX(x)/ε for x ∈ Ω_f,  K_MX(x) for x ∈ Ω_s,
𝒢_E,ε(x,α,e) = ε 𝒢_E(x,α,e) for x ∈ Ω_f,  𝒢_E(x,α,e) for x ∈ Ω_s,
κ_ε(x) = ε κ(x) for x ∈ Ω_f,  κ(x) for x ∈ Ω_s,
η_ε(x,α̇) = ε η(x,α̇) for x ∈ Ω_f,  η(x,α̇) for x ∈ Ω_s,

while the other visco-elastic moduli K_E, K_KV, and G_KV are kept fixed.
The following statement will be made more specific and proved in the Appendix:

Proposition. Let (u_ε, π_ε, α_ε, ϕ_ε) be some solution of the initial-boundary-value problem system+–BC-IC with the data G-K-eps, which does exist due to Proposition <ref>. Then (u_ε, π_ε|_{Ω_s}, α_ε|_{Ω_s}, ϕ_ε) converge (in terms of subsequences) for ε → 0 to weak solutions of system++ with the initial/boundary conditions (<ref>a,b)–BC-IC-alpha+. In particular, a weak solution of the initial-boundary-value problem system++–(<ref>a,b)–BC-IC-alpha+ in the sense of Definition <ref> below does exist. In addition, this solution conserves energy. Moreover, the energy dissipated through the Maxwell-type attenuation and by damage in the fluidic regions Ω_f over the time interval I converges to zero, i.e.

∫_{I×Ω_f} (3/2) K_MX,ε |sph π̇_ε|² + G_MX,ε |dev π̇_ε|² + ε α̇_ε η'(α̇_ε) dx dt → 0  for ε → 0.

It should be emphasized that, although Max-limit, similarly to KV-limit below, is intuitively quite clear and generally expected in the geophysical literature, its rigorous proof is not entirely trivial and relies on a regularity which is rather automatic in linear systems but may be highly nontrivial or even false in nonlinear hyperbolic systems, as basically also here, cf. <cit.>.

Instead of G_MX → ∞ and G_E(α) → 0, one could think about sending G_KV → ∞. Then, assuming the initial condition dev e_el|_{t=0} = 0, we would have dev e_el → 0. Hence, in the limit we would see the model from Fig. <ref> only with G_KV replaced by G_MX. Since the viscosity in fluidic regions is typically small (as discussed in particular in the next Section <ref>), like also G_KV, while G_MX is typically large in solid-like geophysical materials, our choice G-K-eps is more straightforward. Another difference would be that, instead of π → 0, we would have only sph π → 0 while dev π → dev e(u).

Actually, the limit K_MX → ∞ is relevant also in the solid part Ω_s. This causes sph π → 0, and in the limit one obtains a model where π is trace-free and accumulates only shear strain, which is the most common ansatz in plasticity and creep, and in geophysical modelling, too. Passing simultaneously K_E → ∞, we obtain in the limit the Stokes incompressible fluid. The bulk viscosity K_KV then becomes irrelevant, so that only the shear viscosity G_KV remains relevant in Figure <ref>. Such a model is a great idealization and has limited applicability because, besides v_S → 0, now nonphysically v_P → ∞. In particular, in this limit, sph e(u) is constant in time, namely sph e(u(x,t)) = ⅓ div u_0(x) 𝕀. Therefore div u̇ = 0, which expresses the usual incompressibility condition. Such models are often used in geophysics on short time scales, when the convective term ϱ(u̇·∇)u̇, occurring in the Navier-Stokes equation, can be neglected. Cf. e.g. <cit.> for a self-gravitating incompressible Stokes model in a layered geoid.

§ SUPPRESSING VISCOSITY TOWARDS PURELY ELASTIC FLUID

As already said, the viscosity (K_KV, G_KV) in the fluidic domains is only very small (and not even known with certainty in the deep parts of the outer core). Note also that, when G_E = 0 in the fluidic domains, G_KV is in the position of a Maxwellian viscosity. For example, in the mantle, 10²–10⁴ Pa s is considered in <cit.>. As for the fluidic regions, as said above, the oceans exhibit a viscosity around 10⁻³ Pa s while the outer core around 10⁻²–10⁴ Pa s. In any case, these viscosities are much smaller than the typical viscosity in the crust, which is of the order 10²²–10²⁴ Pa s, or in the inner core, of the order about 10¹⁴–10¹⁵ Pa s, see e.g. <cit.>.
This certainly gives good motivation to study the asymptotics when this viscosity goes to zero. In the limit, it yields an inviscid, purely elastic model where the Hooke elasticity counts only with the spherical response, while the shear-stress-free response imitates an ideal inviscid fluid; sometimes such models are called compressible Euler fluids. Such materials are called elastic fluids or, more specifically, just compressible inviscid fluids. These fluids still allow for the propagation of P-waves (whose speed is √(K_E/ϱ), as already mentioned) while S-waves are completely excluded. The resulting system is again (<ref>a,d-f) but now with

σ_sph = 3K_KV sph(e(u̇)−π̇) + 3K_E sph(e(u)−π) in Ω_s,  3K_E sph e(u) in Ω_f,
σ_dev = 2G_KV dev(e(u̇)−π̇) + ∂_e 𝒢_E(α, dev(e(u)−π)) in Ω_s,  0 in Ω_f.

The weak formulation of the limit problem, as far as the force equilibrium (<ref>a,d) with system+++ is concerned, arises when testing system-u++ by a smooth test function v with v(T) = 0 and dev e(v) = 0 on Ω_f, making a by-part integration in time, and applying the Green formula in space, cf. Definition <ref> in the Appendix. We can justify this limit-passage scenario rigorously. To this goal, let us choose

K_KV,ε(x) = ε K_KV(x) for x ∈ Ω_f,  K_KV(x) for x ∈ Ω_s,
G_KV,ε(x) = ε G_KV(x) for x ∈ Ω_f,  G_KV(x) for x ∈ Ω_s,

while G_E(α) = 0 and K_MX = G_MX = ∞ on Ω_f, resulting already from the limit in Section <ref>. The following statement will be proved in the Appendix:

Proposition. If the initial conditions are smooth enough (cf. the assumptions in Definition <ref> below), the solutions (u_ε, π_ε, α_ε, ϕ_ε) of the initial-boundary-value problem system++–(<ref>a,b)–BC-IC-alpha+ with the data KV-eps, which do exist due to Proposition <ref>, converge (in terms of subsequences) for ε → 0 to weak solutions of (<ref>a,d-f)–(<ref>a,b)–BC-IC-alpha+ with system+++. In particular, a weak solution of this initial-boundary-value problem in the sense of Definition <ref> below does exist. Moreover, this solution conserves energy, and the energy dissipated through the Kelvin-Voigt-type attenuation in the fluidic regions Ω_f over the time interval I converges to zero, i.e.

∫_{I×Ω_f} (3/2) K_KV,ε |sph e(u̇_ε)|² + G_KV,ε |dev e(u̇_ε)|² dx dt → 0  for ε → 0.

§ APPENDIX: ANALYSIS SKETCHED

In what follows, we use the (standard) notation L^p for the Lebesgue spaces and W^{k,p} for the Sobolev spaces whose k-th distributional derivatives are in L^p-spaces, with the abbreviation H^k = W^{k,2}. Moreover, we will use the standard notation p' = p/(p−1). In the vectorial case, we will write L^p(Ω;ℝ³) ≅ L^p(Ω)³ and W^{1,p}(Ω;ℝ³) ≅ W^{1,p}(Ω)³. For the fixed time interval I = [0,T], we denote by L^p(I;X) the standard Bochner space of Bochner-measurable mappings I → X with X a Banach space. Also, W^{k,p}(I;X) denotes the Banach space of mappings from L^p(I;X) whose k-th distributional derivative in time is also in L^p(I;X). The dual space of X will be denoted by X^*. The scalar product between vectors, matrices, or 3rd-order tensors will be denoted by “·”, “:”, or “⋮”, respectively. Finally, in what follows, C denotes a positive, possibly large constant.

We will impose the basic assumptions on the data and incorporate them directly into Definitions <ref>–<ref>.
In particular,

inf_{x∈Ω} ϱ(x) > 0,  ϱ = 0 on ℝ³∖Ω,  ϱ ∈ L^∞(Ω) ∩ W^{1,3}(Ω),
ϱ_ext ∈ W^{1,1}(I; L^∞(ℝ³)),  ϱ_ext = 0 outside a bounded set in I×ℝ³,
K_E, G_KV, K_KV, G_MX, K_MX, 𝒢_E(·,α,e), κ : Ω → [0,∞) measurable and
inf_{x∈Ω} min( G_KV(x), K_KV(x), G_MX(x), K_MX(x), K_E(x), κ(x) ) > 0,
sup_{x∈Ω} max( K_KV(x)/K_MX(x), G_KV(x)/G_MX(x) ) < 1,
𝒢_E(x,·,·) : [0,1]×ℝ^{3×3} → [0,∞) twice continuously differentiable,
𝒢_E(x,α,·) : ℝ^{3×3} → [0,∞) convex,
|∂_e 𝒢_E(x,α,e)| ≤ C(1+|e|),  |∂_α 𝒢_E(x,α,e)| ≤ C(1+|e|),
|∂²_ee 𝒢_E(x,α,e)| ≤ C,  |∂²_αe 𝒢_E(x,α,e)| ≤ C,
η : ℝ → ℝ uniformly convex,  η(0) = 0,
u_0 ∈ H¹(Ω;ℝ³),  v_0 ∈ L²(Ω;ℝ³),  π_0 ∈ L²(Ω;ℝ^{3×3}),  π_0 = 0 on Ω_f,
and α_0 ∈ H¹(Ω) with 0 ≤ α_0(x) ≤ 1 for all x ∈ Ω.

Let us emphasize that ass-small-KV/MX is used only for the second limit passage, namely for the estimation est-regular–est-regular+, and is well satisfied in geophysical models, where the ratio of the Kelvin-Voigt and Maxwell viscosities in solid regions is surely below 10⁻⁸, as said in Section <ref>. Also, let us emphasize that the growth restrictions imposed on ∂_α 𝒢_E and ∂²_αe 𝒢_E in ass-growth-G are compatible with the ansatz KV-limit+ and are used in system-alpha-w and for est-regular below. We further integrate system-alpha-w- over Q and apply the Green theorem and the by-part integration in time to the term (v−α̇) div(κ∇α), cf. Remark <ref>.

Definition. The quadruple (u, π, α, ϕ) is called a weak solution to the initial-boundary-value problem system+–BC-IC provided the data satisfy ass and

u ∈ W^{1,∞}(I; L²(Ω;ℝ³)) ∩ H¹(I; H¹(Ω;ℝ³)),  π ∈ H¹(I; L²(Ω;ℝ^{3×3})),
α ∈ L^∞(I; H¹(Ω)) ∩ H¹(I; L²(Ω)),  ϕ ∈ L^∞(I; H¹(ℝ³)),

the integral identity

∫_Q σ : e(v) − ϱu̇·v̇ + f·v dx dt = ∫_Ω ϱ v_0·v(0) dx

holds for any v ∈ H¹(Q;ℝ³) with v(T) = 0 and with σ and f from system-u+,

∫_Q (v−α̇) ∂_α 𝒢_E(α, dev(e(u)−π)) + κ∇α·∇v + η(v) dx dt + ∫_Ω (κ/2)|∇α_0|² dx ≥ ∫_Q η(α̇) dx dt + ∫_Ω (κ/2)|∇α(T)|² dx

holds for all v ∈ L²(I; H¹(Ω)),

∫_{I×ℝ³} ( ∇ϕ/(4πg) + ϱu )·∇v + (ϱ+ϱ_ext) v dx dt = 0  for all v ∈ L²(I; H¹(ℝ³)),

and also system-pi holds a.e. on Q, and eventually the remaining initial conditions u(0) = u_0, π(0) = π_0, and α(0) = α_0 hold a.e. on Ω.

Sketched proof of Proposition <ref>. Without going into details, we may expect that we apply some approximation method (e.g. a Galerkin-type approximation in space) to obtain approximate solutions of the initial-boundary-value problem system+–BC-IC which, in principle, can be implemented on computers, e.g. by the finite-element method. Then the a-priori estimates in the spaces as in est hold also for the approximate solutions, which may be interpreted as numerical stability of the specific approximation scheme. This approximation leads (after a smoothening of the potential η) to an initial-value problem for a system of ordinary differential equations. The existence of its solution, let us denote it by (u_k, π_k, α_k, ϕ_k) with k ∈ ℕ referring to the finite-dimensional subspaces used for the Galerkin discretisation, can be proved by a successive-continuation argument, using the L^∞(I)-estimates below. Here we may assume that the initial conditions lie in the finite-dimensional spaces used for the Galerkin approximation, so that no further approximation is needed. The energy balance engr can serve to derive basic a-priori estimates and to perform the analysis of the system system+.
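To make the Galerkin step concrete, a minimal sketch (our own illustration; the basis notation is not from the original text): with a finite-dimensional subspace V_k = span{w_1, ..., w_k} ⊂ H¹(Ω;ℝ³), the semi-discrete momentum equation seeks u_k(t) ∈ V_k such that, for i = 1, ..., k,

∫_Ω ϱü_k·w_i + ( ℂ_KV ė_el,k + ℂ(α_k, dev e_el,k) e_el,k ) : e(w_i) + f_k·w_i dx = 0,

coupled with analogous finite-dimensional versions of the flow rules for π_k and α_k and of the gravity equation; testing this system with u̇_k then reproduces the energy balance engr on the discrete level.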
Integrating engr over [0,t] and using the by-part integration in time for the power of the tidal load ϱ_ext ϕ̇ and the notation (<ref>b,c), we write

∫_Ω ( (ϱ/2)|u̇_k(t)|² + (3/2)K_E |sph e_el,k(t)|² + 𝒢_E(α_k(t), dev e_el,k(t)) + (κ/2)|∇α_k(t)|² + (ϱ/2)|ω·(x+u_k(t))|² ) dx + ∫_ℝ³ |∇ϕ_k(t)|²/(8πg) dx
 + ∫_0^t ∫_Ω ℂ_KV ė_el,k : ė_el,k + ℂ_MX π̇_k : π̇_k + α̇_k η'(α̇_k) dx dt
 ≤ ∫_Ω −ϱ∇ϕ_k(t)·u_k(t) − ϱϕ_k(t) dx + ∫_ℝ³ ϱ_ext(t) ϕ_k(t) dx − ∫_0^t ∫_ℝ³ ϱ̇_ext ϕ_k dx dt + E_0

with e_el,k = e(u_k) − π_k and with the upper bound E_0 for the initial energy,

E_0 = ∫_Ω (ϱ/2)|v_0|² + (3/2)K_E |sph e_el,0|² + 𝒢_E(α_0, e_el,0) + ϱϕ(0) + ϱ∇ϕ(0)·u_0 + (ϱ/2)|ω·(x+u_0)|² + (κ/2)|∇α_0|² dx + ∫_ℝ³ ϱ_ext(0,x) ϕ(0,x) dx,

where e_el,0 = e(u_0) − π_0 and where ϕ(0,·) ∈ H¹(ℝ³) is the gravitational potential solving system4 for t = 0. The inequality in engr-int has arisen from the energy equality engr written for the approximate solution and integrated over the time interval [0,t], by forgetting some parts of the centrifugal potential with a guaranteed sign. The non-coercive contribution of the centrifugal force on the right-hand side of engr-int is to be estimated by Hölder's inequality as

∫_Ω (ϱ/2)|ω|² |x+u_k(t)|² dx = ∫_Ω (ϱ/2)|ω|² | x + u_0(x) + ∫_0^t u̇_k(τ,x) dτ |² dx ≤ C ( 1 + ∫_0^t ‖u̇_k(τ)‖²_{L²(Ω;ℝ³)} dτ )

with some constant C, and then treated by Gronwall's inequality, exploiting the kinetic energy on the left-hand side of engr-int. Furthermore, we can estimate

∫_Ω ϱ∇ϕ_k·u_k dx ≤ ϵ ‖∇ϕ_k‖²_{L²(Ω;ℝ³)} + C_ϵ ( ∫_0^t ‖u̇_k‖²_{L²(Ω;ℝ³)} dt + ‖u_0‖²_{L²(Ω;ℝ³)} ).

Also, we use the estimate ‖ϕ_k‖_{L²(Ω)} ≤ C ‖∇ϕ_k‖_{L²(ℝ³;ℝ³)}, relying on the boundedness of Ω, provided ϕ(∞) = 0, which is a standard “boundary” condition for the gravitational potential used in geophysics. The last integral on the right-hand side of engr-int bears the estimation

∫_0^t ∫_ℝ³ ϱ̇_ext ϕ_k dx dt ≤ ∫_0^t ‖ϱ̇_ext‖_{L^∞(ℝ³)} ‖ϕ_k‖_{L¹(ℝ³)} dt ≤ C ∫_0^t ‖ϱ̇_ext‖_{L^∞(ℝ³)} ‖∇ϕ_k‖_{L²(ℝ³;ℝ³)} dt ≤ ∫_0^t ‖ϱ̇_ext‖_{L^∞(ℝ³)} ( 1 + ‖ϕ_k‖²_{H¹(ℝ³)} ) dt,

where we used ‖ϕ_k‖_{L¹(ℝ³)} ≤ C ‖∇ϕ_k‖_{L²(ℝ³;ℝ³)}; here the “boundary” condition ϕ_k(∞) = 0 together with ϱ + ϱ_ext being compactly supported is used. Altogether, using the Gronwall inequality, we obtain the a-priori estimates for the approximate solutions:

‖u_k‖_{W^{1,∞}(I;L²(Ω;ℝ³)) ∩ H¹(I;H¹(Ω;ℝ³))} ≤ C,
‖π_k‖_{H¹(I;L²(Ω;ℝ^{3×3}))} ≤ C,
‖α_k‖_{L^∞(I;H¹(Ω)) ∩ H¹(I;L²(Ω))} ≤ C,
‖ϕ_k‖_{L^∞(I;H¹(ℝ³))} ≤ C.

By the Banach selection principle, we consider a weakly* convergent subsequence with respect to the topologies specified in est+. For the limit passage in the nonlinear term ∂_α 𝒢_E in system-alpha-w, we need to improve this to the strong convergence e_el,k → e_el in L²(Q;ℝ^{3×3}). To this goal, we use the test functions u̇_k−u̇ and π̇_k−π̇ for the Galerkin approximation of system-u+ and system-pi, respectively, integrate over the time interval [0,t], and estimate

∫_Ω ½ℂ_KV (e_el,k(t)−e_el(t)) : (e_el,k(t)−e_el(t)) dx
 ≤ ½ ∫_Ω ℂ_KV (e_el,k(t)−e_el(t)) : (e_el,k(t)−e_el(t)) + ℂ_MX (π_k(t)−π(t)) : (π_k(t)−π(t)) dx
 + ∫_0^t ∫_Ω K_E |sph(e_el,k−e_el)|² + ( ∂_e 𝒢_E(α_k, e_el,k) − ∂_e 𝒢_E(α_k, e_el) ) : (e_el,k−e_el) dx dt
 = ∫_0^t ∫_Ω −(ϱü_k + f_k)·(u̇_k−u̇) − ( ℂ_KV ė_el + K_E sph e_el + ∂_e 𝒢_E(α_k, e_el) ) : (ė_el,k−ė_el) − ℂ_MX π̇ : (π̇_k−π̇) dx dt
 = ∫_0^t ∫_Ω ϱu̇_k·(u̇_k−u̇) − f_k·(u̇_k−u̇) + ( ∂_e 𝒢_E(α, e_el) − ∂_e 𝒢_E(α_k, e_el) ) : (ė_el,k−ė_el) − ( ℂ_KV ė_el + K_E sph e_el + ∂_e 𝒢_E(α, e_el) ) : (ė_el,k−ė_el) − ℂ_MX π̇ : (π̇_k−π̇) dx dt − ∫_Ω ϱu̇_k(t)·(u_k(t)−u(t)) dx → 0

with f_k = ϱ( ω×(ω×(x+u_k)) + 2ω×u̇_k + ∇ϕ_k ); note that f_k is bounded in L²(Q;ℝ³) due to the a-priori estimates (<ref>a,d).
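For the reader's convenience, we recall the integral form of the Gronwall inequality invoked above (a textbook statement, not restated in the original): if y ≥ 0 is bounded and measurable, a ∈ L¹(0,T) is nonnegative, and

y(t) ≤ C_0 + ∫_0^t a(s) y(s) ds  for all t ∈ [0,T],  then  y(t) ≤ C_0 exp( ∫_0^t a(s) ds );

here y(t) collects the kinetic and stored-energy terms on the left-hand side of engr-int, while a(·) comes from the centrifugal and gravity estimates above.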
Actually, KV-damage-strong-e(t) is to be understood rather as a conceptual strategy: the mentioned test functions u̇_k−u̇ and π̇_k−π̇ are not legitimate for the Galerkin approximation, and (u̇,π̇) is still to be approximated strongly by functions valued in the respective finite-dimensional subspaces; we have omitted these standard technical details for simplicity. For the convergence to 0 in KV-damage-strong-e(t), we used that u̇_k → u̇ strongly in L²(Q;ℝ³) due to the Aubin-Lions theorem, relying on the estimate est-u+ together with information about ü from the equation system-u+ itself; also u̇_k(t) is bounded in L²(Ω;ℝ³) while u_k(t) → u(t) strongly in L²(Ω;ℝ³) due to the Rellich compact embedding H¹(Ω) ⊂ L²(Ω); and also ∂_e 𝒢_E(α_k, e_el) → ∂_e 𝒢_E(α, e_el) strongly in L²(Q;ℝ^{3×3}) due to the continuity of the Nemytskiĭ mapping induced by ∂_e 𝒢_E(·, e_el) and α_k → α in L²(Q), again just by the Rellich theorem, since both α̇_k and ∇α_k are estimated in L²-spaces. Thus, from KV-damage-strong-e(t), we obtain e_el,k(t) → e_el(t) strongly in L²(Ω;ℝ^{3×3}) for all t ∈ I. Using it for a general t ∈ I, we obtain e_el,k → e_el strongly in L²(Q;ℝ^{3×3}) by the Lebesgue theorem. The limit passage in the Galerkin approximation towards the integral identities system-w is then simple by weak/strong continuity or, in the case of the variational inequality system-alpha-w, also by semicontinuity.

For the energy conservation, the essential facts needed are that √ϱ ü ∈ L²(I;H¹(Ω;ℝ³)^*) is in duality with √ϱ u̇ ∈ L²(I;H¹(Ω;ℝ³)) and also that div(κ∇α) ∈ L²(Q) is in duality with α̇ ∈ L²(Q), so that the by-part integration formulas rigorously hold:

∫_0^t ⟨√ϱ ü, √ϱ u̇⟩ dt = ∫_Ω (ϱ/2)|u̇(t)|² − (ϱ/2)|u̇(0)|² dx,  and
∫_0^t ∫_Ω div(κ∇α) α̇ dx dt = ∫_Ω (κ/2)|∇α(0)|² − (κ/2)|∇α(t)|² dx;

see e.g. <cit.>. More in detail, for calculus1 we have used the comparison √ϱ ü = (div σ − f)/√ϱ and the estimate

‖√ϱ ü‖_{L²(I;H¹(Ω;ℝ³)^*)} = sup_{‖v‖_{L²(I;H¹(Ω;ℝ³))}≤1} ∫_Q ((div σ − f)/√ϱ)·v dx dt = sup_{‖v‖_{L²(I;H¹(Ω;ℝ³))}≤1} ∫_Q σ : ( √ϱ∇v − v⊗∇ϱ/√ϱ ) + (f/√ϱ)·v dx dt ≤ C,

for which the smoothness ϱ ∈ W^{1,3}(Ω) is needed due to the occurrence of ∇ϱ in est-of-DDTu, while for calculus2 we have used the comparison div(κ∇α) ∈ ∂η(α̇) + ∂_α 𝒢_E(α, dev e_el).

Definition. The quadruple (u, π, α, ϕ) is called a weak solution to the initial-boundary-value problem (<ref>a,b)–system++–BC-IC-alpha+ with the data G-K-eps provided the data again satisfy ass, (<ref>a,d) holds together with π ∈ H¹(I;L²(Ω_s;ℝ^{3×3})) and α ∈ L^∞(I;H¹(Ω_s)) ∩ H¹(I;L²(Ω_s)), the integral identity momentum-w holds with σ = σ_sph + σ_dev from (<ref>b,c), system-alpha-w holds with Q and Ω replaced by I×Ω_s and Ω_s, respectively, furthermore gravity-w holds, and also system-pi holds a.e. on I×Ω_s, and eventually u(0) = u_0 holds a.e. on Ω while π(0) = π_0 and α(0) = α_0 hold a.e. on Ω_s.

Sketched proof of Proposition <ref>. Like engr-int but now with G-K-eps taken into account, we have

∫_Ω (ϱ/2)|u̇_ε(t)|² + (3/2)K_E |sph e_el,ε(t)|² + (ϱ/2)|ω·(x+u_ε(t))|² dx + ∫_ℝ³ |∇ϕ_ε(t)|²/(8πg) dx
 + ∫_{Ω_s} 𝒢_E(α_ε(t), dev e_el,ε(t)) + (κ/2)|∇α_ε(t)|² dx + ∫_{Ω_f} ε𝒢_E(α_ε(t), dev e_el,ε(t)) + (εκ/2)|∇α_ε(t)|² dx
 + ∫_0^t ∫_Ω ℂ_KV ė_el,ε : ė_el,ε dx dt + ∫_0^t ∫_{Ω_s} ℂ_MX π̇_ε : π̇_ε + α̇_ε η'(α̇_ε) dx dt + ∫_0^t ∫_{Ω_f} (1/ε)ℂ_MX π̇_ε : π̇_ε + εα̇_ε η'(α̇_ε) dx dt
 ≤ ∫_Ω −ϱ∇ϕ_ε(t)·u_ε(t) − ϱϕ_ε(t) dx + ∫_ℝ³ ϱ_ext(t)ϕ_ε(t) dx − ∫_0^t∫_ℝ³ ϱ̇_ext ϕ_ε dx dt + E_0

with E_0 again from def-E0, relying on ε ≤ 1.
By the Gronwall-inequality arguments as used for engr-int, we obtain the a-priori estimates (<ref>b,d), now for (π_ε, ϕ_ε) instead of (π_k, ϕ_k), and also

‖u_ε‖_{W^{1,∞}(I;L²(Ω;ℝ³)) ∩ H¹(I;H¹(Ω;ℝ³))} ≤ C,  ‖α_ε‖_{L^∞(I;H¹(Ω_s)) ∩ H¹(I;L²(Ω_s))} ≤ C,
‖π̇_ε‖_{L²(I×Ω_f;ℝ^{3×3})} ≤ √ε C,  ‖α̇_ε‖_{L²(I×Ω_f)} ≤ C/√ε,  and  ‖∇α_ε‖_{L^∞(I;L²(Ω_f;ℝ³))} ≤ C/√ε.

Therefore, for a subsequence, we have

u_ε → u weakly* in W^{1,∞}(I;L²(Ω;ℝ³)) ∩ H¹(I;H¹(Ω;ℝ³)),
π_ε → π weakly in H¹(I;L²(Ω;ℝ^{3×3})),
π_ε|_{I×Ω_f} → 0 strongly in H¹(I;L²(Ω_f;ℝ^{3×3})),
α_ε|_{I×Ω_s} → α weakly* in L^∞(I;H¹(Ω_s)) ∩ H¹(I;L²(Ω_s)),
ϕ_ε → ϕ weakly* in L^∞(I;H¹(ℝ³)),

and also

e_el,ε → e_el strongly in L²(Q;ℝ^{3×3}).

For conv-pi-L, we used π̇_ε → 0 strongly in L²(I×Ω_f;ℝ^{3×3}) due to est-pi-2 together with the assumption π_0 = 0 on Ω_f. For conv-el-2, like in KV-damage-strong-e(t), taking into account that π = 0 on Ω_f, we have

∫_Ω ½ℂ_KV (e_el,ε(t)−e_el(t)) : (e_el,ε(t)−e_el(t)) dx
 ≤ ∫_Ω ½ℂ_KV (e_el,ε(t)−e_el(t)) : (e_el,ε(t)−e_el(t)) dx + ∫_{Ω_s} ½ℂ_MX (π_ε(t)−π(t)) : (π_ε(t)−π(t)) dx + ∫_{Ω_f} (1/(2ε))ℂ_MX π_ε(t) : π_ε(t) dx
 + ∫_0^t ∫_Ω K_E |sph(e_el,ε−e_el)|² + ( ∂_e 𝒢_E,ε(α_ε, e_el,ε) − ∂_e 𝒢_E,ε(α_ε, e_el) ) : (e_el,ε−e_el) dx dt
 = ∫_0^t ( ∫_Ω −(ϱü_ε + f_ε)·(u̇_ε−u̇) − ( ℂ_KV ė_el + K_E sph e_el + ∂_e 𝒢_E,ε(α_ε, e_el) ) : (ė_el,ε−ė_el) dx − ∫_{Ω_s} ℂ_MX π̇ : (π̇_ε−π̇) dx ) dt
 = ∫_0^t ( ∫_Ω ϱu̇_ε·(u̇_ε−u̇) − f_ε·(u̇_ε−u̇) + ( ∂_e 𝒢_E,ε(α, e_el) − ∂_e 𝒢_E,ε(α_ε, e_el) − ℂ_KV ė_el − K_E sph e_el − ∂_e 𝒢_E,ε(α, e_el) ) : (ė_el,ε−ė_el) dx − ∫_{Ω_s} ℂ_MX π̇ : (π̇_ε−π̇) dx ) dt − ∫_Ω ϱu̇_ε(t)·(u_ε(t)−u(t)) dx → 0

with f_ε = ϱ( ω×(ω×(x+u_ε)) + 2ω×u̇_ε + ∇ϕ_ε ) bounded in L²(Q;ℝ³), and where we now used the continuity of the Nemytskiĭ mapping induced by ∂_e 𝒢_E,ε(·, e_el) = ∂_e 𝒢_E(·, e_el) on Ω_s while, on Ω_f, we simply use that ∂_e 𝒢_E,ε(α_ε, e_el) = ε ∂_e 𝒢_E(α_ε, e_el) → 0 strongly in L²(I×Ω_f;ℝ^{3×3}); recall the scaling G-K-eps.

The limit passage towards the weak solution in the sense of Definition <ref> is then straightforward by continuity with respect to the convergences conv-2. Only the limit passage in system-alpha-w deserves more detail; written for (u_ε, π_ε, α_ε) and omitting the terms ∫_{I×Ω_f} εη(α̇_ε) dx dt ≥ 0 and ∫_{Ω_f} (κ_ε/2)|∇α_ε(T)|² dx ≥ 0, it reads:

∫_{I×Ω_s} (v−α̇_ε) ∂_α 𝒢_E,ε(α_ε, dev(e(u_ε)−π_ε)) + κ∇α_ε·∇v + η(v) dx dt + ∫_{Ω_s} (κ/2)|∇α_0|² dx
 + ( ∫_{I×Ω_f} (v−α̇_ε) ∂_α 𝒢_E,ε(α_ε, dev(e(u_ε)−π_ε)) + κ_ε∇α_ε·∇v + εη(v) dx dt + ∫_{Ω_f} (κ_ε/2)|∇α_0|² dx )
 ≥ ∫_{I×Ω_s} η(α̇_ε) dx dt + ∫_{Ω_s} (κ/2)|∇α_ε(T)|² dx.

Now we use

| ∫_{I×Ω_f} α̇_ε ∂_α 𝒢_E,ε(α_ε, e_el,ε) dx dt | ≤ εC ‖α̇_ε‖_{L²(I×Ω_f)} ‖1+|e_el,ε|‖_{L²(Q)} = 𝒪(√ε) → 0

due to est-DT-a-2, and

| ∫_{I×Ω_f} κ_ε∇α_ε·∇v dx dt | ≤ ε (sup_Ω κ) ‖∇α_ε‖_{L²(I×Ω_f;ℝ³)} ‖∇v‖_{L²(Q;ℝ³)} = 𝒪(√ε) → 0

due to est-nabla-a-2. In the limit we thus obtain system-alpha-w on Ω_s. Altogether, we have proved that the limit (u, π, α, ϕ) solves the initial-boundary-value problem system++–(<ref>a,b)–BC-IC-alpha+ in the sense of Definition <ref>. In addition, this solution conserves energy. This can be shown again by using that √ϱ ü is in duality with √ϱ u̇, so that calculus1 holds, and that div(κ∇α) ∈ L²(I×Ω_s) and α̇ ∈ L²(I×Ω_s), so that also calculus2 holds, now with I×Ω_s and Ω_s instead of Q and Ω, respectively.

To prove Max-limit, we use engr-int written for (u_ε, π_ε, α_ε, ϕ_ε) with t = T and the scaling BC-IC-alpha+, namely

limsup_{ε→0} ∫_{I×Ω_f} (1/ε)ℂ_MX π̇_ε : π̇_ε + εα̇_ε η'(α̇_ε) dx dt
 ≤ lim_{ε→0} ( ∫_ℝ³ ϱ_ext(T)ϕ_ε(T) dx − ∫_Ω ϱ∇ϕ_ε(T)·u_ε(T) − ϱϕ_ε(T) dx − ∫_0^T∫_ℝ³ ϱ̇_ext ϕ_ε dx dt − ∫_Q ϱ(ω×(ω×(x+u_ε)))·u̇_ε dx dt − ∫_{Ω_f} ε𝒢_E(α_ε(T), e_el,ε(T)) dx )
 − liminf_{ε→0} ( ∫_Ω (ϱ/2)|u̇_ε(T)|² + (3/2)K_E |sph e_el,ε(T)|² dx + ∫_{Ω_s} 𝒢_E(α_ε(T), e_el,ε(T)) + (κ/2)|∇α_ε(T)|² dx + ∫_ℝ³ |∇ϕ_ε(T)|²/(8πg) dx + ∫_Q ℂ_KV ė_el,ε : ė_el,ε dx dt + ∫_{I×Ω_s} ℂ_MX π̇_ε : π̇_ε + α̇_ε η'(α̇_ε) dx dt ) + E_0
 ≤ ∫_ℝ³ ϱ_ext(T)ϕ(T) − |∇ϕ(T)|²/(8πg) dx − ∫_Ω ϱ∇ϕ(T)·u(T) − ϱϕ(T) dx − ∫_0^T∫_ℝ³ ϱ̇_ext ϕ dx dt − ∫_Q ℂ_KV ė_el : ė_el + ϱ(ω×(ω×(x+u)))·u̇ dx dt − ∫_Ω (ϱ/2)|u̇(T)|² + (3/2)K_E |sph e_el(T)|² dx − ∫_{Ω_s} 𝒢_E(α(T), e_el(T)) + (κ/2)|∇α(T)|² dx − ∫_{I×Ω_s} ℂ_MX π̇ : π̇ + α̇η'(α̇) dx dt + E_0 = 0,

where we again used the notation C-D and E_0 from def-E0, now with κ = 0 on Ω_f.
The first inequality arises from “forgetting” the term (κ_ε/2)|∇α_ε(T)|² on Ω_f, while the second inequality is by weak lower semicontinuity. The last equality in engr-int-eps expresses the energy conservation for the limit system, discussed already above. Thus Max-limit is proved.

From KV-damage-strong-e(t)-2 one can even see that also π_ε → π strongly, so that, together with conv-el-2, even the total strain e(u_ε) = e_el,ε + π_ε converges strongly in L²(Q;ℝ^{3×3}). We did not need this additional result in the above convergence proof, however.

Since ∂η(α̇) is bounded in L²(Q) in our model, by comparison we also have div(κ∇α) ∈ ∂η(α̇) + ∂_α 𝒢_E(α, dev e_el) bounded in L²(Q), cf. system-alpha++. Since also α̇ ∈ L²(Q), the formula ∫_Q α̇ div(κ∇α) dx dt = ½∫_Ω κ|∇α_0|² − κ|∇α(T)|² dx rigorously holds, and we can write system-alpha-w as the original inequality system-alpha-w- holding a.e. on Q. The integral form system-alpha-w is however suitable for the limit passages, in contrast to system-alpha-w-.

Definition. Assuming, beside ass, also e_el,0 = e(u_0) − π_0 ∈ H¹(Ω;ℝ^{3×3}) and v_0 ∈ H²(Ω;ℝ³), the weak solution to the system (<ref>a,d-f)–(<ref>a,b)–BC-IC-alpha+ with system+++ is understood as a five-tuple

(u, σ_sph, σ_dev, π, ϕ) ∈ W^{1,∞}(I;L²(Ω;ℝ³)) × L²(Q;ℝ^{3×3}) × L²(Q;ℝ^{3×3}) × H¹(I;L²(Ω_s;ℝ^{3×3})) × W^{1,∞}(I;H¹(ℝ³))

such that the integral identity

∫_{I×Ω_s} σ_dev : dev e(v) dx dt + ∫_Q σ_sph : sph e(v) − ϱu̇·v̇ + f·v dx dt = ∫_Ω ϱv_0·v(0,·) dx

with f from system-u+ holds for any v ∈ H¹(Q;ℝ³) with v(T) = 0, further system-pi++ holds a.e. on I×Ω_s and system+++ relating (σ_dev, σ_sph) with (u, π) holds a.e. on Q, and also the initial conditions u(0,·) = u_0 and π(0,·) = π_0 hold a.e. on Ω.

Note that, controlling σ_sph and σ_dev in the above definition, we have implicitly also included the information u|_{Ω_s} ∈ H¹(I;H¹(Ω_s;ℝ³)) and u ∈ L²(I;L²_div(Ω;ℝ³)) with L²_div(Ω;ℝ³) = { u ∈ L²(Ω;ℝ³); div u ∈ L²(Ω) }, while the strain e(u) is not defined in the fluidic regions Ω_f. Also one has e(u) − π ∈ H¹(I;L²(Ω_s;ℝ^{3×3})).

Sketched proof of Proposition <ref>. We first prove a certain regularity by differentiating in time the system (<ref>a,d) with system+++, written for the solution obtained in Proposition <ref> and denoted now by (u_ε, π_ε, α_ε, ϕ_ε), and employing the test by (ü_ε, π̈_ε). Taking into account the scaling KV-eps and using again the orthogonality for the Coriolis force (now in terms of accelerations), i.e. (2ϱω×ü)·ü = 0, this gives

∫_Ω (ϱ/2)|ü_ε(t)|² + (3/2)K_E |sph ė_el,ε(t)|² dx + ∫_0^t ∫_{Ω_s} ℂ_KV ë_el,ε : ë_el,ε + ℂ_MX π̈_ε : π̈_ε dx dt + ∫_0^t ∫_{Ω_f} ℂ_KV,ε e(ü_ε) : e(ü_ε) dx dt
 = ∫_Ω (ϱ/2)|ü_ε(0)|² + (3/2)K_E |sph ė_el,ε(0)|² dx − ∫_0^t ∫_Ω ϱ( ω×(ω×u̇_ε) + ∇ϕ̇_ε )·ü_ε dx dt − ∫_0^t ∫_{Ω_s} ( ∂²_ee𝒢_E(α_ε, e_el,ε) dev ė_el,ε + ∂²_eα𝒢_E(α_ε, e_el,ε) α̇_ε ) : dev ë_el,ε dx dt
 ≤ ½‖ϱ‖_{L^∞(Ω)} ‖ü_ε(0)‖²_{L²(Ω;ℝ³)} + (3/2)‖K_E‖_{L^∞(Ω)} ‖sph(e(v_0)−π̇(0))‖²_{L²(Ω;ℝ^{3×3})} + ‖√ϱ‖_{L^∞(Ω)} ‖ω×(ω×u̇_ε)+∇ϕ̇_ε‖²_{L²(Q;ℝ³)} + ∫_0^t ∫_Ω (ϱ/2)|ü_ε|² dx dt
 + (1/c) ( ‖∂²_ee𝒢_E(α_ε,e_el,ε)‖²_{L^∞(I×Ω_s;ℝ^{3⁴})} ‖e(u̇_ε)‖²_{L²(Q;ℝ^{3×3})} + ‖∂²_eα𝒢_E(α_ε,e_el,ε)‖²_{L^∞(Q;ℝ^{3×3})} ‖α̇_ε‖²_{L²(Q)} ) + ϵ‖ë_el,ε‖²_{L²([0,t]×Ω_s;ℝ^{3×3})}

with c := ½‖ℂ_KV‖_{L^∞(Ω;ℝ^{3⁴})}; then the last term in est-regular can be absorbed into the left-hand side. Note that we needed the assumptions ass-growth-G to have ∂²_eα𝒢_E(α_ε,e_el,ε) and ∂²_ee𝒢_E(α_ε,e_el,ε) a priori bounded. We further use

ü_ε(0) = (1/ϱ)( div σ_0,ε + ϱ( ω×(ω×(x+u_0)) + 2ω×v_0 + ∇ϕ_0 ) ) ∈ L²(Ω;ℝ³)

with the initial stress σ_0,ε = σ_sph + σ_dev from (<ref>b,c) with (K_KV, G_KV) = (K_KV,ε, G_KV,ε) from KV-eps, with e_el,0 = e(u_0) − π_0 and ė_el(0) = e(v_0) − π̇(0), and with ϕ_0 ∈ H¹(ℝ³) solving div( ∇ϕ_0/(4πg) + ϱu_0 ) = ϱ + ϱ_ext(0).
Note that indeed ü_ε(0) ∈ L²(Ω;ℝ³) due to the assumptions e_el,0 = e(u_0) − π_0 ∈ H¹(Ω;ℝ^{3×3}) and v_0 ∈ H²(Ω;ℝ³) involved in Definition <ref>. Also note that π̇(0) is involved in est-regular, and implicitly also in est-regular+, although we do not prescribe any initial condition on π̇. Yet, from system-pi++, we can read π̇(0) = ℂ_MX^{-1} σ_0 on Ω_s while π̇(0) = 0 on Ω_f, because we now have ℂ_MX^{-1} = 0 on Ω_f. To estimate σ_0, let us realize that

‖σ_0‖_{H¹(Ω;ℝ^{3×3})} ≤ ‖ℂ_KV(e(v_0) − π̇(0)) + ∂_e 𝒢_E(α_0, e_el,0)‖_{H¹(Ω;ℝ^{3×3})} ≤ C + ‖ℂ_KV ℂ_MX^{-1} σ_0‖_{H¹(Ω;ℝ^{3×3})}

with some C depending on ‖v_0‖_{H²(Ω;ℝ³)}, ‖α_0‖_{H¹(Ω)}, and ‖e_el,0‖_{H¹(Ω;ℝ^{3×3})}; note that these quantities have been supposed finite in Definition <ref>. Thus, using ass-small-KV/MX, we get the desired bound on σ_0. Hence ė_el(0), which occurs in est-regular, can itself be estimated. Using Gronwall's inequality for est-regular then yields the estimates

‖u_ε‖_{W^{2,∞}(I;L²(Ω;ℝ³)) ∩ H²(I;H¹(Ω_s;ℝ³))} ≤ C,  ‖π̈_ε‖_{L²(I×Ω_s;ℝ^{3×3})} ≤ C,  and  ‖u_ε‖_{H²(I;H¹(Ω_f;ℝ³))} ≤ C/√ε,

which are now at our disposal together with est-phi and est-a-2. Also we have ‖σ_ε‖_{L²(Q;ℝ^{3×3})} ≤ C with

σ_ε = ℂ_KV,ε e(u̇_ε) + 3K_E sph e(u_ε) on Ω_f,  ℂ_KV ė_el,ε + ∂_e 𝒢_E(α_ε, e_el,ε) on Ω_s,

where we again use the notation C-D and e_el,ε = e(u_ε) − π_ε. By the Banach selection principle, we consider a weakly* convergent subsequence with respect to the topologies specified in (<ref>a,b) and est-a-2, together with the convergence conv-phi and also σ_ε → σ weakly in L²(Q;ℝ^{3×3}). Then we naturally put σ_sph := sph σ and σ_dev := dev σ. As ‖e(u̇_ε)‖_{L²(I×Ω_f;ℝ^{3×3})} = 𝒪(1/√ε), we have ‖ℂ_KV,ε e(u̇_ε)‖_{L²(I×Ω_f;ℝ^{3×3})} = 𝒪(√ε) → 0, so that σ = 3K_E sph e(u) on Ω_f. To identify σ = ℂ_KV ė_el + ∂_e 𝒢_E(α, e_el) in the solid region Ω_s, due to the nonlinearity 𝒢_E(α,·), we again need to prove e_el,ε → e_el strongly in L²(I×Ω_s;ℝ^{3×3}). To this goal, one has to modify KV-damage-strong-e(t)-2 to be used on Ω_s rather than Ω. The peculiarity is that u̇_ε−u̇ is no longer a legitimate test function, because e(u̇) is not well defined in the fluidic regions. For this reason, we take some smooth ũ_ε that approximates u strongly in L^∞(I;L²_div(Ω;ℝ³)) and even ũ_ε|_{Ω_s} → u|_{Ω_s} strongly in H¹(I;H¹(Ω_s;ℝ³)), and we can assume that this approximation blows up sufficiently slowly so that √ε‖ũ_ε‖_{H¹(I;H¹(Ω_f;ℝ³))} → 0. Then, denoting ẽ_el,ε = e(ũ_ε) − π, we have ẽ_el,ε(t) → e_el(t) for a.a. t ∈ I and we can write

limsup_{ε→0} ∫_{Ω_s} ¼ℂ_KV (e_el,ε(t)−e_el(t)) : (e_el,ε(t)−e_el(t)) dx
 ≤ limsup_{ε→0} ∫_{Ω_s} ½ℂ_KV (e_el,ε(t)−ẽ_el,ε(t)) : (e_el,ε(t)−ẽ_el,ε(t)) dx + lim_{ε→0} ∫_{Ω_s} ½ℂ_KV (ẽ_el,ε(t)−e_el(t)) : (ẽ_el,ε(t)−e_el(t)) dx
 ≤ limsup_{ε→0} ( ∫_{Ω_s} ½ℂ_KV (e_el,ε(t)−ẽ_el,ε(t)) : (e_el,ε(t)−ẽ_el,ε(t)) + ½ℂ_MX (π_ε(t)−π(t)) : (π_ε(t)−π(t)) dx
 + ∫_0^t ∫_Ω K_E |sph(e_el,ε−ẽ_el,ε)|² dx dt + ∫_0^t ∫_{Ω_s} ( ∂_e 𝒢_E(α_ε, e_el,ε) − ∂_e 𝒢_E(α_ε, ẽ_el,ε) ) : (e_el,ε−ẽ_el,ε) dx dt )
 ≤ lim_{ε→0} ( ∫_{Ω_f} (ε/2)ℂ_KV e(u_0) : e(u_0) dx − ∫_0^t ∫_{Ω_f} ℂ_KV,ε e(u̇_ε) : e(ũ̇_ε) dx dt − ∫_0^t ∫_Ω (ϱü_ε + f_ε)·(u̇_ε−ũ̇_ε) dx dt
 − ∫_0^t ∫_{Ω_s} ℂ_MX π̇ : (π̇_ε−π̇) + ( ℂ_KV ẽ̇_el,ε + K_E sph ẽ_el,ε + ∂_e 𝒢_E(α_ε, ẽ_el,ε) ) : (ė_el,ε−ẽ̇_el,ε) dx dt ) = 0,

where the third inequality has arisen by “forgetting” the nonnegative term ∫_{Ω_f} (ε/2)ℂ_KV e(u_ε(t)) : e(u_ε(t)) dx. Note that we used the approximation property together with est-pi-3+ for the estimate

| ∫_0^t ∫_{Ω_f} ℂ_KV,ε e(u̇_ε) : e(ũ̇_ε) dx dt | ≤ ε‖ℂ_KV‖_{L^∞(Ω;ℝ^{3⁴})} ‖e(u̇_ε)‖_{L²(I×Ω_f;ℝ^{3×3})} ‖e(ũ̇_ε)‖_{L²(I×Ω_f;ℝ^{3×3})} = 𝒪(√ε) ‖e(ũ̇_ε)‖_{L²(I×Ω_f;ℝ^{3×3})} → 0.

From KV-damage-strong-e(t)-3, we thus have e_el,ε|_{Ω_s}(t) → e_el|_{Ω_s}(t) at a.a. t ∈ I. Then, instead of conv-el-2, by the Lebesgue theorem we now prove

e_el,ε|_{I×Ω_s} → e_el|_{I×Ω_s} strongly in L²(I×Ω_s;ℝ^{3×3}),

which is to be used for the limit passage in the nonlinear term ∂_e 𝒢_E. The energy conservation now holds due to the proved regularity, as √ϱ ü ∈ L^∞(I;L²(Ω;ℝ³)) is surely in duality with √ϱ u̇ ∈ W^{1,∞}(I;L²(Ω;ℝ³)), cf.
est-u-3. Eventually, as we did in engr-int-eps, we can now show that

limsup_{ε→0} ∫_{I×Ω_f} ℂ_KV,ε e(u̇_ε) : e(u̇_ε) dx dt
 ≤ ∫_ℝ³ ϱ_ext(T)ϕ(T) − |∇ϕ(T)|²/(8πg) dx − ∫_Ω ϱ∇ϕ(T)·u(T) − ϱϕ(T) dx − ∫_0^T∫_ℝ³ ϱ̇_ext ϕ dx dt − ∫_Q ϱ(ω×(ω×(x+u)))·u̇ dx dt
 − ∫_Ω (ϱ/2)|u̇(T)|² + (3/2)K_E |sph e_el(T)|² dx − ∫_{Ω_s} 𝒢_E(α(T), e_el(T)) + (κ/2)|∇α(T)|² dx − ∫_{I×Ω_s} ℂ_KV ė_el : ė_el + ℂ_MX π̇ : π̇ + α̇η'(α̇) dx dt + E_0 = 0,

with E_0 from def-E0. The last equality is due to the mentioned energy conservation. Thus KV-limit is proved.

Note that, in est-regular, we benefited from having G_E already pushed to zero on Ω_f, because the viscosity on Ω_f is (intentionally) not uniformly controlled, being sent to zero. Thus a direct merging of the two limit processes in Propositions <ref> and <ref> is not possible. Of course, a suitable scaling between these two would facilitate such a connection.

The interfaces between the ocean beds and the mantle, as well as the Gutenberg discontinuity and the interface between the inner and outer core regions, typically also exhibit discontinuities in the mass density ϱ, which is incompatible with the assumption ϱ ∈ W^{1,3}(Ω) in ass-rho used in est-of-DDTu. Note that the additional regularity of the initial conditions involved in Definition <ref> allowed us to avoid this restriction and to consider a general, possibly discontinuous ϱ ∈ L^∞(Ω). A respective modification of the proofs of Propositions <ref>–<ref> would be possible, too.

Acknowledgments. The author is thankful to Katharina Brazda and Ctirad Matyska for many fruitful discussions about the models. Moreover, deep thanks are also due for the hospitality and support of the Erwin Schrödinger Institute, Univ. Vienna, and for the partial support of the Czech Science Foundation projects 16-03823S and 17-04301S, the Austrian-Czech project 16-34894L (FWF/CSF), as well as the institutional support RVO: 61388998 (ČR).

References

[BeZi01DRRM] Y. Ben-Zion. Dynamic ruptures in recent models of earthquake faults. J. Mech. Phys. Solids, 49:2209–2244, 2001.
[BZamp09SRRS] Y. Ben-Zion and J.-P. Ampuero. Seismic radiation from regions sustaining material damage. Geophys. J. Int., 178:1351–1356, 2009.
[Bog77HECV] D. Boger. A highly elastic constant-viscosity fluid. J. Non-Newtonian Fluid Mechanics, 3:87–91, 1977.
[Braz17EGEG] K. Brazda. The elastic-gravitational equations in global seismology with low regularity. PhD thesis, Univ. Wien, 2017.
[BrHoHo17VFEE] K. Brazda, M. V. de Hoop, and G. Hoermann. Variational formulation of the earth's elastic-gravitational deformations under low regularity conditions. Preprint arXiv:1702.04741, 2017.
[DahTro98TGS] F. A. Dahlen and J. Tromp. Theoretical Global Seismology. Princeton Univ. Press, Princeton, NJ, 1998.
[GreNag65GTEP] A. Green and P. Naghdi. A general theory of an elastic-plastic continuum. Arch. Rational Mech. Anal., 18:251–281, 1965.
[HBAD09DERC] R. A. Harris et al. The SCEC/USGS dynamic earthquake rupture code verification exercise. Seismological Res. Lett., 80:119–126, 2009.
[HuAmHel14ERMW] Y. Huang, J.-P. Ampuero, and D. V. Helmberger. Earthquake ruptures modulated by waves in damaged fault zones. J. of Geophysical Research: Solid Earth, B9:3133–3154, 2014.
[KaLaAm08SEMS] Y. Kaneko, N. Lapusta, and J.-P. Ampuero. Spectral element modeling of spontaneous earthquake rupture on rate and state faults: Effect of velocity-strengthening friction at shallow depths. J. Geophysical Res., 113:B09317, 2008.
[KomTro02SESG] D. Komatitsch and J. Tromp. Spectral-element simulations of global seismic wave propagation - I. Validation. Geophys. J. Int., 149:390–412, 2002.
[KomTro02SESG2] D. Komatitsch and J. Tromp. Spectral-element simulations of global seismic wave propagation - II. Three-dimensional models, oceans, rotation and self-gravitation. Geophys. J. Int., 150:303–318, 2002.
[KooDum11VEIC] L. Koot and M. Dumberry. Viscosity of the Earth's inner core: Constraints from nutation observations. Earth and Planetary Science Letters, 308(3):343–349, 2011.
[LayWal95MGS] T. Lay and T. C. Wallace. Modern Global Seismology. Acad. Press, San Diego, 1995.
[LyaBZ14DBRM] V. Lyakhovsky and Y. Ben-Zion. Damage-breakage rheology model and solid-granular transition near brittle instability. J. Mech. Phys. Solids, 64:184–197, 2014.
[LHAB09NDRW] V. Lyakhovsky, Y. Hamiel, J.-P. Ampuero, and Y. Ben-Zion. Non-linear damage rheology and wave resonance in rocks. Geophys. J. Int., 178:910–920, 2009.
[LyHaBZ11NLVE] V. Lyakhovsky, Y. Hamiel, and Y. Ben-Zion. A non-local visco-elastic damage model and dynamic fracturing. J. Mech. Phys. Solids, 59:1752–1776, 2011.
[LyaMya84BECS] V. Lyakhovsky and V. P. Myasnikov. On the behavior of elastic cracked solid. Phys. Solid Earth, 10:71–75, 1984.
[MeaFur13SSWO] T. Maedae and T. Furumura. FDM simulation of seismic waves, ocean acoustic waves, and tsunamis based on tsunami-coupled equations of motion. Pure Appl. Geophys., 170:109–127, 2013.
[PPAB12TDDR] C. Pelties, J. de la Puente, J.-P. Ampuero, G. B. Brietzke, and M. Käser. Three-dimensional dynamic rupture simulation with a high-order discontinuous Galerkin method on unstructured tetrahedral meshes. J. Geophys. Res., 117:B02309, 2012.
[RajRou03EDSM] K. R. Rajagopal and T. Roubíček. On the effect of dissipation in shape-memory alloys. Nonlinear Anal., Real World Appl., 4:581–597, 2003.
[Roub13NPDE] T. Roubíček. Nonlinear Partial Differential Equations with Applications. Birkhäuser, Basel, 2nd edition, 2013.
[Roub14NRSF] T. Roubíček. A note about the rate-and-state-dependent friction model in a thermodynamical framework of the Biot-type equation. Geophysical J. Intl., 199:286–295, 2014.
[Roub17GMHF] T. Roubíček. Geophysical models of heat and fluid flow in damageable poro-elastic continua. Cont. Mech. Thermodyn., 29:625–646, 2017.
[RoPaMa13QACV] T. Roubíček, C. G. Panagiotopoulos, and V. Mantič. Quasistatic adhesive contact of visco-elastic bodies and its numerical treatment for very small viscosity. Zeitschrift angew. Math. Mech., 93:823–840, 2013.
[RoSoVo13MRLF] T. Roubíček, O. Souček, and R. Vodička. A model of rupturing lithospheric faults with re-occurring earthquakes. SIAM J. Appl. Math., 73:1460–1488, 2013.
[Secc13VOC] R. A. Secco. Viscosity of the outer core. In T. Ahrens, editor, Mineral Physics & Crystallography: A Handbook of Physical Constants, pages 218–226. Wiley, 2013.
[SmyPal07VEOC] D. E. Smylie and A. Palmer. Viscosity of Earth's outer core. Preprint arXiv:0709.3333, 2007.
[ToCaMa09SSLV] N. Tosi, O. Čadek, and Z. Martinec. Subducted slabs and lateral viscosity variations: effects on the long-wavelength geoid. Geophys. J. Int., 179:813–826, 2009.
[TAKS13EEEE] V. Tsai, J.-P. Ampuero, H. Kanamori, and D. Stevenson. Estimating the effect of Earth elasticity and variable water density on tsunami speeds. Geophysical Res. Letters, 40:492–496, 2013.
[DKVD98VLIP] G. A. D. Wijs, G. Kresse, L. Vočadlo, D. Dobson, D. Alfé, M. J. Gillan, and G. D. Price. The viscosity of liquid iron at the physical conditions of the Earth's core. Nature, 392(6678):805–807, 1998.
[WooDeu09TOEF] J. H. Woodhouse and A. Deuss. Theory and observations - Earth's free oscillations. In B. Romanowicz and A. Dziewonski, editors, Seismology and Structure of the Earth: Treatise on Geophysics, volume 1, chapter 1.02, pages 31–65. Elsevier, 2009.
http://arxiv.org/abs/1708.07910v1
{ "authors": [ "Tomáš Roubíček" ], "categories": [ "math.AP", "35K51, 35L20, 35Q86, 74J10, 74R20, 74F10, 76N17, 86A15, 86A17" ], "primary_category": "math.AP", "published": "20170826002216", "title": "Seismic waves and earthquakes in a global monolithic model" }
§ INTRODUCTION During the last decade, the success of the Fermi Large Area Telescope (LAT) and the development and refinement of the Imaging Atmospheric Cherenkov Telescope (IACT) technique by MAGIC, H.E.S.S., and VERITAS have revolutionised our understanding of the non-thermal high-energy Universe. The next generation of IACTs is currently under development, led by the Cherenkov Telescope Array (CTA).[http://www.cta-observatory.org/] CTA will provide unprecedented insights into the γ-ray sky from 20 GeV to 300 TeV, improving the sensitivity of current IACTs by more than an order of magnitude.CTA will be composed of two observatories providing (near to) full-sky coverage. One array will be located in the Northern Hemisphere at La Palma (Spain) and a second array in the Southern Hemisphere at Paranal (Chile). To maximise the scientific output, these arrays are planned to have different designs: the Northern array (hereafter, CTA-N) is planned to be composed of 4 Large-Sized Telescopes (LST) and 15 Medium-Sized Telescopes (MST), whereas the Southern array (hereafter, CTA-S) is planned to be larger, composed of 4 LSTs, 25 MSTs, and 70 Small-Sized Telescopes (SST). The combination of space-borne γ-ray telescopes and ground-based IACTs has proven to be successful in expanding our knowledge of persistent as well as transient phenomena. First, the large collection area of IACTs makes up for the limited payload of space telescopes, allowing us to explore shorter variability time scales with an improved sensitivity for moderate observation times <cit.>. Second, space-borne instruments, in particular the Fermi-LAT, compensate for the limited field of view of IACTs, typically of a few degrees in diameter, as the LAT surveys the whole sky in a matter of hours, having accumulated observation time over the whole γ-ray sky for nearly a decade. For this reason, the all-sky survey conducted by the Fermi-LAT has been a key asset for ground-based telescopes, guiding follow-up observations of potential very-high-energy (E>100 GeV) emitters and triggering targets of opportunity on transient phenomena.The improved differential sensitivity of CTA will enable the observation of fainter sources, significantly increasing the detectable population of Active Galactic Nuclei (AGN), including their average (quiescent) flux states, which in general are not detectable by the current generation of IACTs. Here, we use the information provided above 10 GeV by the Third Catalog of Hard Fermi-LAT Sources (the 3FHL catalog, <cit.>) to make predictions on the extragalactic source populations that both CTA-N and CTA-S will be able to detect over short and moderate telescope exposures. Note that the results presented here make use of average flux states, integrated over the 7 years of LAT exposure, and do not take into account the strong variability of these sources.§ DESCRIPTION OF THE 3FHL CATALOG The Fermi-LAT surveys the whole sky every three hours with excellent sensitivity and angular resolution. The LAT Collaboration recently released the 3FHL catalog, which describes the hardest γ-ray sources in the sky by increasing the lower energy threshold of the analysis to 10 GeV <cit.>, relative to the broad-band catalog 3FGL <cit.>. The 3FHL is built from 7 years of Pass 8 data (while the 3FGL contained 4 years), which provides several improvements in comparison with previous versions of the event-level analysis. The 3FHL lists the position and spectral characteristics of 1,556 sources over the whole sky.
Most of these sources, 1,231 (79% of the total catalog), are associated with sources of extragalactic nature, and 526 (43% of the extragalactic sources) have a known redshift. Note that only 72 of the 3FHL extragalactic sources have already been detected by ground-based telescopes.[See: http://tevcat.uchicago.edu/] Given the low energy threshold of the future CTA, expected to be approximately 20 GeV, the 3FHL is the best available sample of targets for the observatory, providing an excellent opportunity to derive robust and realistic predictions on the persistent sources that will be accessible in the near future.§ CTA DETECTABILITY COMPUTATION The 3FHL provides spectral fluxes in 5 energy bands from 10 GeV to 2 TeV. The energy limits of these bands are 10, 20, 50, 150, 500 GeV and 2 TeV. In most cases, only upper limits are given for the higher energy bands. Since CTA will be sensitive to photons with energies >1 TeV, there are several steps that need to be performed with caution to extrapolate blazar spectra to higher energies:* Spectral shape of the intrinsic emission: Given the 3FHL energy range and the low photon statistics at TeV energies, the spectral flux extrapolation to higher energies is extremely dependent on the function considered to fit the fluxes. * Extragalactic background light (EBL): We must take into account the flux attenuation produced by the pair-production interaction between γ-rays traveling over cosmological distances and photons from the diffuse EBL, e.g. <cit.>. This flux attenuation depends on the energy of the γ-ray photon and the distance to the source.We approach these issues as follows: first, by testing different flux extrapolations to the TeV range, modeled with the following functions, which include the exponential attenuation from the EBL effect; and second, by assuming the flux attenuations provided by <cit.>, which are compatible with current EBL knowledge.* Power-law + EBL attenuation (PL)* Power-law with exponential cutoff + EBL attenuation. An exponential cutoff is added to the power-law at 1/(1+z) TeV (PLE)* Broken power-law + EBL attenuation. For hard sources (with a power-law index Γ > 2), spectra are softened to Γ = 2.5 at 100/(1+z) GeV (BPL)* Log-parabola + EBL attenuation (LP) Note that the extrapolation scenarios above are listed from most optimistic to most pessimistic, meaning that a PL will predict a larger flux at a given TeV energy than a LP.As mentioned in Section <ref>, a large fraction (∼ 57%) of the 3FHL extragalactic sources do not have a known redshift. This adds an additional uncertainty to our flux extrapolations. We address this problem as follows. The 3FHL provides source classes for most of the extragalactic objects. The redshift distributions for BL Lacs, flat-spectrum radio quasars (FSRQs), and blazars of uncertain type (BCUs) are shown in Figure <ref>. We randomly sample these distributions, according to their source class, to attach a redshift to each source of unknown redshift. This procedure is of course not expected to give robust redshifts for individual blazars but should work in a statistical sense, which is the goal of our analysis. Note that we do not take into account the possible bias induced by the increased difficulty in measuring the redshift of a source with increasing distance, which would be difficult to model.
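As a concrete illustration of these schemes, the following Python sketch implements the PLE extrapolation; the pivot energy, the spectral parameters, and the trivial default for the EBL optical depth τ(E,z) are our own placeholder assumptions (the paper uses the attenuation of the cited EBL model, which is not reproduced here).

import numpy as np

def ple_flux(E_TeV, z, F0, gamma, E0_TeV=0.05, tau=lambda E, z: np.zeros_like(E)):
    # PLE: power law with an exponential cutoff at 1/(1+z) TeV,
    # attenuated by EBL absorption exp(-tau(E, z)).
    intrinsic = F0 * (E_TeV / E0_TeV)**(-gamma) * np.exp(-E_TeV * (1.0 + z))
    return intrinsic * np.exp(-tau(E_TeV, z))

# Hypothetical blazar: photon index 2.0, pivot at 50 GeV, redshift 0.3.
E = np.logspace(np.log10(0.02), 1.0, 50)   # 20 GeV to 10 TeV
print(ple_flux(E, z=0.3, F0=1e-11, gamma=2.0)[:3])

A real optical-depth model, evaluated on the same energy grid, would simply replace the tau argument.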
Several tools are used for the spectral simulations and detectability forecasts, which yield consistent results. These tools are CTAmacros,[See https://github.com/cta-observatory/ctamacros] GAEtools <cit.> and Gammapy <cit.>. They use instrument response functions (IRFs) generated from detailed Monte Carlo simulations, which allow an estimate of the CTA performance (these IRFs correspond to the third large-scale production). Common conditions are used in these tools to claim a detection: a 5σ significance level assuming an off-to-on source exposure ratio of 5, a minimum of 10 excess events, and an excess of at least five times the expected systematic uncertainty in the background estimation, which is about 1%.As described in <cit.>, these IRFs were generated using analysis cuts that optimize differential sensitivities for point-like sources, which is ideal for our science case. At the moment, given the huge computing resources required for the production of these IRFs, they have been calculated only for zenith angles of 20^∘ and 40^∘. In our analysis, we estimate the altitude of culmination of each simulated source from each site (La Palma and Paranal, with latitudes 28.76^∘ N and 24.63^∘ S, respectively). For sources culminating between 0 and 30^∘ in zenith angle, the IRF corresponding to 20^∘ is used, whereas sources that culminate at zenith angles of 30 to 50^∘ are simulated using the 40^∘ IRF. We assume that no source is detectable if it culminates at zenith angles larger than 50^∘. This condition excludes less than 5% of the extragalactic sample, combining both CTA sites. § RESULTS Table <ref> shows the number of extragalactic sources detected for each of the proposed spectral models, along with the number of those sources already detected with current IACTs; this TeV information is provided by the 3FHL catalog. These calculations follow the recipe detailed in Section <ref>. As expected, the total number of detectable sources is strongly affected by the intrinsic emission model that is assumed. Note that the LP fits should be taken with caution, as existing TeV observations disfavour such an extrapolation of LAT spectra <cit.>. In the 3FHL, a typical blazar is detected only in the lower energy bins, approximately between 10 and 150 GeV; therefore, a log-parabola extrapolation tends to predict TeV fluxes with a significantly stronger curvature than the one observed by IACTs. Complementarily, <cit.> discusses that using a PLE extrapolation roughly reproduces average flux states observed at TeV energies. Under this extrapolation scheme, the number of detected sources as a function of their class is shown in Table <ref>. Figure <ref> shows in Galactic coordinates (Hammer-Aitoff projection) all extragalactic sources that are predicted to be detected (≥ 5 σ) by CTA-N and CTA-S assuming the PLE extrapolations. This figure shows the improvements that CTA will provide to extragalactic TeV source population studies, which will be carried out by Key Science Projects as well as dedicated proposals. Figure <ref> shows the photon index versus integrated flux (in the LAT energy band) above 10 GeV of all 3FHL extragalactic sources that are visible either from CTA-N or CTA-S (at culminations lower than 50^∘), in comparison with those that could be detected by CTA in 5 and 20 h of telescope exposure. Interestingly, we can see in Figure <ref> that even in 5 h there are sources detected at the 3FHL flux limit. This result indicates that CTA could probably detect a new population of sources with hard spectra and low fluxes still not seen by the LAT.
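The three detection conditions listed earlier can be encoded compactly; in this sketch the 5σ level is evaluated with the Li & Ma (1983) significance, our assumption for the formula behind the quoted off-to-on exposure ratio of 5, and the counts in the example are invented.

import numpy as np

def li_ma_significance(n_on, n_off, alpha=0.2):
    # Li & Ma (1983), Eq. 17; alpha = t_on/t_off = 1/5 for an
    # off-to-on source exposure ratio of 5.
    t1 = n_on * np.log((1 + alpha) / alpha * n_on / (n_on + n_off))
    t2 = n_off * np.log((1 + alpha) * n_off / (n_on + n_off))
    return np.sqrt(2.0 * (t1 + t2))

def is_detected(n_on, n_off, alpha=0.2, bkg_syst=0.01):
    excess = n_on - alpha * n_off      # excess events in the on region
    background = alpha * n_off         # estimated background in the on region
    return (li_ma_significance(n_on, n_off, alpha) >= 5.0
            and excess >= 10.0
            and excess >= 5.0 * bkg_syst * background)

print(is_detected(150, 400))   # invented counts, alpha = 0.2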
CTA will not only increase the number of detected extragalactic sources in the TeV regime, it will also expand the frontier of the farthest VHE sources detected from the ground. As shown in Figure <ref>, a significant number of high-redshift sources in their average state will be detectable by CTA, a few of them even as far as z∼ 1.5. These blazars at high redshift are appealing for different scientific topics, such as estimating the EBL spectral intensities and their evolution <cit.>, studying intergalactic magnetic fields <cit.> and axion-like particles <cit.>, evaluating cosmological properties <cit.>, and also testing Lorentz invariance violations <cit.>. It is well known that blazars suffer flaring episodes, which will make them detectable in shorter exposures and will enable the detection of sources at even higher redshifts <cit.>.§ CONCLUSIONS These results show the CTA potential to expand our understanding of the extragalactic populations of γ-ray emitters and their evolution over redshift. CTA will not only dramatically increase the number of detectable blazars in their average state, it will also enable their spectral study over more than five decades in energy by combining its observations with Fermi-LAT data. Above 10 GeV the sky is completely dominated by BL Lacs <cit.>; however, we have shown that, with moderate observation times, there will be other classes of extragalactic sources detected in their average state, such as tens of FSRQs, radio galaxies, and starburst galaxies. We have also shown the CTA potential for discovering new source populations with hard spectra and low fluxes. As mentioned above, there are fundamental science topics that will benefit from the large number of extragalactic sources that CTA will detect, such as questions related to gamma-ray propagation on cosmic scales.We stress that our results are based on the more persistent sky and that we should expect a large number of detections of new flaring sources, which are not included in our analysis. Information provided by the Fermi-LAT will be key to optimising follow-up observation strategies for CTA. An important uncertainty in our spectral flux extrapolations comes from the large number of blazars with unknown redshift. We encourage dedicated observational campaigns to reduce these uncertainties and also guide follow-up observations <cit.>. §.§.§ Acknowledgments This work was conducted in the context of the CTA Extragalactic Working Group. We gratefully acknowledge financial support from the agencies and organizations listed here: http://www.cta-observatory.org/consortium_acknowledgments
http://arxiv.org/abs/1708.07704v3
{ "authors": [ "T. Hassan", "A. Domínguez", "J. Lefaucheur", "D. Mazin", "S. Pita", "A. Zech" ], "categories": [ "astro-ph.HE", "astro-ph.CO", "astro-ph.IM" ], "primary_category": "astro-ph.HE", "published": "20170825115957", "title": "Extragalactic source population studies at very high energies in the Cherenkov Telescope Array era" }
Correlating Satellite Cloud Cover with Sky Cameras ================================================== Shilpa Manandhar^1 ^*, Soumyabrata Dev^2, Yee Hui Lee^3, and Yu Song Meng^4 ^1Nanyang Technological University Singapore, Singapore 639798, email: ^2Nanyang Technological University Singapore, Singapore 639798, email: ^3Nanyang Technological University Singapore, Singapore 639798, email: ^4National Metrology Centre, Agency for Science, Technology and Research (A^*STAR), Singapore 118221, email: ^* Presenting author and Corresponding author The role of clouds is manifold in understanding the various events in the atmosphere, and also in studying the radiative balance of the earth. Such cloud analysis is conventionally performed mainly via satellite images. However, because of their low temporal and spatial resolutions, ground-based sky cameras are now gaining popularity. In this paper, we study the relation between the cloud cover obtained from MODIS images and the coverage obtained from ground-based sky cameras. This will help us to better understand cloud formation in the atmosphere, both from satellite images and from ground-based observations. Introduction The Moderate Resolution Imaging Spectroradiometers (MODIS) installed on National Aeronautics and Space Administration (NASA) Earth Observing System's (EOS) Terra and Aqua satellites are an excellent source of important, long-term climate data records. MODIS data are increasingly being used in weather-related studies: the precipitable water vapor (PWV) products available from MODIS are now used in severe weather simulations <cit.>, in analyses of cloud optical properties <cit.>, and in studies of aerosol properties <cit.>. Amongst other useful products, one of the important contributions from MODIS images is the cloud mask, which indicates the presence of clouds over a particular area. However, its use is limited because it provides only a top view of cloud formation, effectively neglecting low-lying clouds. It also has low temporal and spatial resolutions, as its satellites pass over Singapore only twice a day. Therefore, images captured using ground-based sky cameras are now slowly gaining popularity amongst remote sensing analysts. We analyze and compare the cloud cover obtained from both satellite- and ground-based images. Data Collection Ground-based Cameras We designed and deployed our custom-built sky cameras on the rooftop of our university building (1.3483^∘ N, 103.6831^∘ E). We refer to our sky cameras as WAHRSIS, which stands for Wide Angled High Resolution Sky Imaging System <cit.>. Such a ground-based sky camera captures the sky scene at an interval of 2 minutes. It captures images in the visible-light spectrum and has higher temporal and spatial resolutions than conventional satellite images. We use the ratio of the red and blue color channels to detect clouds in the captured sky/cloud image <cit.>. These captured sky/cloud images assist us in computing the cloud coverage, which is defined as the fraction of the sky scene covered by clouds. MODIS images There are different levels of MODIS products available online [The MODIS data are available online at <https://ladsweb.modaps.eosdis.nasa.gov/archive/allData/?process=ftpAsHttp path=allData>.]. For this paper we use MODIS level 5 products, which give information on the cloud mask values. The cloud mask values calculated from MODIS products have a spatial resolution of 1 km^2 (1 pixel) and are available twice a day (4 UTC and 7 UTC). The cloud mask can take the values 0, 64, 128 or 192; 0 indicates a 100 % cloudy condition and 192 indicates a cloud-free condition. For this paper, we average the cloud mask values over a given pixel area and normalize them to the range of 0 to 100, such that 0 represents no cloud and 100 represents a fully cloudy condition.
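A minimal sketch of this normalization, together with the pixel-block averaging used in the experiments below; the linear map between the four mask levels and the 0-100 coverage scale is our assumption, anchored only by the stated endpoints (0 for fully cloudy, 192 for cloud free), and NaN marks invalid data points.

import numpy as np

def mask_to_coverage(mask):
    # Map MODIS cloud-mask values {0, 64, 128, 192} to 0-100:
    # 0 -> 100 (fully cloudy), 192 -> 0 (cloud free); linearity assumed.
    return 100.0 * (1.0 - np.asarray(mask, dtype=float) / 192.0)

def block_average_coverage(mask_block):
    # Average over a pixel block (e.g. 3 km x 3 km = 9 pixels),
    # excluding invalid data points, marked here as NaN.
    return np.nanmean(mask_to_coverage(mask_block))

block = np.array([[0, 64, 64], [128, 192, np.nan], [192, 192, 128]])
print(block_average_coverage(block))   # coverage for a synthetic 3x3 block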
Experiments & Results Comparison between WAHRSIS images and MODIS images Figure <ref> shows the comparison between MODIS images and WAHRSIS images. For comparison purposes, MOD and MYD MODIS images captured on 25-Dec-2012 are used. Here, four different colors are used to represent the cloud mask values: blue represents cloud mask values of 192, indicating a 98 % cloud-free condition; green represents cloud mask values of 128, indicating a 92 % cloud-free condition; red represents cloud mask values of 64, indicating a 60 % cloud-free condition; and grey represents cloud mask values of 0, indicating a 100 % cloudy condition. A green circle, clearly visible on the MODIS image taken on 2015/12/25, is used to mark the location of NTU (1.34^∘, 103.68^∘). The MODIS plot shows the sky condition taking the NTU position as the center pixel.We observe that the MODIS MOD image captured on 2015/12/25 at 11:40 AM shows that the sky is almost cloud free at that time, as the color plot is predominantly blue. A WAHRSIS image taken at the same time is shown, which also shows a clear-sky condition. Similarly, for the same day, the MYD image captured at 14:35 PM shows a cloudy condition at the NTU location, as the area is mostly represented by grey, and the sky image captured by WAHRSIS at the same time also shows a fully cloudy condition. These preliminary results suggest a visual correlation between the cloud condition indicated by the MODIS images and that indicated by the sky images. In the next section of the paper, we discuss this correlation in further detail. Cloud coverage from sky images and cloud mask from MODIS In this experiment [The source codes of these experiments are available online at <https://github.com/Soumyabrata/MODIS-cloud-mask>.], all the MODIS data in the year 2015 were considered. We process cloud mask values for a 3 km × 3 km area (9 pixels) with the sky camera as the center location.We compute the average of the entire 9-pixel block (excluding invalid data points). We also compute the cloud coverage of the nearest-in-time image captured by our sky camera. Figure <ref> shows the statistical analysis of the average cloud mask value with respect to the cloud coverage from sky images. We bin the average cloud mask values into 4 distinct bins, as the actual cloud mask has 4 distinct levels.We observe the general trend between cloud mask and cloud coverage: the cloud coverage increases with an increase in the (normalized) cloud mask values. However, there is a higher variation in cloud coverage for higher cloud mask values. This is because of the possible area mismatch between MODIS and the ground-based sky cameras. Conclusion In this paper, cloud mask values obtained from MODIS images are compared to the cloud coverage calculated from sky images. A good agreement is found between the two, as can be seen from the time series plot as well as the statistical plot. This analysis of satellite- and ground-based observations will provide remote sensing analysts with further insights into cloud formation and into the role of clouds in the radiative balance of the earth.
In the future, we plan to deploy multiple sky cameras employing advanced compression schemes <cit.> for continuous weather analysis.The authors would like to thank Joseph Lemaitre for automating the acquisition and processing of MODIS multi-band images into a user-friendly framework.[1] S. H. Chen, Z. Zhao, J. S. Haase, A. Chen, and F. Vandenberghe, “A study of the characteristics and assimilation of retrieved MODIS total precipitable water data in severe weather simulations," American Meteorological Society, vol. 136, pp. 3608–3628, Sept 2008. [2] S. Manandhar, S. Dev, Y. H. Lee, and Y. S. Meng, “Analyzing cloud optical properties using sky cameras," in Proc. Progress In Electromagnetics Research Symposium (PIERS), 2017. [3] J. Wei and L. Sun, “Comparison and evaluation of different MODIS aerosol optical depth products over the Beijing-Tianjin-Hebei region in China," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 10, no. 3, pp. 835–844, Mar 2017. [4] S. Dev, F. M. Savoy, Y. H. Lee, and S. Winkler, “WAHRSIS: A low-cost, high-resolution whole sky imager with near-infrared capabilities," in Proc. IS&T/SPIE Infrared Imaging Systems, 2014. [5] S. Dev, F. M. Savoy, Y. H. Lee, and S. Winkler, “Design of low-cost, compact and weather-proof whole sky imagers for high-dynamic-range captures," in Proc. International Geoscience and Remote Sensing Symposium (IGARSS), 2015, pp. 5359–5362. [6] S. Dev, Y. H. Lee, and S. Winkler, “Systematic study of color spaces and components for the segmentation of sky/cloud images," in Proc. International Conference on Image Processing (ICIP), 2014, pp. 5102–5106. [7] C. J. Deepu and Y. Lian, “A joint QRS detection and data compression scheme for wearable sensors," IEEE Transactions on Biomedical Engineering, vol. 62, no. 1, pp. 165–175, Jan. 2015.
http://arxiv.org/abs/1709.05283v1
{ "authors": [ "Shilpa Manandhar", "Soumyabrata Dev", "Yee Hui Lee", "Yu Song Meng" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20170824175457", "title": "Correlating Satellite Cloud Cover with Sky Cameras" }
§ INTRODUCTION The discovery of new very fast transient phenomena such as Fast Radio Bursts (FRBs) <cit.> motivates the development of new detection techniques to search for brief electromagnetic bursts across the multi-wavelength spectrum <cit.>. Imaging Atmospheric Cherenkov Telescopes (IACTs) provide excellent sensitivity for transient phenomena in the Very High Energy (VHE) gamma-ray regime (E > 50 GeV), mainly due to their large collection area. These telescopes are sensitive to the optical Cherenkov flashes produced by charged secondary particles generated within extensive air showers. In addition, their large reflecting surface and isochronicity make them suitable detectors for faint, short (1 ms – 1 s) optical pulses <cit.>.The central pixel of the MAGIC-II telescope allows simultaneous observations within both the VHE and optical energy ranges. This system will be particularly beneficial for searching for fast optical variable sources as well as other transient phenomena such as FRBs. This device has classically been used to measure the Crab Pulsar light curve <cit.>, but here we also explore its sensitivity to orphan optical flashes.§ THE UPGRADED MAGIC CENTRAL PIXEL SYSTEM The MAGIC central pixel system consists of a fully modified photosensor-to-readout chain at the center of the MAGIC-II telescope camera <cit.>. A standard MAGIC pixel comprises a PMT followed by a preamplifier stage that splits the signal into two branches: a so-called AC branch, which processes Cherenkov light pulses with a high bandwidth, and a DC branch, monitoring the PMT anode current. The central pixel system involves modifying this DC branch in order to increase its bandwidth from the 8 Hz of a normal DC branch to over 3 kHz, which dominates the bandwidth of the whole system. The new DC branch is fed both to the standard DC monitoring system and to an additional optical transmitter that sends the signal down to the counting house. Once there, it is converted and adapted to be read out both by the standard MAGIC DAQ system (dubbed hereafter the Domino readout) and by a dedicated readout in the central pixel PC. In what follows, a brief summary of the MAGIC central pixel system is presented.The old camera of the MAGIC-I telescope, prior to the MAGIC upgrade <cit.>, also incorporated a central pixel system <cit.> that was instrumental in the first detection of the Crab pulsar in the VHE regime by the MAGIC telescopes <cit.>. After the above-mentioned upgrade, the central pixel system was fully refurbished and installed in the MAGIC-II telescope camera.§.§ The Central Pixel Photomultiplier and preamplifier The PMTs installed in MAGIC have several electronic boards attached to them, one of which implements the preamplifier system, which includes both the AC and DC branches. In order to increase the bandwidth of the DC branch up to a few kHz, a few components of the preamplifier had to be exchanged. Care was taken to preserve the high bandwidth of the AC branch, so that the central pixel can also operate as a standard MAGIC pixel.§.§ The Central Pixel signal transmission system The standard MAGIC DC-branch monitoring system involves a low-pass filter of a few Hz.
Therefore, in order to preserve the required few-kHz bandwidth, a dedicated signal transmission system has been implemented, which includes an optical transmitter at the MAGIC camera, the optical fiber transporting the signal down to the MAGIC counting house, and an optical receiver at the counting house itself.§.§ The Optical Transmitter The central pixel signal uses one of the MAGIC optical fibers to deliver its slow (few kHz) signal to the counting house. In order to achieve such optical transmission, a dedicated electrical-to-optical converter has been developed, adapted to the needs of the special central pixel optical receiver. Basically, it contains a single stage consisting of an operational amplifier in a non-inverting configuration driving a VCSEL laser.In Figure <ref>, the board implementing the optical transmitter is shown inside the MAGIC cluster hosting the central pixel. Also shown, in green, is the special socket used to connect the optical fiber that transports the central pixel optical signal.§.§ The Optical Receiver mezzanine In order to process the optical signal that arrives from the camera at the counting house for its digitization, a dedicated receiver circuit is needed. The low-frequency characteristics of the central pixel output signal (a few kHz) make it impossible to use the standard MAGIC receiver analog channels, which involve typical high-pass filters of several MHz. Thus, one channel in a MAGIC receiver board is replaced by the central pixel optical receiver, implemented on a mezzanine board connected to that MAGIC receiver board. In this mezzanine, the optical signal is converted back to electrical, conditioned, and split so that it is sent both (Fig. <ref>) to a Domino readout channel and to the central pixel PC via a LEMO bipolar connector (described in detail in <ref>). §.§ The Central Pixel digitizing system In order to achieve the maximum sensitivity for the central pixel system, its signals are digitized at the MAGIC counting house in a dedicated way, independent of the standard MAGIC readout system. The differential signal produced at the optical receiver mezzanine is digitized by a National Instruments PCIe 6251 M Series ADC card with 16-bit resolution. The ADC card is hosted in a dedicated computer, the so-called central pixel PC, which also stores the central pixel digitized signals produced by the ADC. The ADC samples the central pixel signal at a 10 kHz rate, using an external signal provided by the rubidium clock oscillator of the MAGIC timing system.It is worth mentioning, as described in <ref>, that the central pixel signal is also digitized in a standard MAGIC Domino channel. In this way, the optical Crab pulsation can also be detected by the standard MAGIC DAQ, although with lower sensitivity than with the dedicated ADC channel. However, this detection allows one to verify the MAGIC time-stamping system against the most precise clock nature can provide.§ CENTRAL PIXEL SENSITIVITY FROM PERIODIC OPTICAL PULSES: CRAB PULSAR As previously demonstrated, MAGIC is capable of detecting the Crab pulsation in very short observation times <cit.>. The left panel of Fig. <ref> shows the folded light-curve of the Crab Pulsar resulting from 5 minutes of central pixel data, compared with the one obtained recently by Aqueye <cit.>.
This light-curve was obtained using an analysis equivalent to that in <cit.>, folding the absolute times of each sample with the expected Crab period, in this case using the Jodrell Bank observatory[http://www.jb.man.ac.uk/pulsar/crab.html] radio ephemeris. After the upgrade described in section <ref>, the required time for detecting the Crab pulsation has been reduced to less than 10 s.As a first attempt to estimate the sensitivity of the MAGIC central pixel to orphan ms pulses, we compare the RMS noise with the well-understood Crab Pulsar light-curve. As the central pixel data are affected by higher-frequency noise (mainly caused by the frequency of the power supply), a 1 ms averaging filter is applied to estimate the RMS of the background using off-source data. By using the average flux of the Crab Pulsar within the U band <cit.>, and assuming a pulse shape equivalent to the (finely binned) Crab light-curve taken by Aqueye <cit.>, we convert the voltage of the measured light-curve to magnitudes in the U band. As shown in the right panel of Fig. <ref>, Crab pulses are well below the 1σ level of our background noise, showing the large sensitivity improvement produced by the phase-folding analysis.The estimated sensitivity, using the Crab Pulsar light-curve and a 1 ms averaged background off-source data sample, corresponds to a minimum detectable magnitude (U filter) of m_U∼ 13.4.§ SENSITIVITY TO ISOLATED OPTICAL PULSES: SLEWING TEST To test the central pixel's sensitivity to optical flashes, dedicated observations were performed. Central pixel data were collected during MAGIC's slewing. By using this method, optical flashes (produced by the stars passing through the central pixel field of view - FoV) of known brightness and length were guided into the central pixel. This method allows us to experimentally determine the correlation between the maximum voltage of a pulse and the known magnitude of the star producing it. During standard operation, the MAGIC maximum slewing speed is ∼ 4.7 deg/s, so stars passing through the central pixel FoV (∼ 0.1 deg) produce optical pulses of ∼ 20 ms, smeared by the optical point-spread function (PSF) of the telescope optics. The coordinates of the FoV at a given time are taken directly from MAGIC drive system reports <cit.>. The slewing was performed in the azimuthal direction, fixing the zenith angle to the one corresponding to Polaris, in order to test the central pixel saturation and the clock matching of different sub-systems.As an example, Fig. <ref> shows a 0.16 s time window during the slewing test. Our aim is to correlate the optical pulses measured by the central pixel with the theoretical ones, calculated by Gaussian smearing of the magnitudes of the stars (using <cit.>) within the central pixel's FoV. The width of the Gaussian used is the MAGIC optical PSF. This method allows a direct measurement of the voltage vs. magnitude dependence. The central pixel's sensitivity to isolated optical pulses is determined by extrapolating the fitted curve to the voltage corresponding to a 5 σ excess over the background noise level (as defined in section <ref>). To eliminate the possibility of uncorrelated pulses, only those with profiles matching the telescope slewing speed were considered. As an additional quality cut, only the amplitudes of consecutively correlated peaks were used as input. The resulting calibration curve is shown in Figure <ref>. It must be noted that the quality of this measurement is limited by the instabilities of the MAGIC-II telescope while slewing. Both the exact pointing position and especially the optical PSF may be unstable during these observations, which might account for the dispersion observed in Fig. <ref>.
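The logic of this calibration can be sketched as follows; the voltages and magnitudes below are invented, and the logarithmic (Pogson) relation m = -2.5 log10(V) + ZP is our reading of the linear relation referred to in the text.

import numpy as np

# Invented calibration points: peak voltage of pulses correlated with
# stars of known magnitude crossing the central pixel FoV.
star_mags = np.array([7.0, 7.5, 8.2, 8.8, 9.5])
peak_volts = np.array([2.10, 1.35, 0.70, 0.41, 0.21])

# Flux scales as 10**(-0.4 m) and the pulse voltage is proportional to
# flux, so magnitude should be linear in log10(voltage).
slope, zero_point = np.polyfit(np.log10(peak_volts), star_mags, 1)
print("fitted slope (expect about -2.5):", slope)

# Extrapolate to a 5 sigma excess over the 1 ms RMS noise level.
sigma_noise = 0.004                     # assumed RMS noise, in volts
m_limit = slope * np.log10(5.0 * sigma_noise) + zero_point
print("limiting magnitude for a 1 ms flash:", m_limit)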
Nevertheless, the linear relation between the voltage and expected flux of correlated pulses, and the good agreement with the sensitivity extracted from the Crab Pulsar light-curve, seem to demonstrate that the method works as desired.§ CONCLUSIONS The operating principles of the upgraded MAGIC central pixel, a fine-time-resolution photometer with a multi-channel readout, have been described. The MAGIC telescopes are able to operate simultaneously as VHE and optical telescopes, with excellent sensitivity in both regimes. After the upgrade, MAGIC is able to detect the Crab optical pulsation in less than 10 s.The discovery of new fast transient phenomena, such as FRBs, motivates the study of the MAGIC sensitivity to short-time-scale (millisecond) isolated flashes. By studying the dispersion measured within off-source data and making use of the well-known flux and phaseogram of the Crab Pulsar, we estimated that the MAGIC central pixel is able to detect orphan millisecond optical flashes as faint as ∼ 13.4 magnitudes (in the U band).To test this claim and measure the central pixel response to optical flashes of known brightness, we performed dedicated observations, collecting central pixel data while slewing. By fitting the voltage of correlated pulses with respect to the magnitude of the stars producing them, and extrapolating this linear relation, the sensitivity to detect a 1 ms optical flash is m = 13.5 ± 0.6.§ ACKNOWLEDGEMENTS We would like to thank the IAC for the excellent working conditions at the ORM in La Palma. We acknowledge the financial support of the German BMBF, DFG and MPG, the Italian INFN and INAF, the Swiss National Fund SNF, the European ERDF, the Spanish MINECO, the Japanese JSPS and MEXT, the Croatian CSF, and the Polish MNiSzW.
http://arxiv.org/abs/1708.07698v2
{ "authors": [ "T. Hassan", "J. Hoang", "M. López", "J. A. Barrio", "J. Cortina", "D. Fidalgo", "D. Fink", "L. A. Tejedor", "M. Will" ], "categories": [ "astro-ph.IM", "astro-ph.HE" ], "primary_category": "astro-ph.IM", "published": "20170825114506", "title": "MAGIC sensitivity to millisecond-duration optical pulses" }
Department of Physics, Harvard University, Cambridge, MA 02138, USA Wolfram Research, Somerville, MA 02144, USA Department of Physics, Harvard University, Cambridge, MA 02138, USA School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, USA We study the density of specular reflection points in the geometrical optics limit when light scatters off fluctuating interfaces and membranes in thermodynamic equilibrium. We focus on the statistical mechanics of both capillary-gravity interfaces (characterized by a surface tension) and fluid membranes (controlled by a bending rigidity) in thermodynamic equilibrium in two dimensions. Building on work by Berry, Nye, Longuet-Higgins and others, we show that the statistics of specular points is fully characterized by three fundamental length scales, namely, a correlation length ξ, a microscopic length scale ℓ and the overall size L of the interface or membrane. By combining a scaling analysis with numerical simulations, we confirm the existence of a scaling law for the density of specular reflection points, n_spec, in two dimensions, given by n_spec∝ℓ^-1 in the limit of thin fluctuating interfaces with the interfacial thickness ℓ≪ξ_I. The density of specular reflections thus diverges for fluctuating interfaces in the limit of vanishing thickness and shows no dependence on the interfacial capillary-gravity correlation length ξ_I. Although fluid membranes under tension also exhibit a divergence in n_spec∝(ξ_Mℓ)^-1/2, the number of specular reflections in this case can grow by decreasing the membrane correlation length ξ_M.Statistical mechanics of specular reflections from fluctuating membranes and interfaces David R. Nelson December 30, 2023 ======================================================================================= § INTRODUCTION The intricate dancing pattern of bright points and curves which can be seen as sea surface reflections or as caustics at the bottom of a swimming pool, in the presence of sunlight, echoes the geometry of rippling water surfaces. These specular points and caustic reflections were known historically to the ancient Greeks, and were sketched and appreciated by Leonardo da Vinci <cit.>. Provided light wavelengths are short compared to ripple sizes, they obey the principles of classical geometric optics as embodied in Fermat's principle <cit.>. However, systematic studies of the statistics of the specular points and optical caustics which encode the geometrical features of fluctuating surfaces are relatively recent. Among the theoretical and experimental studies <cit.>, of particular interest is the pioneering work by Munk and Cox <cit.>, who developed a method to measure the roughness of the sea surface agitated by wind-driven waves, thus determining the effect of wind on the mean square surface slope from the spatial distribution of specular reflections. The distribution of specular reflections is also connected to problems in condensed matter and stochastic processes, which include the statistics of zero crossings of random functions and the noise currents embodied in shot noise <cit.>. The statistics of maxima, minima and saddle points for two-dimensional membranes and interfaces is related to the density of specular reflection points of a stochastic scalar field, in one and two dimensions.
The generalization to a variety of physical systems is the subject of extensive works by Berry, Upstill <cit.>, Longuet-Higgins <cit.> and others, inspired in part by the catastrophe theory of caustic formation by light <cit.>. In Refs. <cit.>, Halperin et al. have studied related statistical properties of the zeros of n-dimensional vector fields in d-dimensional space. These points for n=d (or curves for n=d-1) characterize the topological singularities in the orientation of a vector field <cit.>.Here we apply related ideas to specular reflections of various types of membranes and interfaces in thermodynamic equilibrium. By “interface" we mean a boundary between a liquid and a gas phase, where the restoring forces are gravity and surface tension. We use “membrane" to denote systems such as lipid bilayers with an aqueous phase above and below, where the dominant forces are a bending rigidity and an effective surface tension.An analysis of specular reflections off an undulating water surface illuminated by a distant light source such as the sun <cit.> presents fascinating problems linked to the rich statistical dynamics of capillary and gravity waves. With wind-driven wave excitations, generated by the non-equilibrium dynamics of the atmosphere, one must deal with complex nonlinear mode-couplings and energy transfers across length scales associated with driven wave turbulence <cit.>. In this paper, we investigate two simpler models of reflecting surfaces, associated with interfaces and membranes in thermodynamic equilibrium in two dimensions. Exact results are readily generated, which can be checked by straightforward computer simulations. In this sense, our investigation is analogous to the “absolute equilibrium" models of homogeneous, isotropic turbulence, which are interesting in their own right and can sometimes provide insights into more complex problems, such as the direction of turbulent energy cascades <cit.>.Our first system, designed to model specular reflections off, say, air-water interfaces, assumes that the configurations f(x⃗) of a single-valued interface height profile in d dimensions (see Fig. <ref>(a)) are governed by an equilibrium probability distribution P_int[f(x⃗)]∝exp[-F_int[f(x⃗)]/k_BT], with an equilibrium interfacial free energy given by F_int[f(x⃗)]=1/2∫ d^d-1x [σ|∇⃗ f(x⃗)|^2+ρ_0 g f^2(x⃗)]. Here, the first term arises from the gradient expansion of the contribution from a surface tension σ, F_s=σ∫ d^d-1 x √(1+ |∇⃗ f(x⃗)|^2), and the second from the gravitational potential energy, ρ_0g∫_0^h(x⃗)f'df'=1/2ρ_0 g h^2(x⃗), integrated over the surface of an incompressible liquid with height h(x⃗) and mass density ρ_0, in equilibrium with a vapor phase of negligible density. Here g is the gravitational constant and f(x⃗) is the deviation of the fluid height from its equilibrium value h_0. The interfacial length scale ξ_I=√(σ/ρ_0 g) marks the boundary between capillary and gravity wave excitations <cit.>. A similar long-wavelength description describes solid-vapor interfaces above the roughening transition <cit.>.Our second model describes specular reflections off a membrane, e.g., a thermally fluctuating lipid bilayer, suspended in water across a hole of fixed cross-sectional area with zero osmotic pressure difference (see Fig. <ref>(b)). We use the Monge representation to describe a nearly flat membrane embedded in d dimensions located at r⃗(x_1,...,x_d-1)=[x_1,...,x_d-1, f(x⃗)].
In the absence of this area constraint, fluctuating membranes are controlled by a bending free energy, F_b=1/2κ∫ d^d-1x |∇^2 f(x⃗)|^2, characterized by a bending rigidity κ. We assume an incompressible lipid bilayer, and implement the constraint of fixed total membrane area A_0 via a Lagrange multiplier α, thus describing the statistical mechanics by a total free energy, F_tot=F_b-α∫ d^d-1 x√(1+|∇⃗ f(x⃗)|^2). Upon expanding the square root in this integral and neglecting a constant term, the probability of a particular membrane configuration is given by P_mem[f(x⃗)]∝exp[-F_mem[f(x⃗)]/k_BT], where F_mem[f(x⃗)]=1/2∫ d^d-1x [κ|∇^2 f(x⃗)|^2+α|∇⃗ f(x⃗)|^2]. The constant α is fixed by the condition that ∫ d^d-1x⟨√(1+|∇⃗ f(x⃗)|^2)⟩=A_0>A, where the brackets indicate an α-dependent thermal average over P_mem[f(x⃗)] in an ensemble specified by Eq. <ref>. Here A_0 is the area an incompressible membrane would have at zero temperature and A is the projected area, shown in Fig. 1b.In the small gradient approximation, assumed throughout this paper, α is thus determined by ∫ d^d-1x⟨ |∇⃗ f(x⃗)|^2⟩=A_0-A (If A<A_0, α is negative and the membrane can be subject to a buckling instability; in this paper, we assume α>0). Note that the constraint of fixed area leads to a tension-like contribution to the effective free energy. Eq. (<ref>) now defines a membrane length scale ξ_M=√(κ/α). The dominant restoring force for fluctuations with wavelengths λ less than ξ_M is the bending rigidity, while the tension term dominates for λ>ξ_M. Similar free energies arise for one-dimensional polymers in the wormlike-chain approximation with a pulling force applied to the ends <cit.>.In this article, by combining analytics and numerical simulations, we study the interplay of the geometry and statistical mechanics of specular reflection with the material properties that describe membranes and interfaces in thermal equilibrium. We begin by calculating the number of zeros of a random Gaussian scalar field f(x) (or, more generally, the number of times f(x)=y, some fixed height), controlled by probability distributions such as Eqs. (<ref>) and (<ref>), using path integral methods. A generalization of this method allows us to calculate the distribution of specular points, generated by a light source far from the plane of the membrane or interface, as seen by a distant observer, due to reflections from a random surface. In this paper, we focus for simplicity on d=2, i.e. one-dimensional membranes and interfaces. We hope to publish a paper on specular reflections off two-dimensional surfaces in d=3, including an experimental test using reflecting undulating surfaces generated by a three-dimensional printer, in the future.In the following sections, we focus first on the density of specular points for the special case where the light source and the observer both reside on a single line perpendicular to the average plane of the reflecting surface. We then adapt this calculation, using the paraxial approximation <cit.>, to the case of grazing angles, directions far from the average surface normal where the observer nevertheless remains far from the reflecting interfaces or membranes. These calculations allow us to explore how the density of specular points is related to the correlation lengths, ξ_I and ξ_M, introduced above to characterize the height-height correlation functions of membranes and interfaces.
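For the one-dimensional (d=2) membranes analyzed below, the constraint fixing α can be solved numerically; the sketch below anticipates the wavevector band (π/L, π/ℓ) and the convention k_BT=1 used in Sec. II, and all parameter values are placeholders.

import numpy as np
from scipy.optimize import brentq

kappa, L, ell = 1.0, 1000.0, 0.01   # bending rigidity (units of k_B T), size, cutoff
excess_area = 50.0                  # A_0 - A, following the constraint in the text

def mean_sq_gradient(alpha):
    # Equipartition for F = (1/2) int dx [kappa f''^2 + alpha f'^2] gives
    # int dx <f'^2> = (L/pi) int_{pi/L}^{pi/ell} dq / (kappa q^2 + alpha).
    s = np.sqrt(kappa / alpha)
    return (L / (np.pi * np.sqrt(kappa * alpha))) * (
        np.arctan(np.pi * s / ell) - np.arctan(np.pi * s / L))

# Solve int dx <f'^2> = A_0 - A for the tension-like multiplier alpha.
alpha = brentq(lambda a: mean_sq_gradient(a) - excess_area, 1e-8, 1e6)
print("alpha =", alpha, " xi_M =", np.sqrt(kappa / alpha))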
After treating membranes and interfaces in two dimensions, we explore certain geometrical features of the fluctuating surfaces and membranes, such as the average mean curvature, which is closely related to the density of specular reflection points. The interface and membrane correlation lengths ξ_I and ξ_M determine the density of specular reflection points, when combined with a microscopic length scale ℓ and an overall interface or membrane size L. Having tabulated the results of specular reflection theory, we then test the validity of the scaling laws for the density of specular points with Monte-Carlo simulations of interfaces and membranes with varying correlation lengths. § EQUILIBRIUM THERMODYNAMICS OF FLUCTUATING INTERFACES AND MEMBRANES §.§ Free energy and height correlations in two dimensions The configuration of a one-dimensional fluctuating interface or membrane (curve) in two dimensions can be described by the height function f(x), assumed single valued, which characterizes the deviation from the flat state. A long-wavelength free energy that can be used to describe both interfaces and membranes is given by, F_1d = 1/2∫ dx [c(∇^2 f)^2+b(∇⃗ f)^2+a f^2].If we set c=0, and take a to be proportional to the gravitational constant and b to the surface tension, we obtain the capillary-gravity interface model of Eq. (<ref>). Note that the first term in Eq. (<ref>), ∼ c(∇^2 f)^2, could then be introduced in the capillary-gravity model to suppress short-wavelength fluctuations. Here we shall instead simply forbid wavevectors with q≳π/ℓ in the Fourier expansion of the free energy,F_I = 1/2∫ dx [b(∇⃗ f)^2+a f^2],where ℓ is a microscopic length of order the interface thickness.On the other hand, neutrally buoyant membranes subject to an isotropic tension due to an area constraint, as in Fig. 1, can be described by setting a=0,F_M = 1/2∫ dx [c(∇^2 f)^2+b(∇⃗ f)^2].The probability of a given equilibrium configuration f(x) at absolute temperature T is given by P(f)∝exp[-F/k_BT], where k_B is Boltzmann's constant. Henceforth, we rescale energy units such that k_BT=1. It will be convenient to expand f(x) in Fourier modes, f(x)=1/L∑_qf_qe^iqx, where L is the macroscopic interface or membrane size, and we assume periodic boundary conditions. We first calculate the height correlation function, which describes fluctuations of interfaces and membranes in thermodynamic equilibrium. Upon passing to Fourier space, the correlation function for the out-of-plane fluctuation of the two-dimensional interface is described, in the limit of small c, by C(y) = ⟨ f(x_0)f(x_0+y)⟩≈1/2√(ab)e^-|y|/ξ_I, where the capillary-gravity interfacial correlation length is ξ_I=√(b/a). This limit sends the effective ultraviolet cutoff in Fourier space, 1/ℓ=√(b/c), to large values and allows us to use contour integration for q in the complex plane with δ=b^2-4ac>0 (see Appendix A for details). Similar methods show that, in contrast, the second derivative of this correlation function depends explicitly on the ultraviolet cutoff parameter c. When c becomes small, we haveC^(2)(y) ≈-1/21/√(bc)e^-|y|/ξ_I,where C^(2)(y)≡d^2C(y)/dy^2. To determine quantities such as the densities of zero crossings and specular points, we shall need (see below) the ratio of the second derivative of the correlation function to the correlation function itself at y=0. This quantity is governed by two distinct length scales,C^(2)(0)/C(0) ≈ -√(a/c)=-1/ℓξ_I, where the last equality holds, provided we regard the free energy Eq.
<ref> with a, b and c>0 as a model of an interface with an ultraviolet cutoff imposed by the parameter c, where the effective short-distance cutoff for this model is ℓ=√(c/b). As discussed in Appendix A, we work in a regime such that ℓ≪ξ_I≪ L, where L is the macroscopic size of the interface. The first inequality is equivalent to the condition δ=b^2-4ac≫0. §.§ Capillary-gravity wave and fluctuating membrane models We now focus on the two limiting cases a=0 and c=0 in more detail. The free energy given by Eq. (<ref>) with finite a and b and vanishing c is a model of capillary-gravity waves at the liquid-air interface, as discussed above. In this case, a is proportional to the liquid density ρ and the gravitational constant g, a∝ρ g, and b describes the line tension of the one-dimensional interface. It is instructive to recompute the correlation functions for the capillary-gravity model in Fourier space with c=0, while imposing a hard upper cutoff π/ℓ on the allowed wavevectors:C(0) =1/2π∫^+π/ℓ_-π/ℓdq/a+b q^2=ξ_I/barctan[ξ_Iπ/ℓ]≈1/√(ab)=ξ_I/b, C^(2)(0) =1/2π∫^+π/ℓ_-π/ℓ-q^2 dq/a+b q^2≈-1/bℓ, C^(4)(0) =1/2π∫^+π/ℓ_-π/ℓq^4 dq/a+b q^2≈π^2/3bℓ^3, where ℓ, the interface thickness, sets the ultraviolet cutoff, and the capillary-gravity wave interfacial correlation length is ξ_I=√(b/a). In this hydrodynamic treatment, we again consider the regime ℓ≪ξ_I≪ L. Note that this alternative ultraviolet cutoff leads to C^(2)(0)/C(0)=-1/ℓξ_I, a result in agreement with Eq. (<ref>), where the short-distance cutoff was imposed by c. Here, C^(4)(0)=lim_y→ 0 C^(4)(y)=lim_y→ 0d^4C(y)/dy^4=lim_y→ 0⟨ f”(x+y)f”(x)⟩, which we shall need in later sections. We now discuss the case a=0, i.e. a membrane with a tension in two dimensions, or equivalently a semi-flexible polymer in two dimensions with a force applied to the ends <cit.>. In this case, c represents the bending rigidity and b a force or tension. The height correlation function for a 1-dimensional membrane (or stretched semi-flexible polymer) with a tension in two dimensions at zero separation is given by C(0) =1/π∫^∞_π/Ldq/b q^2+c q^4≈Lξ_M/√(bc), C^(2)(0) =d^2C(y)/dy^2|_y=0=-1/π∫^∞_π/Ldq/b +c q^2≈-1/2√(1/bc), C^(4)(0) =d^4C(y)/dy^4|_y=0=1/π∫^π/ℓ_π/Lq^2 dq/b +c q^2≈1/ℓ c, where we impose an infrared cutoff by forbidding wave vectors such that |q|<q_min=π/L, where L is the system size. We have also defined the membrane correlation length to be ξ_M=√(c/b). In our long-wavelength expansion, we consider the limit of large system size compared to the membrane correlation length. We also assume a short-distance cutoff ℓ, of order the membrane thickness, such that ℓ≪ξ_M≪ L. Note that the fluctuation amplitude ⟨ f^2(x)⟩=C(0) diverges with the system size L, but L drops out of Eq. (<ref>) in this limit.
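Before turning to zero crossings, these band-limited integrals are easy to check directly; the sketch below evaluates them numerically for placeholder parameter values with ℓ≪ξ≪L, reproducing the quoted closed forms at the level of scaling (the O(1) prefactors depend on the cutoff convention, as the ≈ signs indicate).

import numpy as np
from scipy.integrate import quad

# Capillary-gravity interface: a = b = 1 (k_B T = 1), xi_I = 1, ell = 0.01.
a, b, ell = 1.0, 1.0, 0.01
Q = np.pi / ell
C0 = quad(lambda q: 1/(a + b*q**2), -Q, Q)[0] / (2*np.pi)
C2 = -quad(lambda q: q**2/(a + b*q**2), -Q, Q)[0] / (2*np.pi)
C4 = quad(lambda q: q**4/(a + b*q**2), -Q, Q)[0] / (2*np.pi)
print(C0, C2, C4)
print(1/np.sqrt(a*b), -1/(b*ell), np.pi**2/(3*b*ell**3))   # quoted forms

# Membrane under tension: b = c = 1, so xi_M = 1; L = 1000, ell = 0.01.
bm, cm, L = 1.0, 1.0, 1000.0
qmin, qmax = np.pi/L, np.pi/ell
C0m = quad(lambda q: 1/(bm*q**2 + cm*q**4), qmin, np.inf)[0] / np.pi
C2m = -quad(lambda q: 1/(bm + cm*q**2), qmin, np.inf)[0] / np.pi
C4m = quad(lambda q: q**2/(bm + cm*q**2), qmin, qmax)[0] / np.pi
print(C0m, C2m, C4m)
print(L*np.sqrt(cm/bm)/np.sqrt(bm*cm), -0.5/np.sqrt(bm*cm), 1/(ell*cm))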
We again Fourier analyze the single valued height function f(x), into plane waves:f(x)=∑_q f_qe^iqx.Upon assuming periodic boundary conditions, f(x+L)=f(x), the allowed wave vectors in Eq. (<ref>) are, q_m=2π/Lm for m=0,±1,±2,... . In the limit L→∞, we again replace the discrete sum, ∑_q_m by an integral over q with dq=2π/L.The requirement that f(x) be real leads to a condition on the complex Fourier amplitudes, namely f^*_q=f_-q with a Gaussian probability distribution function controlled by the negative exponential of the underlying free energy divided by k_B T. The statistical properties of f(x) are fully controlled by the autocorrelation function C(x)=⟨ f(y)f(x+y) ⟩, which can be expressed as C(x)=∫ dq E(q)exp(iqx), where E(q)=⟨ |f(q)|^2⟩ is the “power spectrum" in q space.Consider the joint probability distribution p(f=y,f_1;x) such that p(f=y,f_1;x) df gives the chance that f(x) in the small interval (x+dx) takes a value in the range f∈ [y,y+df] with an undetermined slope f_1=df/dx. We are interested in the probability that f(x) passes through zero, f=0 (or in general f(x)=y), for any value of the slope df/dx in [x,x+dx]. This quantity is given by ∫^∞_-∞ df_1 p(f=y,f_1;x) df=dx∫^∞_-∞ p(f=y,f_1;x)|f_1|df_1. The probability density, p(f=y,f_1;x) can be expressed as a functional integral in terms of a weighted average on all possible configurations of f(x), p[f(x)=g,f'(x)=g_1;x] = ∫𝒟f(y)δ[f(x)-g]δ[f'(x)-g_1]e^-F[f(y)]/∫𝒟f(y) e^-F[f(y)] = 1/2π[-C(0)C^(2)(0)]^1/2exp[-f^2/2C(0)+f^2_1/2C^(2)(0)],where we define, C(x) =⟨ f(y)f(x+y) ⟩C^(2)(x) =d^2 C(x)/dx^2=-⟨df(y)/dydf(x+y)/dy⟩ The derrivation of Eq. (<ref>) is reported in Appendix B.We can now calculate the average number of times the function f(x) in the interval -L/2<x<+L/2 passes through a specific value f=y (subject to periodic boundary conditions), N_f=y = ∫^+L/2 _-L/2dx∫^∞_-∞ p(f=y,f_1;x) |f_1|d f_1= ∫^+L/2 _-L/2dx[-C^(2)(0)/π^2 C(0)]^1/2exp[-f^2/2C(0)]= L[-C^(2)(0)/π^2C(0)]^1/2exp[-f^2/2C(0)]Thus, the number density n_f=y=N_f=y/L of crossings is given by, n_f=y=[-C^(2)(0)/π^2 C(0)]^1/2exp[-y^2/2C(0)]In order to find the density of zero crossings, n_0, we need to simply substitute y=0 in Eq. (<ref>).We can now evaluate the density of crossings for interfaces and membranes from the correlation functions tabulated in Section II. We can express the number of times per unit length the function f(x) crosses the value y in terms of the correlation lengths ξ_I or ξ_M and the microscopic system thickness ℓ and the macroscopic system size L, n_I ≈ 1/π√(ℓξ_I)exp[-y^2/2⟨ f^2(x)⟩] capillary-gravity wave interface n_M ≈ 1/π√(Lξ_M)exp[-y^2/2⟨ f^2(x)⟩]membrane under tensionwhere ⟨ f^2(x)⟩=C(0) is given by Eqs. (<ref>) or (<ref>). Note that in the capillary-gravity interface model the geometric mean of ξ_I and the microscopic cutoff ℓ controls the density of zero crossings, whereas in the membrane model the geometric mean of ξ_M and the macroscopic system size L is the controlling factor. Thus there are many fewer zero crossings in the softer membrane where longer wavelength fluctuations dominate.In both cases the number of crossings falls off rapidly where y≳√(⟨ f^2(x)⟩). Reasonable agreement with these predictions is found using Monte-Carlo simulations methods (see Sec. 
§ STATISTICAL GEOMETRY OF SPECULAR POINTS ON INTERFACES AND MEMBRANES IN THE PARAXIAL APPROXIMATION In this section we use the approach sketched in the last section to determine the distribution of specular reflection points from thermally excited interfaces and membranes in two dimensions. We use simple ideas from the theory of focal singularities <cit.> to determine the geometric condition for the formation of specular points.In the following two subsections, we determine the density of specular points for two-dimensional interfaces and membranes in two limits: large and small angles of incidence. §.§ Nearly normal angles of incidence and reflection We assume that the thermally fluctuating boundary, described by Eq. (<ref>), is a reflecting surface. A source of light emits wavelengths short compared to the undulations of the interface or membrane; hence we can use geometrical optics. The source resides above the surface, in the x-z plane, at S=(x_1,z_1), and we have the observer at O=(x_2,z_2). Provided multiple reflections can be neglected, the condition for an arbitrary point P=(x,f(x)) on the reflecting surface f(x) to be a specular point follows from a straightforward application of Fermat's principle <cit.>. The ray from the source S needs to reach the point P on the surface (see Fig. 5d) and then travel to the observer at O. The distances l_1=SP and l_2=OP are given by l_i=√((x-x_i)^2+(f(x)-z_i)^2)≈ z_i-f(x)+(x_i-x)^2/2z_i, which holds for i=1,2, and where we have taken the limit of positive z_i≫ f, known as the paraxial approximation <cit.>, in which both the observer and source reside far above a fluctuating surface with small deviations from flatness, for an interface or membrane such that ⟨f(x)⟩=0. We assume first that both source and observer have x_i≈0, as in Fig. 4a, i.e., they lie approximately above the origin of our coordinate system, although possibly with different heights z_1≠ z_2. In the next section we consider a more general case with both x_i≠0. Upon assuming a constant propagation velocity in the region above the surface, the path of least time is given by <cit.> d/dx[l_1(x)+l_2(x)]=0, which leads via Eq. (<ref>) to 1/x d f(x)/dx = 1/2[1/z_1+1/z_2]≡ K_1.To determine the density of points satisfying this condition for fixed z_1 and z_2, it is useful to introduce two auxiliary functions,g_1(x) =1/xf'(x)=1/x d f(x)/dx, g_2(x) =f”(x)=d^2 f(x)/dx^2, with K_1≡1/2[1/z_1+1/z_2] as defined above. Similar to the last section, we seek from Eq. (<ref>) the probability of a specular point with g_1=K_1 somewhere in the interval [x,x+dx],p_spec(x,K_1) = ∫^∞_-∞ dg_2 P(g_1=K_1,g_2;x)|g_2-K_1|dx/|x|,where P(g_1,g_2)dg_1dg_2 gives the probability that the functions in Eqs. (<ref>) and (<ref>) assume the specific values g_1 and g_2 in the intervals [g_1,g_1+dg_1] and [g_2,g_2+dg_2].To evaluate Eq. (<ref>), we used the relation |dg_1|=|dg_1/dx|dx=|g_1-g_2|/|x|dx.For the quadratic energy functionals used here, the probability distribution of g_1=1/xdf/dx and the curvature g_2 can be determined by the methods of Appendix B to be P(g_1,g_2;x)=(2π)^-1|M|^1/2exp[-1/2∑_i,j=1,2M_ijg_ig_j],where the 2×2 matrix M_ij is given by M_ij= [ -x^2/C^(2)(0) 0; 0 1/C^(4)(0) ], where C^(2)(0)=-⟨(df(x)/dx)^2⟩ and C^(4)(0)=⟨(d^2f(x)/dx^2)^2⟩ are given in terms of the power spectrum E(q)=⟨ |f(q)|^2⟩ of f(x) by C^(2)(0) =-∫^∞_-∞ q^2 E(q)dq < 0, C^(4)(0) =∫^∞_-∞ q^4 E(q)dq. The probability distribution entering Eq.
(<ref>), is thus given by P(g_1,g_2;x)=|x|/2π[-C^(2)(0)C^(4)(0)]^1/2exp[g_1^2x^2/2C^(2)(0)-g^2_2/2C^(4)(0)]. Upon inserting Eq. (<ref>) into Eq. (<ref>), we find the one-dimensional density of specular points n_s≡ p_spec(x,K_1) for near normal angles of incidence and reflection as n_s = ∫^∞ _-∞ dg_2[|K_1-g_2|/|x|]|x|/2π[-C^(2)(0)C^(4)(0)]^1/2exp[+K^2_1x^2/2C^(2)(0)-g^2_2/2C^(4)(0)]= exp[+K_1^2x^2/2C^(2)(0)] K_1/[-2π C^(2)(0)]^1/2[(2/π)^1/2β^-1exp[-β^2/2] +erf(β/√(2))]=N_sexp[K_1^2x^2/2C^(2)(0)]K_1/[-2π C^(2)(0)]^1/2, where N_s=∫^∞ _-∞n_sdx and we define β=K_1[C^(4)(0)]^-1/2.

Using Monte-Carlo simulations of the fluctuating interfaces and membranes, we test the validity of Eq. (<ref>), which predicts a Gaussian fall-off of the density of specular points as a function of distance from the common location (x=0) of the source and the observer. Figure 4b depicts the results of the numerical simulations (see Sec. IV), represented by points, and the prediction of the continuum theory in Eq. (<ref>), given by the dashed curve. The small deviations from a Gaussian at small x and large x are likely due to higher order gradient couplings, neglected in our hydrodynamic approach.

§.§ Distribution of specular points in the large angle approximation

We now consider a source and observer at more general coordinates (x_i,z_i), still making the paraxial approximation (z_i>>|x_i| and z_i>>f(x)) in Eq. (<ref>) but considering now the limit |x_i|>>x, i.e., source and observer reside far to the left and right of a specular point at x. Thus, we consider configurations in which the observation and incidence angles, measured from the normal and given by tanθ_i=|x_i|/z_i<<1, are finite and large compared to the angles tan^-1(|x|/z_i) that give rise to the highest density of specular points. With these assumptions, vanishing of the first derivative of the total length l=l_1+l_2 defined in Eq. (<ref>) with respect to x leads to the condition d f(x)/dx = -1/2[x_1/z_1+x_2/z_2]≡ k_2, where the source and the observer reside at S=(x_1,z_1) and O=(x_2,z_2), respectively. Similar to the scheme we developed in the last subsection, we now define the auxiliary functions g_1(x) ≡ f'(x)= d f(x)/dx and g_2(x) ≡ f”(x)=d^2 f(x)/dx^2, where g_1(x)=k_2 is the condition for a specular reflection. To calculate the density of specular points, we need the joint probability distribution function P[g_2,g_1;x], which can be calculated explicitly via functional integral methods (see Appendix B), P(g_1,g_2;x) = ∫𝒟f(y)δ[f'(x)-g_1]δ[f”(x)-g_2]e^-F[f(y)]/∫𝒟f(y) e^-F[f(y)] = 1/2π[-C^(2)(0)C^(4)(0)]^1/2exp[g_1^2/2C^(2)(0)-g^2_2/2C^(4)(0)], which is independent of x (provided |x|<<|x_i|) and where we have again assumed a quadratic free energy for interfaces and membranes such as Eq. (<ref>) and (<ref>). The probability distribution for specular reflections is now p_spec(x,k_2)dx = ∫^∞ _-∞dg_2 P(g_1,g_2;x)|dg_1/dx| dx= ∫^∞ _-∞dg_2 P(g_1,g_2;x)|g_2|dx, which leads to an x-independent density of specular points, n_s=1/π(C^(4)(0)/- C^(2)(0))^1/2exp[k^2_2/2C^(2)(0)], where C^(4)(0)=lim_y→0⟨ f”(x)f”(x+y)⟩. Since C^(2)(0)<0, the density decays as a Gaussian in k_2. Note that the density of specular points is proportional to [C^(4)(0)]^1/2=[⟨(∂^2_x f)^2⟩]^1/2, i.e., to the root mean square fluctuations in curvature of the reflecting curve.

We can now use the fluctuating interface and membrane models introduced in Sec. II to study the statistical geometry of the specular points on a membrane in thermodynamic equilibrium.
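Before specializing to the interface and membrane models, Eq. (<ref>) is easy to check numerically. The sketch below (ours, not the simulation code of Sec. IV) synthesizes a periodic Gaussian curve from a tension-plus-bending spectrum ⟨|f_q|^2⟩=k_BT/[L(bq^2+cq^4)] and counts solutions of f'(x)=k_2 per unit length; all parameter values are illustrative.

```python
import numpy as np

# Check of n_s = (1/pi) sqrt(C4/(-C2)) * exp[k2^2/(2*C2)] for a Gaussian random curve.
# A single realization is used, so some statistical scatter is expected.
rng = np.random.default_rng(2)
L, M, b, c, kT, k2 = 100.0, 400, 1.0, 0.05, 1.0, 0.3
q = 2 * np.pi * np.arange(1, M + 1) / L          # allowed wave vectors q_m = 2*pi*m/L
var = kT / (L * (b * q**2 + c * q**4))           # equipartition variance of each mode
amp = (rng.normal(size=M) + 1j * rng.normal(size=M)) * np.sqrt(var / 2)
x = np.linspace(0.0, L, 8 * M, endpoint=False)
# slope f'(x) of the real field f(x) = sum_{q>0} f_q e^{iqx} + c.c.
fp = 2 * np.real((1j * q * amp)[None, :] * np.exp(1j * np.outer(x, q))).sum(axis=1)
ns_measured = np.count_nonzero(np.sign(fp[:-1] - k2) != np.sign(fp[1:] - k2)) / L
C2 = -np.sum(2 * var * q**2)                     # C''(0)   = -<f'^2> < 0
C4 = np.sum(2 * var * q**4)                      # C''''(0) =  <f''^2> > 0
ns_theory = np.sqrt(C4 / -C2) / np.pi * np.exp(k2**2 / (2 * C2))
print(ns_measured, ns_theory)
```

Averaging over many independent realizations brings the counted density toward the Gaussian prediction; the mode truncation at q_max plays the role of the microscopic cutoff ℓ.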
We express this density in terms of the interface or membrane correlation length, ξ_I or ξ_M, the macroscopic size L, and the microscopic dimension ℓ of order the thickness. We focus on the scaling of the density of specular points at infinity, so that k_2=-[x_1/z_1+x_2/z_2]/2=0. In this extreme paraxial limit, the density n_s is characterized only by derivatives of the correlation function C(x)=⟨ f(y)f(x+y)⟩, as in Eq. (<ref>). Upon using the relations in Eqs. (<ref>)-(<ref>) for the capillary-gravity interface, in the limit ξ≫ℓ and c=0 we obtain n_s=1/π(C^(4)(0)/- C^(2)(0))^1/2≈1/√(3)ℓ. Using Eqs. (<ref>)-(<ref>) for fluid membranes (or the equivalent model of a fluctuating stretched two-dimensional polymer) yields n_s=1/π(C^(4)(0)/- C^(2)(0))^1/2≈1/√(ℓξ_M). Eqs. (<ref>) and (<ref>) demonstrate that the density of specular points diverges for both models as the microscopic cutoff ℓ→ 0. Although the specular density on a capillary-gravity wave interface is dominated by the short distance physics and is independent of the correlation length, the density of specular reflections for membranes, n^M_s, is controlled by the reciprocal of the geometric mean of the membrane correlation length and the microscopic cutoff. Fluid membranes generate a large number of specular reflections only in the limit of short correlation lengths ξ_M > ℓ.

The densities of maxima, n_M, and minima, n_m, for a fluctuating random curve are closely related to the density of specular points in the extreme paraxial limit, when both the observer and source reside at infinity. The specular point condition at infinity reduces to f'(x)=k_2=0, which of course is equivalent to being at a maximum or minimum. Since the average density of maxima and minima must be equal, they are related to the density of specular points calculated above by n_m=n_M=n_s/2. We can also investigate the ratio of n_0, the density of zero crossings, to the density of maxima n_M or minima n_m for interfaces and membranes. With the aid of Eqs. (<ref>)-(<ref>) and (<ref>)-(<ref>) we find n_0/n_M=n_0/n_m≈√(ℓ/ξ_I)≪ 1 and n_0/n_m=n_0/n_M≈√(ℓ/L)≪ 1 for the capillary-gravity interface and fluid membrane model, respectively. Thus the density of zero crossings is smaller than the density of maxima (or minima), which is plausible since for a highly fluctuating curve there can exist a large number of maxima and minima between two consecutive zero crossings.

§.§ Monte Carlo simulations of a fluctuating one-dimensional chain model

To explore the validity of the expressions for the density of zero crossings and specular points given for interfaces and membranes in terms of C(x)=⟨ f(y+x)f(x)⟩ and its derivatives at the origin (C(0), C^(2)(0) and C^(4)(0)), we carry out Monte-Carlo simulations. We simulate a fluctuating 1-d surface in two dimensions described by the discretized free energy F_dis=∑_j=1^N[ a'/4(f_j^2+f_j+1^2)+b'/2(f_j-f_j+1)^2+c'/2(2f_j-f_j-1-f_j+1)^2 ], where in the continuum limit our simulation parameters a', b' and c' are related to the parameters appearing in Eq. (<ref>) by a'=aℓ, b'=b/ℓ and c'=c/ℓ^3. The height function f(x) is defined in the interval (0,L), where L is the system size, with periodic boundary conditions. It is discretized by considering the height at neighboring points, f_j and f_j+1, where f_j=f(x=x_j) is defined in the small interval (x_j,x_j+1)=(x_j,x_j+ℓ), j=1,...,N with N=L/ℓ. The continuum energy expression in Eq. (<ref>) can be recovered by taking the limit ℓ/L→ 0 in Eq.
(<ref>).

Monte Carlo simulations are performed on this “chain model" at temperature k_BT (in units of bℓ) with the following move: a random vertex i residing at (x_i,y_i=f(x_i)) is chosen and displaced to (x_i+δ x,y_i+δ y), where δ x and δ y are small independent random real numbers. Moves are accepted or rejected based on the Metropolis criterion, using the Boltzmann factor exp[-F_dis/k_B T] associated with the free energy in Eq. (<ref>). By tracking the time-dependence of correlation functions, we found that a system with N=150 equilibrates after ≈ 10^6 MC steps. One MC step is defined as N vertex moves. Thermodynamic quantities, like the average number of zero crossings or specular points, are measured after equilibration has been reached. We performed the simulations by starting from a configuration where all N points reside on a straight line, equally spaced with separation ℓ.

The results of the Monte Carlo simulations for the average density of zero crossings, n_0, of the fluctuating interface and membrane are illustrated in Figs. 2 and 3. For the capillary-gravity interface model we take c'=0 in Eq. (<ref>), while for the membrane or polymer-like model we take a'=0. The zero crossings are shown as a function of the ratio of the lattice spacing to the correlation length, √(ℓ/ξ), in Figs. 2 and 3 with the corresponding error bars. The dashed line represents the analytical result for the density of zero crossings in Eq. (<ref>), with the correlation function calculated based on the free energy with c=0, given by Eqs. (<ref>)-(<ref>). In Fig. 2b we confirm the Gaussian fall-off of the expected number of f(x)=y passages, n(y), as a function of the scaled specific height y/√(C(0)), for √(ℓ/ξ)=0,0.5,0.9. A similar comparison is made for the density of zero crossings in the membrane model in Fig. 3b. Fig. 4b shows the dependence of the average density of specular points on x in the small angle approximation using the Monte-Carlo simulations (points) and the predictions of the continuum theory (dashed line) given in Eq. (<ref>). It confirms the Gaussian fall-off of the average density of specular points, in agreement with Eq. (<ref>), for the thermally excited interfaces.

Next we simulate the density of specular reflection points from a thermally excited interface dressed with capillary-gravity waves (c=0) as a function of the correlation length ξ and of the distance of the observer from the one-dimensional interface, encoded in k_2=-1/2[x_1/z_1+x_2/z_2], in the extreme paraxial limit (|x_i|≪ z_i). Figures 5a and 6a illustrate the density of specular points at infinity, when k_2=0, as a function of √(ℓ/ξ), where ξ is the relevant interface or membrane correlation length. We show the dependence of the density of specular points on the distance of the observer from the reflecting line, which is encoded in k_2 and described by Eq. (<ref>). The density of specular points was evaluated in the simulation by measuring the slope of the fluctuating curves and computing the expected number of times the local slope satisfies the condition for specular reflection, df/dx=k_2, given by Eq. (<ref>). The density of specular points reaches a constant value for large ξ. The schematic presented in Fig. 5c shows a simulated reflecting interface and the reflected light rays of a light source at infinity. The incoming light rays, parallel to the z axis, are not shown; the number of reflected light rays reaching the point of observation determines the number of specular points.
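The Metropolis update described above can be condensed into a short sketch. The following simplified illustration (not the code used for the figures) moves vertices only in the height direction, whereas the simulations above also displace vertices in x; the parameter values and sweep count are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
N, kT = 150, 1.0
ap, bp, cp = 0.0, 1.0, 0.5        # a', b', c' of F_dis; a'=0 gives the membrane-like model
f = np.zeros(N)

def local_energy(f, j):
    # all terms of F_dis that involve site j (periodic boundary conditions)
    e = 0.0
    for i in ((j - 1) % N, j, (j + 1) % N):
        ip, im = (i + 1) % N, (i - 1) % N
        e += ap / 4 * (f[i]**2 + f[ip]**2)
        e += bp / 2 * (f[i] - f[ip])**2
        e += cp / 2 * (2 * f[i] - f[im] - f[ip])**2
    return e

def sweep(f, step=0.5):           # one MC step = N vertex moves
    for _ in range(N):
        j = rng.integers(N)
        old = f[j]
        dE = -local_energy(f, j)
        f[j] = old + rng.uniform(-step, step)
        dE += local_energy(f, j)
        if rng.random() >= np.exp(min(0.0, -dE / kT)):   # Metropolis accept/reject
            f[j] = old                                    # reject: restore the height

for _ in range(2000):             # equilibration (the text reports ~1e6 MC steps for N=150)
    sweep(f)
g = f - f.mean()                  # subtract the soft q=0 mode before counting crossings
print(np.count_nonzero(np.sign(g[:-1]) != np.sign(g[1:])))
```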
Similarly, Figs. 6a-c depict the results of the simulations for the fluctuating curve described by a=0, which mimics the behavior of a fluctuating membrane or polymer under tensile stress characterized by the coefficient b and the bending modulus c. Fig. 6a shows the expected number of specular points as a function of √(ℓ/ξ_M) for k_2=0 (observer at infinity), governed by Eq. (<ref>), and Fig. 6b illustrates the k_2 dependence for different values of √(ℓ/ξ_M). The dependence of the density of specular points on the distance from the reflection center (x=0) on the reflecting surface is shown in Fig. 4b for both the interface and membrane models, described by c=0 and a=0, respectively. The results of the simulations are compared to the analytical calculation presented in Eq. (<ref>) in the small angle approximation.

§ CONCLUSION

To explore the statistical geometry of specular reflections from fluctuating surfaces in thermal equilibrium, we developed a mapping which connects configurations of thermalized surfaces, described by the relevant elastic constants, to the statistics of zero crossings and specular reflection points. The density of the specular points is proportional to the root mean square curvature of the interface or membrane. This study can be generalized to the statistics of fold, cusp and higher order caustics, to construct a more complex mapping between the geometry, mechanics and thermodynamics of a surface and the statistical geometry of reflected caustic patterns in higher dimensions. More generally, this study suggests new avenues for surface characterization based on the distribution of specular patterns formed in a plane of dimension lower than that of the embedding space. This framework can be used to infer material properties, such as surface tension and correlation lengths of thermalized interfaces and membranes, by studying the lower dimensional pattern of reflections. Upon zooming in on the blurred reflected pattern of coherent light at even smaller length scales, one expects to observe the fine structure of the diffraction pattern associated with the optical wave field, consisting of a complex pattern of dislocation lines, which are disruptions in the reflected wavefront <cit.>. Future studies would be needed to reveal how the geometry and statistical mechanics of the fluctuating interfaces control the fine structure of the diffraction pattern.

§ ACKNOWLEDGMENTS

It is a pleasure to acknowledge stimulating interactions with Massimo Cencini, Daniel A. Beller and Yoav Lahini. We are grateful to Michael Berry for comments on the manuscript. This work was supported by the National Science Foundation, through grant DMR-1608501 and via the Harvard Materials Science Research and Engineering Center, grant DMR-1435999. We would like to dedicate this article to the memory of Pierre C. Hohenberg, whose pioneering work on dynamic critical phenomena with B.I. Halperin and others had a profound influence early in our careers.

vinci Drawing of caustics and light reflections in curved mirrors by Leonardo da Vinci, British library, codex Arundel 263, folio 87 v. (1508). nye J. F. Nye, Natural focusing and fine structure of light, Institute of Physics Publishing, 1st edition (1999). berry M. V. Berry and C. Upstill, IV Catastrophe Optics: Morphologies of Caustics and Their Diffraction Patterns, in Progress in Optics 18:257-346 (1980). berry_2 M. V.
Berry, Journal of the Optical Society of America A Vol. 4, Issue 3, pp. 561-569 (1987)long_1 M. S. Longuet-Higgins, The statistical analysis of a random, moving surface Phil. Trans. R. Soc. A 249 321 87 (1957). long_2 M. S. Longuet-Higgins, Statistical properties of an isotropic random surface Phil. Trans. R. Soc. A 250 157 74 (1957). long_3 M. S. Longuet-Higgins, The statistical distribution of the curvature of a random Gaussian surface Proc. Camb. Phil. Soc. 54 439 53 (1958) ref1 I. M. Fuks and M.I. Charnotskii, Statistics of specular points at a randomly rough surface. J. Opt. Soc. Am. A 23, 73-80 (2006). ref2 A.A. Maradudin and T. Michel, The transverse correlation length for randomly rough surfaces, J. Stat. Phys. 58, 485-501 (1990). ref3 P. Swerling, Statistical properties of the contours of random surfaces, IRE Trans. On Information Theory 8, 315-321 (1962). ref4 S. Simeonov, A.R. McGurn, and A. A. Maradudin, Proc. SPIE 3141, 152- 163 (1997).Munk C. Cox, and W. Munk, Measurement of the roughness of the sea surface from photographs of the sun's glitter, Journal of the Optical Society of America 44, 838 (1954).rice S. O. Rice, Mathematical analysis of random noise Selected Papers on Noise and Stochastic Processes edited by N. Wax, New York: Dover (1954).halp B. I. Halperin, Statistical mechanics of topological defects Les Houches Session XXV Physics of Defects ed R Balian, M Kleman and J-P Poirier, Amsterdam: North-Holland, (1981). speckle J. W. Goodman, Laser Speckle and Related Phenomena, edited by J. C. Dainty, Springer, Berlin, (1975). halp_2 A. Weinrib , and B. I. Halperin, Distribution of maxima, minima, and saddle points of the intensity of laser speckle patterns Phys. Rev. B, 26, 1362 (1982).lax B. I. Halperin and Melvin Lax, Phys. Rev. 148, 722 (1966); 153, 802 (1967). aV. E. Zakharov, V. S. L'vov and G. Falkovich, Kolmogorov Spectra of Turbulence I: Wave Turbulence, Springer Science and Business Media (2012). b S. I. Badulin, A. N. Pushkarev, D. Resio and V. E. Zakharov,Self-similarity of wind-driven seas, Nonlinear Processes in Geophysics, 12, 891 (2005). c V. E. Zakharov , P. Guyenne, A. N. Pushkarev, and F. Dias, Wave turbulence in one-dimensional models, Physica D: Nonlinear Phenomena 152, 573 (2001). d S. A. Orszag, Statistical theory of turbulence, in Fluid Mechanics, Les Houches, eds. R. Balian and J. L. Peube, Gordon and Breach, New York (1973). e U. Frisch, U., Turbulence: The Legacy of AN Kolmogorov, Cambridge University Press (1995). f L. D. Landau, L.D. and E. M. Lifshitz, Fluid mechanics, Course of Theoretical Physics, Elsevier, Boston (1987). g D. R. Nelson, D.R., Defects and geometry in condensed matter physics, Sec. 5.1, Cambridge University Press, Cambridge (2002). h F. C. MacKintosh J. Kas and P. A. Janmey, Elasticity of Semiflexible Biopolymer Networks, Phys. Rev. Lett. 75, 4425 (1995).min G. E. Volovik, V. P. Mineev, Investigation of singularities in superfluid He3 in liquid crystals by the homotopic topology methods, Zh. Eksp. Theor. Fiz. 72, 2256 (1976). fey A. Hibbs and R. Feynman, Quantum Mechanics and Path Integrals, McGraw-Hill (1965).Nelson_Seung H. S. Seung and D. R. Nelson, Phys. Rev. A 38, 1005 (1988).wave_1 I. Simonsen, A. A. Maradudin and T. A. Leskova, Phys. Rev. A 81, 13806 (2010) wave_2 N. B. Baranova, N. B. Mamaev, A. V. Pilipetskii, et. al. J. Opt. Soc. Am. A 73 525 (1983) wave_3 I. Freund, Phys. Rev. E 52, 2348 (1995) wave_4 M. S. Soskin, V. N. Gorshkov, M.V. Vasnetsov, et. al. Phys. Rev. 
A 56, 4064 (1997)

§ APPENDICES

§ CORRELATION FUNCTION CALCULATION

Here we provide details of the correlation function calculations reported in Section II.A for the fluctuating interfaces or membranes in 1+1 dimensions. From Eq. (<ref>) we obtain C(y)=⟨ f(x_0)f(x_0+y)⟩=1/2π∫^+∞_-∞dq e^iqy/a+b q^2+ c q^4. Upon considering the complex q-plane, the denominator defines four poles at q = ± i [b∓√(δ)/2c]^1/2, where δ = b^2-4ac. We assume that δ>0. Upon defining an interfacial capillary-gravity correlation length by ξ_I≡√(b/a) and the ultraviolet cutoff induced by the c-term as ℓ≡√(c/b), this condition simply means that ξ_I>2ℓ. Because all poles reside on the imaginary axis, we also have b>√(δ). By completing the contour in the upper half plane, which encloses the poles at q= i [b±√(δ)/2c]^1/2, we find C(y)=1/2π∫^+∞_-∞dq e^iqy/a+b q^2+ c q^4 =i (e^iqy/2bq+4cq^3)_q=i [b+√(δ)/2c]^1/2+ i (e^iqy/2bq+4cq^3)_q=i [b-√(δ)/2c]^1/2= 1/√(2 δ)[e^-√((b-√(δ)/2c))|y|/√(b-√(δ)/c)-e^-√((b+√(δ)/2c))|y|/√(b+√(δ)/c)]. In a similar fashion we find that the second derivative of the correlation function is given by C^(2)(y) = -1/2π∫^+∞_-∞q^2 dq e^iqy/a+b q^2+ c q^4= 1/2√(2 δ)[√(b-√(δ)/c)e^-√((b-√(δ)/2c))|y|-√(b+√(δ)/c)e^-√((b+√(δ)/2c))|y|]. Note that C^(2)(0) diverges as c→0, reflecting the sensitivity of this quantity to the effective ultraviolet cutoff ℓ=√(c/b) as ℓ→0.

§ FUNCTIONAL INTEGRALS FOR THE PROBABILITY DENSITY OF A FLUCTUATING MEMBRANE

Here we calculate, via functional integral methods, the normalized probability of a function f(y) to have a specific value and slope at position x, given by f(x)=g and f'(x)=g_1. The probability density for these quantities with a fluctuating interface described by our simple quadratic free energy functional F_I[f(y)] (we use the interface free energy Eq. (<ref>) for concreteness, but identical manipulations apply for the membrane free energy Eq. (<ref>)) is given by p[f(x)=g,f'(x)=g_1;x] = ∫𝒟f(y)δ[f(x)-g]δ[f'(x)-g_1]e^-F_I[f(y)]/∫𝒟f(y) e^-F_I[f(y)]= ∫𝒟f(y)δ[f(x)-g]δ[f'(x)-g_1]e^-1/2∫ dy[a f(y)^2+b(df(y)/dy)^2]/∫𝒟f(y) e^-F_I[f(y)]= ∫ ds∫ dt e^-2π i sge^-2π i t g_1⟨ e^2π i s f(x)+2π i t f'(x) ⟩= ∫ ds∫ dt e^-2π i sge^-2π i t g_1e^-2π^2 ⟨ f^2(x)⟩ s^2 -2π^2 ⟨ f'^2(x)⟩ t^2-4π^2 ⟨ f(x) f'(x)⟩ st= ∫ ds∫ dt e^-2π i sge^-2π i t g_1e^-2π^2 C(0) s^2+2π^2 C^(2)(0) t^2, where ⟨·⟩ represents a thermal average, and ⟨ f(x)f'(x)⟩=1/2⟨d/dxf^2(x)⟩=0 with our periodic boundary conditions. We also have C(0)=⟨ f^2(x)⟩ and C^(2)(0)=-⟨(df/dx)^2⟩. Here we used the identity ⟨ e^h(x)⟩=e^1/2⟨h^2(x)⟩, valid for any zero-mean Gaussian probability distribution, and the integral representation of the delta function, δ[f(x)-g]=∫^∞_-∞ ds e^2π i [f(x)-g]s. This probability density for a fluctuating membrane is normalized by construction, ∫^∞_-∞∫^∞_-∞ p[g,g_1]\,dg\,dg_1=1. With the aid of standard Gaussian integrals we find from Eq. (<ref>) that the probability density p[f(x)=g, f'(x)=g_1;x] is in fact independent of x, p[f(x)=g,f'(x)=g_1;x]=1/2π[-C(0)C^(2)(0)]^1/2exp[-g^2/2C(0)+g^2_1/2C^(2)(0)].

Similarly, one can calculate the normalized probability of a function f(y) to have a given slope and curvature, f'(x)=g_1 and f”(x)=g_2, at the position x: p[f'(x)=g_1,f”(x)=g_2;x] = ∫𝒟f(y)δ[f'(x)-g_1]δ[f”(x)-g_2]e^-F[f(y)]/∫𝒟f(y) e^-F[f(y)]= ∫ ds∫ dt e^-2π i sg_1e^-2π i t g_2⟨ e^2π i s f'(x)+2π i t f”(x) ⟩= ∫ ds∫ dt e^-2π i sg_1e^-2π i t g_2e^2π^2 C^(2)(0) s^2-2π^2 C^(4)(0) t^2= 1/2π[-C^(2)(0)C^(4)(0)]^1/2exp[g_1^2/2C^(2)(0)-g_2^2/2C^(4)(0)], where C^(4)(0)=lim_y→0⟨ f”(x+y)f”(x)⟩=⟨(d^2 f/dx^2)^2⟩.
In a similar fashion, we can find the probability density of a stochastic function with fixed higher order derivatives, at position x, with d^n f(x)/dx^n=g_n.
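As a numerical cross-check of the contour-integration result for C(y) in Appendix A, the closed form can be compared against direct quadrature of the Fourier integral. The following sketch is illustrative; the parameter values are arbitrary, chosen so that δ=b^2-4ac>0.

```python
import numpy as np
from scipy.integrate import quad

a, b, c, y = 1.0, 3.0, 1.0, 0.7
delta = b**2 - 4 * a * c                          # = 5 > 0 here
qm = np.sqrt((b - np.sqrt(delta)) / (2 * c))      # decay rates of the two exponentials
qp = np.sqrt((b + np.sqrt(delta)) / (2 * c))
closed = (np.exp(-qm * abs(y)) / np.sqrt((b - np.sqrt(delta)) / c)
          - np.exp(-qp * abs(y)) / np.sqrt((b + np.sqrt(delta)) / c)) / np.sqrt(2 * delta)
numeric = quad(lambda q: np.cos(q * y) / (a + b * q**2 + c * q**4), 0, np.inf)[0] / np.pi
print(closed, numeric)                             # the two values agree to quadrature accuracy
```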
http://arxiv.org/abs/1708.08154v2
{ "authors": [ "Amir Azadi", "David R. Nelson" ], "categories": [ "cond-mat.soft" ], "primary_category": "cond-mat.soft", "published": "20170827235628", "title": "Statistical mechanics of specular reflections from fluctuating membranes and interfaces" }
Network Slicing for Service-Oriented Networks Under Resource Constraints Nan Zhang, Ya-Feng Liu, Hamid Farmanbar, Tsung-Hui Chang, Mingyi Hong, and Zhi-Quan Luo This work is supported by NSF, grant number CCF-1526434, and by NSFC, grant number 61571384. N. Zhang is with the School of Mathematical Sciences, Peking University, China. Email: [email protected] Y.-F. Liu is with the State Key Laboratory of Scientific and Engineering Computing, Institute of Computational Mathematics and Scientific/Engineering Computing, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, China. Email: [email protected] H. Farmanbar is with Huawei Canada Research Center, Ottawa, Canada. Email: [email protected] M. Hong is with the Department of Electrical and Computer Engineering, University of Minnesota, USA. Email: [email protected] T.-H. Chang and Z.-Q. Luo are with Shenzhen Research Institute of Big Data, and the Chinese University of Hong Kong, Shenzhen, China. Emails: [email protected], [email protected]

December 30, 2023
==================================================================================================================================================================================

To support multiple on-demand services over fixed communication networks, network operators must allow flexible customization and fast provision of their network resources. One effective approach to this end is network virtualization, whereby each service is mapped to a virtual subnetwork providing dedicated on-demand support to network users. In practice, each service consists of a prespecified sequence of functions, called a service function chain (SFC), while each service function in a SFC can only be provided by some given network nodes. Thus, to support a given service, we must select network function nodes according to the SFC and determine the routing strategy through the function nodes in a specified order. A crucial network slicing problem that needs to be addressed is how to optimally localize the service functions in a physical network as specified by the SFCs, subject to link and node capacity constraints. In this paper, we formulate the network slicing problem as a mixed binary linear program and establish its strong NP-hardness. Furthermore, we propose efficient penalty successive upper bound minimization (PSUM) and PSUM-R(ounding) algorithms, and two heuristic algorithms to solve the problem. Simulation results are shown to demonstrate the effectiveness of the proposed algorithms. Software Defined Network, Network Function Virtualization, Traffic Engineering.
§ INTRODUCTION Today's communication networks are expected to support multiple services with diverse characteristics and requirements. To provide such services efficiently, it is highly desirable to make the networks agile and software reconfigurable. Network function virtualization (NFV) <cit.> is an important technology to achieve this goal, which virtualizes network service functions so that they are not restricted to the dedicated physical devices.Different from the traditional networking where service functions are assigned to special network hardware, NFV enables service operators to flexibly deploy network functions and service providers to intelligently integrate a variety of network resources owned by different operators to establish a service customized virtual network (VN) for each service request. In practice, each service consists of a predefined sequence of service functions, called a service function chain (SFC) <cit.>. Meanwhile, a service function can only be provided by certain specific nodes, called NFV-enabled nodes. As all of the VNs share a common resource pool, we are led to the problem of network resource allocation to meet diverse service requirements, subject to the capacity constraints at NFV-enabled nodes and at network links. Recently, reference <cit.> proposed a novel 5G wireless network architecture MyNET and an enabling technique called SONAC (Service-Oriented Virtual Network Auto-Creation). In SONAC, there are two key components: software defined topology (SDT) and software defined resource allocation (SDRA). SDT determines a VN graph and a VN logical topology for each service request. The determination of the VN logical topology, also called VN embedding, locates the virtual service functions onto the physical NFV-enabled infrastructures so that each function in the corresponding SFC is instantiated.SDRA maps the logical topology to physical network resources, including both communication and computational resources.In software defined networks, centralized control enables joint VN embedding and SDRA, a problem called network slicing. Specifically, network slicing controls the flow routing such that each flow gets processed at NFV-enabled nodes in the order of service functions defined in the corresponding SFC. There are some works related to the network slicing problem <cit.>. Reference <cit.> considered a simplified problem where there is a single function in each SFC and a single path for each service, and solved the formulated problem approximately. References <cit.> simplified “routing" by either considering only one-hop routing or selecting paths from a predetermined path set. Reference <cit.> considered the so-called consolidated middleboxes where a flow could receive all the required functions. It proposed a two-stage heuristic algorithm to route each flow through a single associated NFV-enabled node. Such formulation is not applicable to the case where each SFC contains multiple functions which need to be instantiated in sequence at NFV-enabled nodes. An important common assumption in <cit.> is that the instantiation of a service function for a traffic flow can be split over multiple NFV-enabled nodes. The service splitting assumption significantly simplifies the optimization problem since no binary variable is needed in the problem formulation. However, service splitting would result in high coordination overhead in practice, especially when the number of service requests is large. 
Reference <cit.> allowed service splitting in a different way by assuming that a service function can be instantiated in the form of multiple instances of virtual network functions (with different throughputs and resource demands) at multiple nodes. References <cit.> did not allow service splitting and assumed the data of any flow are processed in only one NFV-enabled node for any function in the chain. They solved their problems using either existing integer optimization solvers or heuristic algorithms, which may lead to violations of resource constraints.

In this paper, we consider the network slicing problem with practical constraints, where a set of service requests are simultaneously processed and routed. Our considered problem differs from most of the aforementioned works in that we allow traffic flows to be transmitted on multiple paths and require that the data of any flow are processed by only one NFV-enabled node for any function in the corresponding function chain. We formulate the problem as a mixed binary linear program and show that checking the feasibility of this problem is strongly NP-hard in general. Moreover, we propose an efficient penalty successive upper bound minimization (PSUM) algorithm, a PSUM-R algorithm, and two heuristic algorithms for this problem. The simulation results show that the proposed PSUM and PSUM-R algorithms can find a near-optimal solution of the problem efficiently.

§ SYSTEM MODEL AND PROBLEM FORMULATION

In this section, we introduce the model and present a formulation of the network slicing problem. Consider a communication network represented by a graph 𝒢=(𝒱,ℒ), where 𝒱={i} is the set of nodes and ℒ={(i,j)} is the set of directed links. Denote the subset of function nodes (NFV-enabled nodes) that can provide a service function f as V_f. Each function node i has a known computational capacity μ_i, and we assume that processing one unit of data flow requires one unit of computational capacity. Suppose that there are K data flows, each requesting a distinct service in the network. The requirement of each service is given by a service function chain (k), consisting of a set of functions that have to be performed in the predefined order by the network. Different from the aforementioned works <cit.>, we require that each flow k receives each service function in (k) at exactly one function node, i.e., all data packets of flow k should be directed to the same function node to get processed by a service function, so that there is no coordination overhead caused by service splitting in practice. Notice that this requirement does not prevent a common function in different SFCs from being served by different nodes. The source-destination pair of flow k is given as (S(k),D(k)), and the arrival data rate of flow k is given as λ(k). The network slicing problem is to determine the routes and the rates of all flows on the routes while satisfying the SFC requirements and the capacity constraints of all links and function nodes. Let r_ij(k) be the rate of flow k over link (i,j). The capacity of link (i,j) is assumed to be C_ij, which is a known constant. This assumption is reasonable when the channel condition is stable during the considered period of time. The total flow rate over link (i,j) is then upper bounded by C_ij, i.e., ∑_k=1^Kr_ij(k)≤ C_ij, ∀ (i,j)∈ℒ.
To describe VN embedding, we introduce binary variables x_i,f(k) which indicate whether or not node i provides function f for flow k (i.e., x_i,f(k)=1 if node i provides function f for flow k; otherwise x_i,f(k)=0). To ensure that each flow k is served by exactly one node for each f∈(k), we have the following constraint: ∑_i∈ V_f x_i,f(k)=1, ∀ f∈(k), ∀ k. We assume that each function node provides at most one function for each flow: ∑_f∈(k) x_i,f(k)≤ 1, ∀ k, ∀ i. This assumption is without loss of generality. This is because, if a function node can provide multiple services for a flow, we can introduce virtual nodes such that each virtual node provides one function for the flow and all these virtual nodes are connected with each other.

Suppose that the function chain of flow k is (k)=(f^k_1→ f^k_2→…→ f^k_n). To ensure that flow k goes into the function nodes in the prespecified order of the functions in (k), we introduce new virtual flows labelled (k,f): flow k just after receiving the service function f; we denote by (k,f^k_0) the flow k just coming out of the source node S(k) without receiving any service function. See Fig. <ref> for an illustration. Let r_ij(k,f) be the rate of flow (k,f) over link (i,j). Since each function node provides at most one function for each flow (as shown in (<ref>)), the following flow conservation constraints must hold for all nodes i and s=1,…,n: λ(k)x_i,f_s(k)=∑_j:(j,i)∈ℒr_ji(k,f_s-1)-∑_j:(i,j)∈ℒr_ij(k,f_s-1), λ(k)x_i,f_s(k)=∑_j:(i,j)∈ℒr_ij(k,f_s)-∑_j:(j,i)∈ℒr_ji(k,f_s), ∑_j:(S(k),j)∈ℒr_S(k)j(k,f_0)=λ(k), ∑_j:(j,D(k))∈ℒr_jD(k)(k,f_n)=λ(k), where (<ref>) and (<ref>) imply that if x_i,f_s(k)=1, then flow (k,f_s-1) going into node i and flow (k,f_s) coming out of node i both have rate λ(k); otherwise each virtual flow (k,f_s) coming out of node i and going into node i should have the same rate; (<ref>) and (<ref>) ensure that flow k coming out of S(k) and going into D(k) both have rate λ(k). These constraints guarantee that each flow k gets served at function nodes in the order prespecified by (k) and with the required data rate λ(k). By the definitions of r_ij(k,f) and r_ij(k), we have r_ij(k)=∑_f∈(k)∪{f^k_0} r_ij(k,f), ∀ k, ∀ (i,j)∈ℒ. Since processing one unit of data flow consumes one unit of computational capacity, the node capacity constraint can be expressed as ∑_f∑_kλ(k) x_i,f(k)≤μ_i, ∀ i.

Now we present our problem formulation of network slicing, which minimizes the total link flow in the network:

min_{r,x} g(r)=∑_k,(i,j)r_ij(k)
s.t. (<ref>)-(<ref>),
r_ij(k)≥ 0, ∀ k, ∀ (i,j)∈ℒ,
r_ij(k,f)≥ 0, ∀ f∈(k), ∀ k, ∀ (i,j)∈ℒ,
x_i,f(k)∈{0,1}, ∀ i∈ V_f, ∀ f∈(k), ∀ k,

where r:=({r_ij(k)},{r_ij(k,f)}) and x:={x_i,f(k)}. The objective function g(r)=∑_k,(i,j)r_ij(k) is set to avoid cycles in choosing routing paths. There can be other choices of objective functions, such as the cost of the consumed computational resources and the number of activated function nodes. Problem (<ref>) is a mixed binary linear program, which turns out to be strongly NP-hard. The proof is based on a polynomial time reduction from the 3-dimensional matching problem <cit.>.

Checking the feasibility of problem (<ref>) is strongly NP-complete, and thus solving problem (<ref>) itself is strongly NP-hard.

We give the proof of Theorem <ref> in Appendix A.

§ PROPOSED PSUM AND PSUM-R ALGORITHMS

Since problem (<ref>) is strongly NP-hard, it is computationally expensive to solve it to global optimality. In this section, we propose efficient PSUM and PSUM-R algorithms to solve it approximately.
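Before turning to the algorithms, the toy instance below makes formulation (<ref>) concrete: one flow, a one-function chain, and a made-up five-link graph (all names and numbers are ours, purely for illustration). It encodes the binary instantiation variables, the virtual-flow conservation constraints, and the capacity constraints with the PuLP modeling library:

```python
import pulp

nodes = ["S", "u", "v", "D"]
links = [("S", "u"), ("u", "v"), ("S", "v"), ("v", "D"), ("u", "D")]
V_f = ["u", "v"]                       # candidate NFV nodes for the single function f
lam, C, mu = 1.0, 2.0, 1.0             # demand, uniform link capacity, node capacity

prob = pulp.LpProblem("network_slicing_toy", pulp.LpMinimize)
# r[(e, s)] = rate of virtual flow s on link e; s=0 before f, s=1 after f
r = pulp.LpVariable.dicts("r", [(e, s) for e in links for s in (0, 1)], lowBound=0)
x = pulp.LpVariable.dicts("x", V_f, cat="Binary")

prob += pulp.lpSum(r[(e, s)] for e in links for s in (0, 1))       # total link rate
prob += pulp.lpSum(x[i] for i in V_f) == 1                         # exactly one node serves f
for i in V_f:
    prob += lam * x[i] <= mu                                        # node capacity
for e in links:
    prob += pulp.lpSum(r[(e, s)] for s in (0, 1)) <= C              # link capacity

def net_out(i, s):   # outflow minus inflow of virtual flow s at node i
    return (pulp.lpSum(r[(e, s)] for e in links if e[0] == i)
            - pulp.lpSum(r[(e, s)] for e in links if e[1] == i))

prob += net_out("S", 0) == lam                                      # source emits flow (k, f_0)
prob += -net_out("D", 1) == lam                                     # destination absorbs (k, f_1)
for i in V_f:
    prob += -net_out(i, 0) == lam * x[i]    # flow s=0 terminates where f is instantiated
    prob += net_out(i, 1) == lam * x[i]     # flow s=1 originates there
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[prob.status], pulp.value(prob.objective))
```

On this instance the optimal total link rate is 2λ: the flow takes a two-hop path through whichever function node the solver chooses to instantiate.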
The basic idea of our proposed PSUM algorithm is to relax the binary variables in problem (<ref>) and add penalty terms to the objective function to induce binary solutions. The PSUM-R algorithm combines PSUM and a rounding technique so that a satisfactory solution can be obtained more efficiently than by PSUM.

§.§ PSUM Algorithm

Notice that problem (<ref>) becomes a linear program (LP) when we relax the binary variables to be continuous. Problem (<ref>) and its LP relaxation are generally not equivalent (in the sense that the optimal solution of the LP relaxation problem may not be binary). The following Theorem <ref> provides some conditions under which the two problems are equivalent. The proof of Theorem <ref> is given in Appendix B.

Suppose μ_i≥μ̅ for all i, and C_ij≥C̅ for all (i,j), where μ̅=∑_k=1^Kλ(k), C̅=∑_k=1^Kλ(k)(|(k)|+1), and |(k)| denotes the number of functions in (k). Then the LP relaxation of problem (<ref>) always has a binary solution of {x_i,f(k)}. Moreover, the lower bounds in (<ref>) are tight in the sense that there exists an instance of problem (<ref>) such that its LP relaxation problem does not have a binary solution of {x_i,f(k)} if one of the lower bounds is violated.

Theorem <ref> suggests that, if the link and node capacities are sufficiently large, then problem (<ref>) and its LP relaxation problem (which relaxes the binary variables to be continuous) are equivalent. Moreover, if the link and node capacities are fixed, problem (<ref>) becomes more difficult to solve as the number of flows and the number of functions in the SFCs increase, because the lower bounds in Theorem <ref> will more likely be violated. To solve the general problem (<ref>), our basic idea is to add an L_p penalty term to the objective of the LP relaxation problem of (<ref>) to enforce the relaxed variables to end up being binary.

Let x_f(k)={x_i,f(k)}_i∈ V_f. Then, we can rewrite (<ref>) as ‖x_f(k)‖_1=1, ∀ f∈(k), ∀ k. We have the following fact <cit.>.

Fact: For any k and any f∈(k), consider

min_x_f(k) ‖x_f(k)+ϵ1‖_p^p:=∑_i∈ V_f (x_i,f(k)+ϵ)^p
s.t. ‖x_f(k)‖_1=1, x_i,f(k)∈ [0,1], ∀ i∈ V_f,

where p∈(0,1) and ϵ is any nonnegative constant. The optimal solution of problem (<ref>) is binary, that is, only one element is one and all the others are zero (see an example in Fig. <ref>), and its optimal value is c_ϵ,f:=(1+ϵ)^p+(|V_f|-1)ϵ^p. Moreover, the objective function in problem (<ref>) is differentiable with respect to each element x_i,f(k)∈[0,1] when ϵ>0.

Motivated by the above fact, we propose to solve the following penalized problem

min_{r,x} g(r)+σ P_ϵ(x)
s.t. (<ref>)-(<ref>),
r_ij(k)≥ 0, ∀ k, ∀ (i,j)∈ℒ,
r_ij(k,f)≥ 0, ∀ f∈(k), ∀ k, ∀ (i,j)∈ℒ,
x_i,f(k)∈ [0,1], ∀ i∈ V_f, ∀ f∈(k), ∀ k,

where σ>0 is the penalty parameter and P_ϵ(x)= ∑_k∑_f∈(k)( ‖x_f(k)+ϵ1‖_p^p-c_ϵ,f). For ease of presentation, we define z=(r,x) and g_σ(z)=g(r)+σ P_ϵ(x). The following Theorem <ref> reveals the relationship between the optimal solutions of problems (<ref>) and (<ref>). The proof is relegated to Appendix <ref>.

For any fixed ϵ>0, let z^k be a global minimizer of problem (<ref>) with the objective function g_σ_k(z). Suppose the positive sequence {σ_k} is monotonically increasing and σ_k→ +∞. Then any limit point of {z^k} is a global minimizer of problem (<ref>).

Theorem <ref> suggests that the penalty parameter σ should go to infinity to guarantee that the obtained solution {x_i,f(k)} of (<ref>) is binary. In practice, however, the parameter σ only needs to be large enough so that the values of the binary variables {x_i,f(k)} are either close to zero or one.
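Before describing the rounding step, note that the Fact above is easy to check numerically. The small script below (ours) compares the penalty at a binary vertex of the simplex with its value at fractional points:

```python
import numpy as np

# On the simplex ||x||_1 = 1, x >= 0, the concave penalty sum_i (x_i + eps)^p
# is smallest at a binary vertex.
p, eps, n = 0.5, 0.001, 10
penalty = lambda x: np.sum((x + eps) ** p)
vertex = np.zeros(n); vertex[0] = 1.0          # binary point
uniform = np.full(n, 1.0 / n)                  # "most fractional" point
rng = np.random.default_rng(0)
rand = rng.dirichlet(np.ones(n))               # random point on the simplex
print(penalty(vertex), penalty(uniform), penalty(rand))
# vertex gives (1+eps)^p + (n-1)*eps^p, the smallest of the three values
```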
Then, a (feasible) binary solution of (<ref>) can be obtained by rounding. Solving (<ref>) directly is not easy since it is a linearly constrained nonlinear program. To efficiently solve (<ref>), we apply the SUM (Successive Upper-bound Minimization) method <cit.>, which minimizes a sequence of approximate objective functions, each of which is an upper bound of the original objective function. Due to the concavity of P_ϵ(x), the first order approximation of P_ϵ(x) is an upper bound of itself, i.e., P_ϵ(x)≤ P_ϵ(x^t)+∇ P_ϵ(x^t)^T(x-x^t), where x^t is the current iterate. At the (t+1)-st iteration, we solve the following problem:

min_{r,x} g(r)+σ_t+1∇ P_ϵ(x^t)^T x
s.t. (<ref>)-(<ref>),
r_ij(k)≥ 0, r_ij(k,f)≥ 0, ∀ f, ∀ k, ∀ (i,j)∈ℒ,
x_i,f(k)∈ [0,1], ∀ i∈ V_f, ∀ f∈(k), ∀ k.

Notice that each subproblem (<ref>) is a linear program which can be efficiently solved to global optimality.

Adding Additional Constraints. The feasible region of the PSUM subproblem (<ref>) is actually enlarged compared with that of the original problem (<ref>), as the binary variables are relaxed while all the other constraints are unchanged. To improve the feasibility of the obtained solution to the original problem, we will add some valid cuts. This idea is popular in combinatorial optimization. To this end, we add some constraints related to the binary variables and strengthen the node capacity constraints; these are redundant for problem (<ref>) but can significantly reduce the feasible solution set of problem (<ref>).

We first define two new binary variables. Let x_i,f be the binary variable indicating whether node i provides function f (i.e., x_i,f=1 if node i provides f; otherwise x_i,f=0), and let x_i be the binary variable indicating whether node i is active for providing network services (i.e., x_i=1 if node i is active; otherwise x_i=0). By the definitions of the binary variables, we have x_i,f(k) ≤ x_i,f≤ x_i, ∀ i∈ V_f, f∈(k), k. Since the computational resource at each node i is available only when node i is active, the node capacity constraint (<ref>) can be strengthened in the following way: ∑_f∑_kλ(k)x_i,f(k)≤ x_iμ_i, ∀ i. Moreover, for each function f that node i can provide, the computational resource at node i is available for function f only when the node provides function f. Therefore, the consumed computational resource on function f at node i is upper bounded by x_i,fμ_i, as shown in equation (<ref>): ∑_kλ(k)x_i,f(k)≤ x_i,fμ_i, ∀ i∈ V_f, f.

With the above constraints added to problem (<ref>), the problem in the (t+1)-st PSUM iteration becomes

min_{r,x} g(r)+σ_t+1∇ P_ϵ(x^t)^T x
s.t. (<ref>)-(<ref>), (<ref>)-(<ref>),
r_ij(k)≥ 0, r_ij(k,f)≥ 0, ∀ f, ∀ k, ∀ (i,j)∈ℒ,
x_i∈ [0,1], x_i,f∈ [0,1], ∀ i, f,
x_i,f(k)∈ [0,1], ∀ i∈ V_f, ∀ f∈(k), ∀ k.

The proposed PSUM algorithm for solving problem (<ref>) is presented in Algorithm <ref>, where γ and η are two predefined constants satisfying 0<η<1<γ.
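The linearize-and-solve loop can be illustrated on a single simplex block. This is a toy sketch, not the full implementation (which carries all routing variables and constraints); the cost vector and parameters are made up:

```python
import numpy as np
from scipy.optimize import linprog

# Toy PSUM iteration: minimize c^T x + sigma * sum_i (x_i + eps)^p over the simplex
# by linearizing the concave penalty at the current iterate and solving an LP.
c = np.array([1.0, 1.1, 1.3, 2.0])
p, eps, sigma, gamma = 0.5, 0.001, 2.0, 1.1
x = np.full(4, 0.25)                                  # fractional starting point
A_eq, b_eq = np.ones((1, 4)), np.array([1.0])
for t in range(20):
    grad = sigma * p * (x + eps) ** (p - 1)           # gradient of the penalty at x^t
    res = linprog(c + grad, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 4)
    x = res.x
    sigma *= gamma                                    # increase the penalty weight
print(np.round(x, 4))                                  # converges to a binary vertex
```

Because the penalty gradient is large at components near zero, once an LP solution becomes binary it remains binary in later iterations.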
We remark that 1) the parameter ϵ is adaptively updated as the iteration number increases, which turns out to be very helpful in improving the numerical performance of the algorithm <cit.>; 2) since the difference between two consecutive PSUM subproblems lies only in the objective function, the warm-start strategy can be applied in PSUM to solve the sequence of LP subproblems, i.e., the solution of the t-th subproblem is used as the initial point of the (t+1)-st subproblem; 3) reference <cit.> proposed a Penalty-BSUM algorithm that relaxes some equality constraints and applies Block-SUM to solve the penalized problem, while our Penalty-SUM algorithm is designed to enforce the relaxed variables to be binary.

§.§ PSUM-R: A Combination of PSUM and A Rounding Technique

As we have mentioned before, in practice the penalty parameter σ only needs to be sufficiently large so that the values of the binary variables {x_i,f(k)} are either close to zero or one. To obtain a satisfactory solution and save computational effort, we can obtain a suboptimal solution by the PSUM-R algorithm, which combines the PSUM algorithm with an effective rounding strategy. Suppose a solution of problem (<ref>) is obtained after a few PSUM iterations with some binary elements {(x̅_i,f(k))_i∈ V_f} being fractional. We aim to construct a binary solution {x_i,f(k)} on the basis of {(x̅_i,f(k))_i∈ V_f}. However, systematic rounding is difficult since the link and node capacity constraints (<ref>) and (<ref>) couple the transmission of all K flows. Thus we round some elements to one and others to zero in a heuristic way.

Our strategy is to first obtain a binary solution of x, and then solve the routing problem to obtain a solution of r. In particular, if x̅_f(k) is binary, then we simply let x_f(k)=x̅_f(k); for each x̅_f(k) that is not binary, our main idea of rounding is to respect the value of x̅_f(k) (i.e., round the maximum element to one) when there exists one element that is very close to one, and otherwise to give priority to the node with the largest computational capacity. In particular, we first check the value of its maximum element; if the maximum element is sufficiently close to one, then we let the corresponding node provide the function for flow k, that is, if x̅_j,f(k)=max_i∈ V_fx̅_i,f(k)≥θ, where θ∈(0,1) is a predefined positive threshold, then we set x_j,f(k)=1 and x_i,f(k)=0, ∀ i∈ V_f∖{j}; otherwise, we find the node v∈ V_f with the maximum remaining computational capacity and let node v provide the function for flow k, i.e., we set x_v,f(k)=1 and x_i,f(k)=0, ∀ i∈ V_f∖{v}. In this way, we obtain a binary solution {x_i,f(k)}. Notice that the node capacity constraints may be violated after the rounding.

To determine the routing solution r, we fix the values of the binary variables {x_i,f(k)} and solve the original problem with the objective function g(r)+τΔ, and we modify the link capacity constraints to ∑_kr_ij(k)≤ C_ij+Δ, ∀(i,j)∈ℒ, where the new variable Δ is the maximum violation of the link capacities and the positive constant τ is the weight of Δ in the objective function. Finally, we obtain the solution of both the service instantiation and the routing. The PSUM-R algorithm is given in Algorithm <ref>. Moreover, after we obtain the solution, the violations of link and node capacity can be respectively measured by δ_ij=max{0,∑_kr_ij(k)-C_ij}, ∀(i,j)∈ℒ, and π_i=max{0,∑_k∑_fλ(k)x_i,f(k)-μ_i}, ∀ i. If there is no violation of the link and node capacity constraints, the obtained solution is feasible to problem (<ref>).
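The two-branch rounding rule can be sketched for a single flow-function pair as follows (the function name, values, and capacity array are hypothetical):

```python
import numpy as np

# xbar[i]: relaxed values over the candidate nodes V_f; mu_rem[i]: remaining capacity.
def round_assignment(xbar, mu_rem, theta=0.9):
    j = int(np.argmax(xbar))
    if xbar[j] >= theta:           # respect the relaxed solution if it is nearly binary
        return j
    return int(np.argmax(mu_rem))  # otherwise pick the node with most remaining capacity

xbar = np.array([0.45, 0.40, 0.15])
mu_rem = np.array([2.0, 5.5, 1.0])
print(round_assignment(xbar, mu_rem))   # -> 1 (largest remaining capacity)
```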
Practically, the obtained solution is satisfactory if the resource violation is sufficiently small. We will show later in Section <ref> that the PSUM-R algorithm can obtain a satisfactory solution with small resource violations within less time than PSUM. § LOW-COMPLEXITY HEURISTIC ALGORITHMS In practice, service requests arrive randomly with little future information of network resources or other requests, and the network must be sliced as soon as a request arrives. An online algorithm with high competitive ratio is needed <cit.>. In this section, we propose two low-complexity heuristic algorithms to solve problem (<ref>), including one online algorithm (heuristic algorithm I). The idea of our proposed low-complexity algorithms is to decouple the joint service instantiation and traffic routing problem (<ref>) into two separate problems. More specifically, the first step is to determine the set of instantiated nodes given a requested service/requested services, and the second step is to perform traffic routing given the selected instantiated nodes.In particular, after fixing the values of the binary variables to determine the service instantiation, the remaining flow routing can be realized by solving problem (<ref>), which is an LP problem. To ensure that the routing problem always has a feasible solution, we adopt a strategy similar to PSUM-R, i.e., modify the link capacity constraints by (<ref>) and set the objective function to g+τΔ.To determine which function node a service function should be instantiated for each request, Openstack scheduling <cit.> provides a method of filtering and weighting. The filtering step selects eligible nodes in view of computational resource, CPU core utilization, and the number of running instances etc. The weighting step determines the instantiation node after computing a weighted sum cost for each eligible node. Inspired by Openstack scheduling, we perform service instantiation in a similar way. In particular, for each service function in a SFC, we first select eligible function nodes in terms of their remaining computational resource, then compute the weighted sum cost of each eligible node i byz_i=∑_m w_m ×ncost(i,m), ∀ i,where ncost(i,m) is the normalized cost of node i in terms of capability m. For example, if we consider the weighting factors of path length and available computational resource, the weighted sum cost can be computed byz_i=w_1(h^S(k)_i+h^D(k)_i)+w_21/μ̃_i,where h^S(k)_i (resp. h^D(k)_i) is the number of hops between nodes i and S(k) (resp. D(k)), μ̃_i is the current capacity of node i, and w_1, w_2 are weights. After computing the weighted costs, we let the node with the smallest cost be the instantiation node. Notice that there may be violations of node computational capacities. In Algorithms <ref> and <ref>, we present heuristic algorithms I and II respectively. Heuristic algorithm I. We sequentially perform service instantiation and traffic routing for each flow (request). As shown in Algorithm <ref>, for each flow k, we first locate the nodes that provide functions in SFC (k) by filtering and weighting, next route flow k from its source node to its destination node while going through the instantiation nodes in the specified order, then update node capacity and link capacity before proceeding to the next flow.Heuristic algorithm II. We first perform service instantiation for all flows, then route all flows at the same time. 
Since the routing problem considers all flows, the violation on link capacity tends to be smaller than that in heuristic algorithm I.§ NUMERICAL EXPERIMENTS In this section, we present numerical results to illustrate the effectiveness of the proposed algorithms. We shall compare our proposed PSUM algorithm, PSUM-R algorithm, and heuristic algorithms I and II with heuristic algorithm III (which is modified from the algorithm in <cit.>) and heuristic algorithm IV (which is proposed in <cit.>). We give the detailed description of heuristic algorithms III and IV in Appendix D. All simulations are done in MATLAB (R2013b) with 2.30 GHz CPUs. All LP subproblems are solved by the optimization solver Gurobi 7.0.1 <cit.>. To show the performance of these algorithms, we will compare the quality of the obtained solutions in terms of their objective function values, the CPU time, and the amount of violations of the network resource capacities. We define the worst-case violation ratio of link capacity by max_ijδ_ij/C_ij, and the worst-case violation ratio of node capacity by max_iπ_i/μ_i, where δ_ij and π_i are defined in (<ref>) and (<ref>) respectively. We will consider two network topologies: a mesh topology in Fig. <ref> and a fish topology in Fig. <ref>.In the PSUM algorithm, we set T_max=20,  σ_1=2,  ϵ_1=0.001,  γ=1.1, and η=0.7. In the PSUM-R algorithm, we set t_max=7 and θ=0.9. In heuristic algorithms I and II, we set weights w_1=1 and w_2=10max_iμ_i in (<ref>).§.§ Mesh Network TopologyWe consider a mesh network with 100 nodes and 684 directed links. The topology of this network is shown in Fig. <ref>, where the nodes located in the 4 middle columns are function nodes. Suppose there are in total 5 service functions {f_1,…,f_5} and 10 candidate function nodes for each function (|V_f|=10 for each function f). We consider K=30 flows with demands λ(k)=1 for all flow k. The SFC (k)=(f^k_1→ f^k_2) and (S(k),D(k)) are uniformly randomly chosen for each flow (f^k_1≠ f^k_2, S(k),D(k)∉ V_f^k_s,s=1,2). The link capacity C_ij is uniformly randomly chosen in [0.5,5.5] and the node capacity μ_i is uniformly randomly chosen in [0.5,8].We randomly generate 50 instances of problem (<ref>) and apply the six algorithms to solve them. Let g_PSUM^* be the objective value of problem (<ref>) at the solution returned by the PSUM algorithm and letg_LP^* be the optimal value of the corresponding LP relaxation problem. We find that g^*_PSUM=g^*_LP in 4 instances and the objective value ratio g^*_PSUM/g^*_LP is less than 1.09 in all of 50 instances, which implies that the solutions returned by the PSUM algorithm are close to be global optimal. We also find that in 21 instances the PSUM-R algorithm already obtains a feasible solution before the rounding step, which further shows that the PSUM algorithm converges fast. As shown in Fig. <ref>, the objective value ratios corresponding to the PSUM-R algorithm and heuristic algorithm III are in [1.0, 1.26], the ratios corresponding to heuristic algorithms I and II are in [1.25, 1.7), and those corresponding to heuristic algorithm IV are in [2.2, 3.5] which are much larger. In terms of the returned objective value, the PSUM algorithm gives the best solutions and heuristic algorithm IV performs the worst among the six algorithms. The statistics of worst-case violation ratios are plotted in Fig. <ref>. 
Since there is no resource violation in the solutions obtained by the PSUM algorithm and heuristic algorithm IV, we only show the violation ratios returned by PSUM-R, heuristic algorithms I, II, and III. From Fig. <ref> we can see that the violation ratios returned by heuristic algorithms I and II are small, and those returned by heuristic algorithm III are generally much larger. Moreover, the violation ratios of node capacity returned by heuristic algorithms I and II are smaller than those returned by PSUM-R and heuristic algorithm III, as the node capacity has already been taken into consideration in the service instantiation step in our proposed heuristic algorithms I and II. §.§ Fish Network Topology We consider another network with 112 nodes and 440 directed links. The topology of this network is shown in Fig. <ref>, where the 6 diamond nodes are function nodes and the triangular nodes cannot be chosen as a source node of any flow. We consider K=20 flows with a common destination which is the circular node in Fig. <ref>. According to the distance (counted by the number of hops) to the destination node, the network nodes are divided into 11 layers. We set the node capacity μ_i to 16 for any function node i, and choose the value of link capacity C_ij from [1, 55] which is further divided into 10 sub-intervals. In particular, the capacities of the links connecting layer m+1 and layer m are uniformly randomly chosen from the (11-m)-th sub-interval. Suppose there are in total 4 service functions {f_1,…,f_4} and all of them can be provided by any of the 6 function nodes.In each instance of problem (<ref>), the demand λ(k) is an integer number randomly chosen in [1,5], the source node S(k) is randomly chosen from the available network nodes excluding the function nodes, and the SFC (k) is an ordered sequence of functions randomly chosen from {f_1,…,f_4} for each flow. We first set the chain length to 1, i.e., each SFC (k) consists of only one service function, and randomly generate 50 instances of problem (<ref>). We find thatg^*_PSUM=g^*_LP in 42 instances and g^*_PSUM-R=g^*_LP in 45 instances. This implies that LP relaxation is tight for at least 42 instances (in the sense that the LP relaxation of problem (<ref>) and problem (<ref>) have same optimal values) and the PSUM algorithm finds global optimal solutions for these instances. Moreover, the objective value ratio g^*_PSUM/g^*_LP is less than 1.002 in all of 50 instances, which implies that the obtained solution is very close to the global optimal solution. As shown in Fig. <ref>, heuristic algorithm III generates better solutions in terms of the objective value among the heuristic algorithms. The statistics of worst-case violation ratios are plotted in Fig. <ref>.We can see from Fig. <ref> that there is no link/node capacity violation in the solutions obtained by PSUM-R and heuristic algorithms I and II in over 40 simulations, while larger violations on node capacity in the solutions obtained by heuristic algorithm III.We set the chain length to 2, i.e., each SFC (k) consists of two different service functions. Fig. <ref> shows the statistics of the objective value ratios. In this case, the ratio g^*_PSUM/g^*_LP is below 1.09 in all 50 instances and is below 1.01 in 30 instances, and the ratios returned by heuristic algorithms IV are in [1.3, 1.7], which are very similar to those returned by heuristic algorithms I and II. 
One reason for the better performance of heuristic algorithm IV in this topology (compared to the mesh topology in Section <ref> where the chain length is also 2) is that, the multi-layer routing problem solved in heuristic IV becomes less complicated when |V_f| decreases. Fig. <ref> plots the violation ratios, from which we can see that the violation ratios become larger (compared to the case where the chain length is 1), especially for heuristic algorithm III.From the results above we find that the performance of PSUM, PSUM-R and all four heuristic algorithms deteriorates when the chain length increases from 1 to 2. As shown in Theorem <ref>, if the link and node capacity are fixed, the LP relaxation of problem (<ref>) and itself are less likely to be equivalent as the chain length increases, and problem (<ref>) becomes more difficult to solve. Therefore, the gap between g^*_LP and g^*_PSUM tends to become larger when the chain length increases. Finally, we plot the CPU time (in seconds) per instance with different problem sizes in Fig. <ref>. Among the six algorithms, heuristic algorithms I and IV are the fastest ones as the LP problems to be solved are in small scales. The PSUM algorithm requires more time. Meanwhile, the time of PSUM-R (which performs 7 PSUM iterations) is about 80% of the time of PSUM (whose maximum number of iterations is set to 20), which means the number of iterations performed in PSUM to obtain a feasible solution is usually much smaller than 20. Heuristic algorithm III is more time-consuming than PSUM. This is because heuristic algorithm III usually needs to solve more LP problems and the warm-start strategy cannot be used in solving the LP problems.§.§ Summary of Simulation Findings From the above simulation results we can conclude that: * Our proposed heuristic algorithms I and II are efficient and can find satisfactory solutions with moderate resource violations;* Heuristic III gives better solutions than heuristic algorithms I, II, and IV in terms of objective values, but has larger resource violations and is more time-consuming;* Heuristic algorithm IV can find feasible solutions with satisfactory objective values, but its performance is not stable under different numerical settings;* Our proposed PSUM algorithm guarantees perfect satisfaction of resource constraints and returns the best solution among all algorithms, but is slower than the heuristic algorithms I, II, and IV (albeit it is faster than heuristic algorithm III);* Our proposed PSUM-R algorithm achieves a good balance of solution quality and algorithm efficiency.§ CONCLUSION In this work, we study the network slicing problem. Different from most of the existing works, we assume that each flow receives any service function in the corresponding service function chain at exactly one function node, a requirement that is strongly motivated by reducing practical coordination overhead. We formulate the problem as a mixed binary linear program and prove its strong NP-hardness. To effectively solve the problem, we propose a PSUM algorithm, a variant PSUM-R algorithm, and two low-complexity heuristic algorithms, all of which are easily implemented. Our simulation results demonstrate that PSUM and PSUM-R can approximately solve the problem by returning a solution that is close to the optimal solution. Moreover, the PSUM algorithm completely respects the resource capacity constraints, and the PSUM-R algorithm achieves a good balance of solution quality and algorithm efficiency. 
§ ACKNOWLEDGMENT The authors would like to thank Navid Reyhanian for his help with the numerical simulations. § PROOF OF THEOREM <REF> We will prove that for an instance of problem (<ref>), the problem of checking its feasibility is as hard as a 3-dimensional matching problem, which is known to be strongly NP-complete. We first construct an instance of problem (<ref>) as follows.
* The set of service functions: F=F_1∪ F_2∪ F_3, where F_1, F_2, F_3 are disjoint.
* The SFC of flow k: (k)=(f^k_1→ f^k_2→ f^k_3), where f^k_j∈ F_j, j=1,2,3, k=1,…,K. Moreover, the SFCs of any two flows are different in the sense that f^k_j≠ f^m_j for all k≠ m, j=1,2,3.
* The set of network nodes: V=S∪ X∪ Y∪ Z∪ D, where S is the set of source nodes, D is the set of destination nodes, X, Y, Z are disjoint sets of function nodes, and these sets have the same number of elements, i.e., K=|S|=|X|=|Y|=|Z|=|D|. See Fig. <ref> for an illustration. Moreover, any node x∈ X can provide all functions in F_1, and for any function f∈ F_1, V_f=X; any node y∈ Y can provide all functions in F_2, and for any function f∈ F_2, V_f=Y; any node z∈ Z can provide all functions in F_3, and for any function f∈ F_3, V_f=Z.
* The set of links consists of 3 parts: ℒ={(s,x) | s∈ S, x∈ X} ∪ ℒ_R ∪ {(z,d) | z∈ Z, d∈ D}, where ℒ_R is determined by a set R⊆ X× Y× Z in the sense that any (x,y,z)∈ R implies that there exist directed links (x,y)∈ℒ_R and (y,z)∈ℒ_R.
* All flows have the same rate, i.e., λ(k)=λ for all k.
* The link capacity C_ij is sufficiently large, i.e., C_ij≥ 4Kλ for any (i,j)∈ℒ.
* For any function node i∈ X∪ Y∪ Z, the value of μ_i is chosen to ensure that node i provides at most one function, e.g., μ_i∈[λ, 2λ), which guarantees that function node i provides at most one function due to the limited node capacity.
In the following, we prove that the instance of problem (<ref>) constructed above has a feasible solution if and only if there exists a 3-dimensional matching of R. (i) Problem (<ref>) has a feasible solution ⟹ there exists a 3-dimensional matching of R. Since problem (<ref>) has a feasible solution, for each flow k there exist function nodes x^k∈ X, y^k∈ Y, z^k∈ Z such that x^k provides f^k_1, y^k provides f^k_2, z^k provides f^k_3, and (x^k,y^k), (y^k,z^k) are directed links in ℒ_R. Let us define a subset of X× Y× Z by M={(x^k,y^k,z^k) | k=1,…,K}. By the definition of ℒ_R, we have M ⊆ R. Moreover, for any (x^k,y^k,z^k), (x^t,y^t,z^t)∈ M (k≠ t), we have x^k≠ x^t, y^k≠ y^t, z^k≠ z^t. This is because each node provides at most one function. Therefore, M is a 3-dimensional matching of R. (ii) M⊆ R is a 3-dimensional matching ⟹ problem (<ref>) has a feasible solution. Since M is a 3-dimensional matching of R, M consists of K triplets. We can build a one-to-one mapping between the K flows and the K triplets in M. We denote the triplet that flow k is mapped to as (x^k,y^k,z^k). Let x^k provide f^k_1, y^k provide f^k_2, and z^k provide f^k_3 for any flow k. Since (x^k,y^k,z^k)∈ R, we have (x^k,y^k), (y^k,z^k)∈ℒ_R, and for any t≠ k we have x^k≠ x^t, y^k≠ y^t, z^k≠ z^t. Thus this instantiation strategy is feasible. The routing problem is also feasible because the link capacity is sufficiently large. In this way, we obtain a feasible solution of problem (<ref>). Clearly, the construction of this instance can be completed in polynomial time. Since the 3-dimensional matching problem is strongly NP-complete, finding a feasible solution of (<ref>) is also strongly NP-complete, and thus solving problem (<ref>) itself is strongly NP-hard.
The proof is completed. § PROOF OF THEOREM <REF> By the constraint ∑_f∈(k) x_i,f(k)≤ 1 and the node capacity constraint ∑_f∑_kλ(k) x_i,f(k)≤μ_i, we have ∑_k∑_fλ(k)x_i,f(k)≤∑_kλ(k)· 1=μ̅. By the relationship between r_ij(k) and r_ij(k,f) and the fact that r_ij(k,f)≤λ(k) for all (i,j), f∈(k), we have ∑_k r_ij(k)=∑_k∑_f∈(k)∪{f^k_0} r_ij(k,f)≤∑_kλ(k)(|(k)|+1)=C̅. Therefore, the link and node capacity constraints are automatically satisfied if μ_i≥μ̅ for all i and C_ij≥C̅ for all (i,j). Then the LP relaxation of problem (<ref>) reduces to

min ∑_(i,j)∑_k∑_f∈(k)∪{f^k_0} r_ij(k,f)
s.t. ∑_i∈ V_f x_i,f(k)=1,  ∀ k, f∈(k),
     ∑_f∈(k) x_i,f(k)≤ 1,  ∀ k, i,
     flow conservation constraints,
     r_ij(k,f)≥0, x_i,f(k)∈[0,1],  ∀ k, f, i,

which decouples among the different flows. Thus, solving problem (<ref>) is equivalent to solving K subproblems, where the kth subproblem aims at minimizing the total link rate in directing flow k from its source node to its destination node while sequentially going through the instantiation nodes in the order of the functions in the chain (k). Let us denote ℙ_k as the set of shortest paths from S(k) to D(k) that sequentially go through nodes providing f^k_1, f^k_2, …, f^k_n. We have the following claim, which can be proved by contradiction. Claim: The optimal solution of problem (<ref>) for each flow k is to transmit with rate λ(k) (i.e., λ(k) amount of flow) along any path in ℙ_k. In fact, we can always find an optimal solution of problem (<ref>) with binary components {x_i,f(k)}. For each flow k, we select a path from ℙ_k and an instantiation node on the path for each required service function (this is feasible by the definition of ℙ_k), and let all data of flow k be transmitted on this path. Then we obtain a solution of problem (<ref>). According to the above claim, such a solution must be an optimal solution of problem (<ref>). Therefore, problem (<ref>) has an optimal solution with binary components {x_i,f(k)}, which implies the first conclusion in Theorem 2. Next, we show that the bounds in (<ref>) on {μ_i, C_ij} are tight by constructing an instance of problem (<ref>) in which some μ_i or C_ij is below the bound and the LP relaxation does not have an optimal solution with binary components {x_i,f(k)}. Let us consider the network shown in Fig. <ref>. Suppose there is one flow with demand λ=1, the source node is S and the destination node is D, and the service function chain is ={f_1}. In this network, only nodes 3 and 6 can provide f_1. By (<ref>), we can compute the bounds for this case: μ̅=λ=1, C̅=2λ=2. Since V_f_1={v_3, v_6}, the shortest path is ℙ_1={(S, v_1, v_2, v_3, v_1, v_2, D)} with 6 hops (see the path in red in Fig. <ref> (a)), and another feasible path is (S, v_4, v_5, v_6, v_7, v_8, v_9, D) with 7 hops (see the path in blue in Fig. <ref> (b)). To minimize the total-link-rate objective function in problem (<ref>), the flow should transmit as much data as possible on the shortest path ℙ_1. (i) Let μ_3=1-ϵ<μ̅ where ϵ∈(0,1), let μ_6>μ̅, and let C_ij be no less than C̅ for every link (i,j). Since v_3 is on the shortest path and μ_3=1-ϵ, v_3 can process at most 1-ϵ units of data. Therefore, in the optimal solution of the LP relaxation, 1-ϵ units of data are transmitted on the shortest path ℙ_1, while the remaining ϵ units of data are transmitted on the other feasible path. The optimal solution of {x_i,f(1)} is not binary.
(ii) Let C_12=2(1-ϵ)<C̅, let the capacities of the other links be larger than C̅, and let the capacity of every function node be larger than μ̅. Since link (1,2) is on the shortest path, by the same analysis as in (i), the amount of data transmitted on the shortest path is at most 1-ϵ. The remaining ϵ units of data must be transmitted on the other feasible path. Therefore, the optimal solution of {x_i,f(1)} of the LP relaxation is not binary. From the above two cases we can conclude that the lower bounds of {μ_i, C_ij} given in (<ref>) are tight. The proof is completed. § PROOF OF THEOREM <REF> For ease of presentation, let w denote the collection of all optimization variables, let w^k be a global minimizer of problem (<ref>) with the objective function g_σ_k(w), and define P^k=P_ϵ(w^k) and g^k=g(w^k). Since w^k is a global minimizer of problem (<ref>) with the objective function g_σ_k(w), it follows that g_σ_k(w^k)≤ g_σ_k(w^k+1) and g_σ_k+1(w^k+1)≤ g_σ_k+1(w^k) for all k. Combining the above with the assumption σ_k≤σ_k+1, we obtain σ_k(P^k-P^k+1)≤ g^k+1-g^k≤σ_k+1(P^k-P^k+1), ∀ k, which shows that {g^k} is increasing and {P^k} is decreasing. Suppose that w^* is a global minimizer of problem (<ref>). Then P_ϵ(w^*)=0. By the definition of w^k, we have g_σ_k(w^k)≤ g_σ_k(w^*)=g(w^*), which further implies that 0≤ g^k+σ_k P^k≤ g(w^*). This, together with the facts that g^k≥ 0, P^k≥ 0, and σ_k→ +∞, shows that σ_k P^k→ 0 and P^k→ 0 as k→ +∞. Let w̄ be any limit point of {w^k}, and let {w^k}_K be a subsequence converging to w̄. Since P^k→ 0, we have P_ϵ(w̄)=0, which shows that w̄ is feasible for (<ref>). Furthermore, taking the limit along K in (<ref>), we have g(w̄)≤ g(w^*). Therefore, g(w̄)=g(w^*) and w̄ is a global minimizer of (<ref>). § DESCRIPTION OF HEURISTIC ALGORITHMS III AND IV As shown in Algorithm <ref> below, heuristic algorithm III is modified from the heuristic algorithm in <cit.>. In this modified algorithm, we denote the set of binary variables {x_i,f(k)} as 𝒳, the set of {x_i,f(k)} taking the value one as 𝒳_1, and those taking the value zero as 𝒳_0. The basic idea is to first determine the values of the binary variables by a "bootstrapping iteration" and "greedy selection" and then perform traffic routing. Heuristic algorithm IV is proposed in <cit.>, and we describe it in Algorithm <ref>. This heuristic algorithm reduces solving problem (<ref>) to solving a sequence of subproblems defined between two consecutive layers. A layer f is defined as the set of function nodes in V_f (i.e., the set of nodes that can provide function f), and layer f^k_0 (resp. f^k_n+1) refers to the source (resp. destination) node of flow k. For each flow k, we route it between layers to bring the traffic from the first layer (S(k)) to the last layer (D(k)). In particular, for each subproblem defined between layer f^k_s and layer f^k_s+1, we first determine the instantiation of function f^k_s+1 for flow k by solving a multidimensional knapsack problem, next solve a multi-source multi-sink Minimum Cost Flow (MCF) problem to route the traffic, and finally perform a local search to improve the obtained solution (for example, changing the instantiation node to see whether the cost can be reduced).
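To make the penalty argument in the proof above concrete, the following Python sketch implements the increasing-penalty outer loop it analyzes, assuming access to an oracle solve_penalized(σ) that returns a global minimizer of g_σ(w)=g(w)+σ P_ϵ(w); the quadratic toy problem at the end is purely illustrative and is not the network slicing model itself.

def penalty_outer_loop(solve_penalized, P, sigma0=1.0, growth=2.0, tol=1e-6, max_iter=20):
    # Minimize g_sigma(w) = g(w) + sigma * P(w) for a nondecreasing sequence
    # sigma_k -> +inf; by the monotonicity shown in the proof, P(w^k) decreases
    # and g(w^k) increases towards the optimal value of the constrained problem.
    sigma = sigma0
    w = solve_penalized(sigma)
    for _ in range(max_iter):
        if P(w) <= tol:  # the penalty (infeasibility) has essentially vanished
            break
        sigma *= growth
        w = solve_penalized(sigma)
    return w, sigma

# Illustrative toy: g(w) = w^2 with "feasible set" {w = 1}, so P(w) = (w - 1)^2.
P = lambda w: (w - 1.0) ** 2
solve = lambda s: s / (1.0 + s)  # closed-form argmin of w^2 + s*(w - 1)^2
w_star, sigma_final = penalty_outer_loop(solve, P)
# As sigma grows, w -> 1, P(w) -> 0, and g(w) increases monotonically, as in the proof.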
[email protected] of Physics, University of Wisconsin-Milwaukee, Milwaukee, WI 53201The paramagnetic nature of monolayer FeSe films is investigated via first-principles spin-spiral calculations. Although the (π,π) collinear antiferromagnetic (CL-AFM) mode – the prevailing spin fluctuation mode relevant to most iron-based superconductors – is lowest in energy, the spin-wave energy dispersion E(q) isfound to be extremely flat over a large region of the two-dimensional Brillouin zone centered at the checkerboard antiferromagnetic (CB-AFM) q=0 configuration, giving rise to a sharp peak in the spin density of states.Considering the paramagnetic state as an incoherent average over spin-spiral states, we find that resulting electronic band states around the Fermi level closely resemble the bands of the CB-AFM configuration – not the CL-AFM one – and thus providing a natural explanation of the angle-resolved photoemission observations.The presence of the SrTiO_3(001) substrate, both with and without interfacial oxygen vacancies, is found to reduce the energy difference between the CB-AFM and CL-AFM states and hence enhance the CB-AFM-like fluctuations. These results suggest that CB-AFM fluctuations play a more important role than previously thought.Magnetic fluctuations in single-layer FeSe M. Weinert December 30, 2023 ========================================== Single-layer FeSe films grown on SrTiO_3(001) (STO) have generated intense interest because of their reported high superconducting critical temperature T_C∼40–100 K.<cit.> Angle-resolved photoemission spectroscopy (ARPES) experiments<cit.> reveal a number of distinct features: As-prepared single-layer FeSe/STO films are not superconducting, but are insulating or semiconducting.After intensive vacuum annealing, however,the monolayer films are superconducting with a metallic Fermi surface.<cit.> Contrary to most iron-based superconductors, this Fermi surface is characterized by electron pockets centered at the Brillouin zone corner (M), with the zone center (Γ) states pushed below the Fermi level, posing a challenge for pairing theories relying on Fermi surface nesting between the Γ and M-centered pockets<cit.> through (π,π) collinear stripe antiferromagnetic (CL-AFM) spin fluctuations (c.f., Fig. <ref>(c) inset).Oxygen vacancies formed at the interface during the annealing process are believed to play an important role in the transition to the metallic phase and high T_C superconductivity. Furthermore that bilayer FeSe films are not superconducting<cit.> suggests thatthe FeSe-STO interface may play a key role in the superconducting mechanism.A number of first-principles density-functional theory (DFT) calculations have been reported for FeSe thin films.<cit.> Detailed comparisons ofthe experimental ARPES data<cit.> for single-layer FeSe/STOto the DFT calculations corresponding tonon-magnetic and various ordered antiferromagnetic configurations find that the calculated bands of only the checkerboard antiferromagnetic (CB-AFM) configuration (Fig. 
(Fig. <ref>(c) inset) are consistent with the experimental data: Both the Fermi surface and the band structure throughout the whole Brillouin zone (BZ) are reasonably reproduced, particularly when a small Hubbard U correction is included that pushes the Γ-centered hole-like band completely below the Fermi level. Other DFT studies have examined the effect of interfacial oxygen vacancies (O-vac):<cit.> oxygen vacancies are found to electron-dope the FeSe layer and to modify the band structure of the CB-AFM state, including the hole-like band around Γ, so that satisfactory agreement with the ARPES data is achieved without the addition of a phenomenological U.<cit.> This seemingly successful agreement between the ARPES data and the DFT-calculated bands for the CB-AFM configuration of monolayer FeSe/STO films, however, is apparently inconsistent with the fact that the CB-AFM configuration is not the calculated ground state. Rather, DFT total energy calculations consistently find that the CL-AFM state is lower in energy than the CB-AFM state for both monolayer FeSe and FeSe/STO, but the calculated CL-AFM bands do not resemble the ARPES ones. Although the energy difference between the CB- and CL-AFM states is found to be reduced by electron doping due to the substrate, the CL-AFM state remains more stable at reasonable doping levels.<cit.> Moreover, there is no experimental evidence for long-range magnetic order in either bulk or monolayer FeSe — recent inelastic neutron scattering experiments<cit.> have found that CB-AFM and CL-AFM correlations coexist even at low excitation energies in the bulk — suggesting that a quantum paramagnetic state with strong fluctuations between the CB-AFM and CL-AFM states might also exist in the monolayer system. Furthermore, it has been suggested that a nematic paramagnetic state resembling the observed bulk FeSe nematic state is a result of a near degeneracy between the CB-AFM and CL-AFM states.<cit.> Even in the case of coexisting CB-AFM and CL-AFM correlations, the question of why ARPES experiments measure the CB-AFM-like band structure remains. The apparent paramagnetic nature of FeSe implies (i) there is no net magnetic moment or long-range antiferromagnetic order, and (ii) the translational (and space group) symmetry is consistent with the crystallographic 4-atom unit cell. These conditions, however, do not imply or require that FeSe is non-magnetic. Equating the paramagnetic state with the non-magnetic state leads to inconsistencies in the calculated properties, such as requiring a large band renormalization to match the ARPES data. Instead, the Hund's-rule tendency of Fe to form local moments and the existence of magnetic order in related Fe-based superconducting materials strongly indicate that the inclusion of magnetic interactions and (local) moments is essential for a proper description of the ground-state properties. While ordered magnetic configurations can be calculated straightforwardly using standard DFT methods, treating the paramagnetic state – or disordered/random states in general – is more difficult: Spin-dynamics DFT calculations<cit.> can, in principle, directly simulate the time evolution of the spins but require both large supercells and long times, as well as being most applicable for configurations around an ordered state.
Coherent potential approximation-type approaches<cit.> effectively proceed by considering averaged interactions. Yet another approach is to model the paramagnetic state as an explicit configurational average of a (large) number of different "snapshots" to mimic the various short-range interactions and configurations that might occur. We address the properties of magnetism in FeSe in the spirit of this last approach and show that spin fluctuations centered around the CB-AFM configuration are unexpectedly important due to the large near degeneracy of spin-wave states with energies just above the CL-AFM state; our approach to the paramagnetic state provides the first natural resolution of the ARPES-DFT paradox. The different magnetic configurations are represented by spin-spiral states of spin-wave vector q, each of which by construction has no net magnetic moment but does have a local magnetic moment. The paramagnetic state is then represented as an incoherent sum of these states. These individual planar spin-spiral states are calculated by DFT non-collinear total energy calculations using the generalized Bloch theorem.<cit.> In Fig. <ref>(a), an example of a spin spiral is shown for a spin configuration close to the CB-AFM configuration, but with the Fe moments rotating slowly with spin-wave vector q: An Fe at R has its moment pointing in a direction given by the azimuth angle ϕ=q·R+φ_α, where φ_α is an atomic phase for the α atom in the (magnetic) unit cell. While standard DFT approaches using supercells can sample these "discrete" states and compare their energies, spin-spiral calculations for varying q provide information on the connections of one representative state to another in the context of the total energy E(q) and the electronic band structure ϵ_nk(q). The monolayer FeSe/STO system is modeled by a slab consisting of Se-Fe_2-Se/TiO_2/SrO as shown in Fig. <ref>(b). Although this is a minimal representation of the substrate STO, our results reasonably reproduce previous calculations that use thicker STO substrates.<cit.> The planar lattice constant a is set to the STO parameter ∼3.9 Å. The vertical heights of Fe and Se above the TiO_2 are relaxed in the CB-AFM state (as summarized in Table I of the Supplementary Information), and then held fixed during the spin-spiral calculations. To isolate the interfacial effects, we consider a free-standing FeSe monolayer using the same structural parameters. In addition, we also calculate spin spirals for bulk FeSe at the experimental lattice parameters<cit.> in order to compare to the inelastic neutron scattering experiment<cit.> and to address the possible correspondence with magnetic fluctuations in single-layer FeSe/STO. We first consider the freestanding FeSe monolayer film. Figure <ref>(c) shows the spin-spiral total energy E(q) with q along high-symmetry lines in the BZ. Since there are two Fe atoms per cell, several spin-spiral "modes" appear, characterized by the relative atomic phase φ=φ_2-φ_1. A mode with φ=π (solid line connecting the filled circles) stably exists throughout the BZ and forms a "band" of lowest energy. This band includes the CB-AFM state at Γ, the CL-AFM state at M, and the non-collinear orthogonal state<cit.> at X. Around the minimum at q=M, E(q) is mostly parabolic.
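As an aside, the spin-spiral parametrization used here is simple enough to state programmatically. The Python sketch below generates the in-plane moment directions ϕ = q·R + φ_α of a planar spiral; the 4×4 lattice, the unit moment length, and the chosen q are illustrative stand-ins, not the parameters of the actual DFT calculations.

import numpy as np

def spiral_moments(q, lattice_vectors, atomic_phases, m=1.0):
    # The moment of atom alpha in the cell at lattice vector R lies in the
    # plane and points along the azimuth phi = q.R + phi_alpha.
    moments = []
    for R in lattice_vectors:
        for phi_alpha in atomic_phases:
            phi = float(np.dot(q, R)) + phi_alpha
            moments.append(m * np.array([np.cos(phi), np.sin(phi)]))
    return np.asarray(moments)

# phi = pi mode (two Fe per cell with relative phase pi): q = 0 gives the
# CB-AFM configuration, and a small q slowly winds the moments away from it.
a = 1.0
cells = [np.array([i * a, j * a]) for i in range(4) for j in range(4)]
q_small = np.array([0.1 * 2.0 * np.pi / a, 0.0])
pi_mode = spiral_moments(q_small, cells, atomic_phases=[0.0, np.pi])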
In contrast, E(q) around Γ (the CB-AFM state) is markedly flat, i.e., there is almost no energy cost to slightly rotate the Fe moments from the CB-AFM configuration, and there are many spin-wave states with (almost) the same energy as that of the CB-AFM one. The dashed line represents the φ=0 mode. This mode is degenerate with the π mode at M on the zone boundary, but splits away and corresponds to a ferromagnetic configuration at Γ. This mode, however, is highly unstable in the sense that the Fe moments collapse immediately as q goes away from the zone boundary (cf. Fig. 2 of the Supplementary Information), and a self-consistent ferromagnetic configuration does not exist. The horizontal dotted line represents the energy of the non-magnetic (NM) state. When E(q) approaches or exceeds this energy, the spiral calculation of the φ=0 mode either converges to the NM state or never converges to a self-consistent solution. We also find π/2 and 3π/2 modes (represented by filled squares connected by the red line) that exist only at the zone boundary and are degenerate in energy. At M, these modes give the non-collinear vortex state,<cit.> and at X the bicollinear stripe state. The energy splitting of the half-integer and integer π modes is very large, amounting to ∼64 meV/Fe at M, strongly indicating that non-Heisenberg interactions are important.<cit.> In particular, in a simple Heisenberg model, the integer and half-integer π modes would be degenerate at M. The inclusion of the previously considered fourth-order bi-quadratic term,<cit.> (S_i·S_j)^2, however, is not sufficient; from extensive fits of E(q), more general 4-spin terms<cit.> are essential to account for the DFT results. This non-Heisenberg behavior has implications for the stability of possible paramagnetic states, including the nematic paramagnetic phase suggested for bulk FeSe.<cit.> A description of the paramagnetic state in terms of the spin-spiral states requires knowledge of E(q) not only along the high-symmetry directions, but throughout the BZ. The energy landscape of the π mode for the free-standing FeSe monolayer is shown in Figs. <ref>(a)-(c). The region around Γ is found to be exceedingly flat: for the Γ-centered circle of radius 0.2 (2π/a) shown in Fig. <ref>(b), the energies are in a window of -3 to 1 meV relative to the CB-AFM energy E_CB=56 meV, and the flat (yellow) region covers a substantially wide area of the 2D BZ, resulting in a sharp peak in the density of states of the π mode, Fig. <ref>(c). At high temperatures, entropy considerations suggest that many spin-wave states around Γ will be excited, and may be frozen in as the temperature is lowered. The question that then arises is how the electronic structure varies for these q states near Γ compared to the CB-AFM state. To obtain the electronic energy bands in a spin-wave q state, it is necessary to carry out a k-projection procedure<cit.> (see "Method" for details), leading to an approximate "spectral weight" A_q(k, ε) rather than a sharp band dispersion ε_nk. The spectral weight of a paramagnetic state approximated by a q-average of spin-spiral states is shown in Fig. <ref>(d) together with the pure CB-AFM band structure; the q-resolved evolution of the bands is given in the Supplementary Information. The bands in the pure CB-AFM state (upper panel) are consistent with previous calculations: there is an electron pocket at M and a hole-like band around Γ, which touches the Fermi level.
(This band around Γ is pushed below the Fermi level, in agreement with experiment, when oxygen vacancies in the STO substrate are included.<cit.>) The q-averaged spectral weight, averaged over the Γ-centered circle given in (b), shows features very similar to the pure CB-AFM bands. Although the CB-AFM spin configuration itself is easily deformed by spin-wave formation, the CB-AFM-like band features are very robust. These results thus provide a natural explanation for the ARPES observation that the electronic bands look like those of the CB-AFM configuration: entropy effects lead to a paramagnetic state built up from spin-wave states centered around Γ (extending over a large part of q-space), and these approximately degenerate q states have similar electronic bands, such that the spectral weight looks similar to the CB-AFM one. The objection to this scenario is that the CL-AFM phase is still lower in energy, and so the predicted electronic bands should also show CL-AFM features, in contrast to the ARPES data. The band structure of the pure CL-AFM state (the top panel of Fig. <ref>(e)) has hole pockets at M and Γ, in addition to two bands crossing the Fermi level along Γ–M, in good agreement with previous calculations. Taking a q-average over a circle of radius 0.2 (2π/a) centered at the BZ corner, the spectral weight is drastically changed from the pure CL-AFM bands, as shown in the bottom panel of Fig. <ref>(e). At k=M, the convex (hole-like) and concave (electron-like) bands repel each other, with the result that states around M are pushed far away from the Fermi level. Unlike the CB-AFM case, the electronic bands of spin-spiral states with q near M are very sensitive to small changes in the magnetic configuration; thus, the superposition of spin-wave states to form the paramagnetic phase results in electronic bands that do not resemble the ideal CL-AFM ones. Moreover, even if there are contributions from the φ=π states throughout the BZ, the electronic bands around M close to the Fermi level would be dominated by the CB-AFM-like bands, since those are in the correct energy range and have large spectral weight. (Around Γ near the Fermi level, the bands are somewhat similar. There are, however, more significant differences in the electronic bands that distinguish the different magnetic configurations at energies less than about -0.4 eV.) Previous DFT calculations have shown that oxygen vacancies at the STO interface not only provide electron doping to the FeSe layer, but also modify the electronic structure in the CB-AFM state significantly. Self-developed electric fields cause spin splittings of the bands and open a gap for the 3d_zx/zy electron-like bands at M.<cit.> These effects have been attributed to hybridization with the Se 4p_z state deformed by the electric field.<cit.> The 3d_z^2 hole-like band around Γ, which touches the Fermi level if a Coulomb correction U is not added,<cit.> has its energy lowered relative to the 3d_zx/zy band at M and its band width reduced, leading to a disappearance of the Γ-centered hole pocket even without relying on U.<cit.> It has also been reported that oxygen vacancies modify the magnetic interaction and reduce the CB-AFM energy E_CB relative to the CL-AFM one.<cit.> Because of the possible importance of oxygen vacancies to the properties of FeSe/STO, we have also considered their effect on E(q). Here we use the conventional virtual crystal approximation without reducing the crystal symmetry.
A single oxygen vacancy on the surface or at the interface provides an excess charge of nominally ∼-2e. A vacancy concentration α (0≤α≤1) is simulated by replacing the atomic number of the interface oxygen with Z=8+2α. In Fig. <ref>, E(q) for the π modes of (i) free-standing FeSe, and STO-supported FeSe (ii) with a perfect interface (Z=8) and (iii) with oxygen vacancies (Z=8.1) are compared. Although the flatness of E(q) around Γ remains in all cases, the CB-AFM energy E_CB is significantly reduced by the presence of the STO substrate from 66 to 40 meV, and further reduced to 24 meV by the introduction of oxygen vacancies. However, the overall energy landscape still has the same shape as in Fig. <ref>(a)-(c), and the variation of the electronic bands with q will be similar. Thus, the presence of the STO substrate – and oxygen vacancies – will increase the propensity for a CB-AFM-like electronic structure. For bulk FeSe we obtain a similar E(q) landscape (Supplementary Information): again the CL-AFM state is lower in energy than the CB-AFM state, E(q) is flat around q=Γ, and the result is consistent with the experimentally observed coexistence of CB- and CL-AFM correlations in the bulk.<cit.> The similarity of E(q) for the bulk and films strongly supports the idea that the same coexistence is inherent in FeSe/STO as well. In addition, the increased flatness of the bulk E(q) around the CL-AFM configuration (q=M) implies that CL-AFM correlations are more important in the bulk than in the films, while CB-AFM correlations are stronger in monolayer FeSe/STO. These quantitative differences in the bulk and film spin-spiral dispersions may be relevant to the difference in superconducting T_C and to the appearance of nematic order in the 3-D bulk. The calculation of spin-spiral states provides a number of insights into the magnetic and electronic properties of the monolayer FeSe, as well as providing an approach for consistently including (local) magnetic effects in the treatment of the paramagnetic state. Although the CB-AFM is not the calculated lowest-energy ordered magnetic configuration, E(q) is extremely flat around the Γ point, resulting in a high density of states and entropy. These almost degenerate spin-wave states have electronic bands similar to the pure CB-AFM state (q=0). The electronic bands for states around the low-energy CL-AFM configurations, on the other hand, are very sensitive to spin fluctuations, with the electronic states near M pushed away from the Fermi level. Thus, for the paramagnetic phase described in terms of spin-wave states, the resulting electronic structure around the Fermi level is expected to be dominated by CB-AFM-like features, as observed in ARPES experiments. The STO substrate, including oxygen vacancies, acts to reduce the energy of the CB-AFM (q=0) state relative to the CL-AFM (q=(π,π)) one, thus making both CB-AFM-like magnetic correlations and the corresponding electronic structure more favorable.
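To illustrate the incoherent averaging invoked above, the following Python sketch accumulates Gaussian-broadened spectral weight over spiral wave vectors sampled on the Γ-centered circle of radius 0.2 (2π/a); the one-band toy dispersion passed in at the end is a stand-in for the k-projected DFT bands, which are of course not reproduced here.

import numpy as np

def q_averaged_spectral_weight(bands_at_q, qs, k_path, energies, broadening=0.02):
    # A(k, E) = (1/Nq) sum_q sum_n delta(E - eps_n(k; q)), with the delta
    # function replaced by a normalized Gaussian of width `broadening`.
    A = np.zeros((len(k_path), len(energies)))
    for q in qs:
        for ik, k in enumerate(k_path):
            for eps in bands_at_q(q, k):
                A[ik] += np.exp(-0.5 * ((energies - eps) / broadening) ** 2)
    return A / (len(qs) * broadening * np.sqrt(2.0 * np.pi))

# Sample q on the Gamma-centered circle of radius 0.2 (2*pi/a), with a = 1.
radius, npts = 0.2 * 2.0 * np.pi, 32
qs = [radius * np.array([np.cos(t), np.sin(t)])
      for t in np.linspace(0.0, 2.0 * np.pi, npts, endpoint=False)]
k_path = np.linspace(0.0, np.pi, 64)
energies = np.linspace(-1.5, 1.5, 200)
toy_band = lambda q, k: [np.cos(k) + 0.1 * float(np.linalg.norm(q))]
A = q_averaged_spectral_weight(toy_band, qs, k_path, energies)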
Although we do not propose any mechanism for the superconductivity, the magnetic fluctuations intrinsic to the paramagnetic state, enhanced by proximity to the STO substrate, and the resulting effect on the description of the electronic structure in the normal state are essential aspects that will need to be addressed in any comprehensive theory of superconductivity in the FeSe system; for example, a recent effective k·p-based theory predicts<cit.> that CB-AFM-type fluctuations produce a fully gapped nodeless d-wave superconducting state on the M-centered electron pockets, naturally accounting for the gap anisotropy observed in single-layer FeSe films.<cit.> § METHOD All DFT calculations were carried out using the full-potential linearized augmented plane wave method<cit.> as implemented in the HiLAPW code, including noncollinear magnetism<cit.> and generalized-Bloch-theorem spin-spiral magnetism. The orientation of the spin density is fully relaxed throughout the entire space.<cit.> The muffin-tin sphere radius was set to 0.8 Å for oxygen and 1.1 Å for the other atoms, and the wave function and density/potential cutoffs were 16 and 200 Ry, respectively. The Perdew, Burke, and Ernzerhof form of the Generalized Gradient Approximation was used for exchange correlation. The Brillouin zone was sampled with 20×20×1 and 20×20×15 k-point meshes for the film and bulk calculations, respectively. The density of two-dimensional spin-wave states (Fig. <ref>(c)) was calculated by the triangle method.<cit.> In the spin-spiral calculations, the cell-periodic part ρ̃ of the spin off-diagonal density matrix ρ_↑↓ for wave vector q has the form ρ̃=e^iq·r ρ_↑↓ in the interstitial region, and in the sphere region ρ̃=e^iq·τ ρ_↑↓ with atomic position τ. To obtain the band dispersion for a spin-wave state q, a k-projection in the same spirit as the unfolding technique for supercell calculations is needed.<cit.> The wave function of a band ε_nk(q) is a two-component spinor, where each spin component has a different Bloch phase,

ψ_nk^q = ( e^i(k-q/2)·r u_nk^q↑ , e^i(k+q/2)·r u_nk^q↓ )^T.

Thus this band must be projected onto two different k vectors, k^± = k∓q/2, to obtain a band structure consistent with standard (supercell) calculations and with the momentum-resolved measurements of ARPES experiments. The projection weights for k^± are w^+=⟨u^↑_k | u^↑_k⟩ and w^-=⟨u^↓_k | u^↓_k⟩. They can be found from the expectation value of σ_z by w^± = (1 ± ⟨ψ_k | σ_z | ψ_k⟩)/2, since ⟨ψ_k | σ_z | ψ_k⟩ = w^+ - w^- and the orthonormality condition gives w^+ + w^- = 1. (The band index and q have been omitted for simplicity.) The integrals implicit in the brackets are performed over the primitive cell. Figure <ref>(f) gives an example of how to obtain the dispersion along the Γ–M line: The band energies are calculated along two k-rods shifted from the Γ–M segment by ±q/2, and then projected onto Γ–M. To obtain the q-averaged spectral weights shown in Figs. <ref>(d) and (e), the two paths shown in (g) are averaged, since they become inequivalent for arbitrary q.
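As a small illustration of the k-projection weights defined above, the sketch below computes w^± for a normalized two-component spinor expanded in an arbitrary orthonormal basis; the random test spinor is, of course, only a placeholder for actual wave-function coefficients.

import numpy as np

def k_projection_weights(u_up, u_dn):
    # For a normalized spinor, w+ = <u_up|u_up> and w- = <u_dn|u_dn> are the
    # spectral weights at k+ = k - q/2 and k- = k + q/2, with w+ + w- = 1.
    w_plus = np.vdot(u_up, u_up).real
    w_minus = np.vdot(u_dn, u_dn).real
    norm = w_plus + w_minus
    return w_plus / norm, w_minus / norm

# Equivalently via the expectation value of sigma_z: w_pm = (1 ± <psi|sigma_z|psi>)/2.
rng = np.random.default_rng(0)
psi = rng.standard_normal(8) + 1j * rng.standard_normal(8)
psi /= np.linalg.norm(psi)
u_up, u_dn = psi[:4], psi[4:]
w_plus, w_minus = k_projection_weights(u_up, u_dn)
sz = (np.vdot(u_up, u_up) - np.vdot(u_dn, u_dn)).real
assert np.isclose(w_plus, (1.0 + sz) / 2.0)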
§ ACKNOWLEDGEMENTS This work was supported by the U.S. National Science Foundation, Division of Materials Research, DMREF-1335215. § AUTHOR CONTRIBUTIONS TS and MW conceived the project and wrote the manuscript, TS carried out the calculations, and all authors contributed to the interpretation of the results and commented on the manuscript. § COMPETING FINANCIAL INTERESTS The authors declare no competing financial interests. § REFERENCES
[1] Q.-Y. Wang et al., "Interface-Induced High-Temperature Superconductivity in Single Unit-Cell FeSe Films on SrTiO_3," Chin. Phys. Lett. 29, 037402 (2012).
[2] W.-H. Zhang et al., "Direct Observation of High-Temperature Superconductivity in One-Unit-Cell FeSe Films," Chin. Phys. Lett. 31, 017401 (2014).
[3] J.-F. Ge et al., "Superconductivity above 100 K in single-layer FeSe films on doped SrTiO_3," Nature Mater. 14, 285–289 (2015).
[4] D. Liu et al., "Electronic origin of high-temperature superconductivity in single-layer FeSe superconductor," Nature Commun. 3, 931 (2012).
[5] S. Tan et al., "Interface-induced superconductivity and strain-dependent spin density waves in FeSe/SrTiO_3 thin films," Nature Mater. 12, 634–640 (2013).
[6] S. He et al., "Phase diagram and electronic indication of high-temperature superconductivity at 65 K in single-layer FeSe films," Nature Mater. 12, 605–610 (2013).
[7] X. Liu et al., "Dichotomy of the electronic structure and superconductivity between single-layer and double-layer FeSe/SrTiO_3 films," Nature Commun. 5, 5047 (2014).
[8] J. J. Lee et al., "Interfacial mode coupling as the origin of the enhancement of T_c in FeSe films on SrTiO_3," Nature 515, 245–248 (2014).
[9] L. Zhao et al., "Common electronic origin of superconductivity in (Li,Fe)OHFeSe bulk superconductor and single-layer FeSe/SrTiO_3 films," Nature Commun. 7, 10608 (2016).
[10] Z. F. Wang et al., "Topological edge states in a high-temperature superconductor FeSe/SrTiO_3(001) film," Nature Mater. 15, 968–973 (2016).
[11] I. I. Mazin, D. J. Singh, M. D. Johannes, and M. H. Du, "Unconventional Superconductivity with a Sign Reversal in the Order Parameter of LaFeAsO_1-xF_x," Phys. Rev. Lett. 101, 057003 (2008).
[12] K. Kuroki et al., "Unconventional Pairing Originating from the Disconnected Fermi Surfaces of Superconducting LaFeAsO_1-xF_x," Phys. Rev. Lett. 101, 087004 (2008).
[13] J. Bang et al., "Atomic and electronic structures of single-layer FeSe on SrTiO_3(001): The role of oxygen deficiency," Phys. Rev. B 87, 220503 (2013).
[14] H.-Y. Cao, S. Tan, H. Xiang, D. L. Feng, and X.-G. Gong, "Interfacial effects on the spin density wave in FeSe/SrTiO_3 thin films," Phys. Rev. B 89, 014501 (2014).
[15] K. V. Shanavas and D. J. Singh, "Doping SrTiO_3 supported FeSe by excess atoms and oxygen vacancies," Phys. Rev. B 92, 035144 (2015).
[16] Y. Xie, H.-Y. Cao, Y. Zhou, S. Chen, H. Xiang, and X.-G. Gong, "Oxygen Vacancy Induced Flat Phonon Mode at FeSe/SrTiO_3 Interface," Sci. Rep. 5, 10011 (2015).
[17] M. X. Chen, Z. Ge, Y. Y. Li, D. F. Agterberg, L. Li, and M. Weinert, "Effects of interface oxygen vacancies on electronic bands of FeSe/SrTiO_3(001)," Phys. Rev. B 94, 245139 (2016).
[18] K. Liu, Z.-Y. Lu, and T. Xiang, "Atomic and electronic structures of FeSe monolayer and bilayer thin films on SrTiO_3(001): First-principles study," Phys. Rev. B 85, 235123 (2012).
[19] T. Bazhirov and M. L. Cohen, "Effects of charge doping and constrained magnetization on the electronic structure of an FeSe monolayer," J. Phys.: Condens. Matter 25, 105506 (2013).
[20] F. Zheng, Z. Wang, W. Kang, and P. Zhang, "Antiferromagnetic FeSe monolayer on SrTiO_3: The charge doping and electric field effects," Sci. Rep. 3, 2213 (2013).
[21] H.-Y. Cao, S. Chen, H. Xiang, and X.-G. Gong, "Antiferromagnetic ground state with pair-checkerboard order in FeSe," Phys. Rev. B 91, 020504 (2015).
[22] K. Liu, B.-J. Zhang, and Z.-Y. Lu, "First-principles study of magnetic frustration in FeSe epitaxial films on SrTiO_3," Phys. Rev. B 91, 045107 (2015).
[23] F. Zheng, L.-L. Wang, Q.-K. Xue, and P. Zhang, "Band structure and charge doping effects of the potassium-adsorbed FeSe/SrTiO_3 system," Phys. Rev. B 93, 075428 (2016).
[24] Q. Wang et al., "Magnetic ground state of FeSe," Nature Commun. 7, 12182 (2016).
[25] F. Wang, S. Kivelson, and D.-H. Lee, "Nematicity and quantum paramagnetism in FeSe," Nature Phys. 11, 959–963 (2015).
[26] V. P. Antropov, M. I. Katsnelson, B. N. Harmon, M. van Schilfgaarde, and D. Kusnezov, "Spin dynamics in magnets: Equation of motion and finite temperature effects," Phys. Rev. B 54, 1019–1035 (1996).
[27] G. M. Stocks et al., "Towards a constrained local moment model for first principles spin dynamics," Philos. Mag. B 78, 665–673 (1998).
[28] J. Kanamori, "The Coherent Potential Approximation and Ferromagnetism," J. Physique Coll. 35 (C4), 131–139 (1974).
[29] B. Kozarzewski, J. Nowak, and J. Zieliński, "The Coherent Potential Approximation in the Theory of Paramagnetic Alloys," Phys. Status Solidi (b) 74, 437–442 (1976).
[30] L. M. Sandratskii, "Symmetry analysis of electronic states for crystals with spiral magnetic order. I. General properties," J. Phys.: Condens. Matter 3, 8565 (1991).
[31] J. K. Glasbrenner, I. I. Mazin, H. O. Jeschke, P. J. Hirschfeld, R. M. Fernandes, and R. Valenti, "Effect of magnetic frustration on nematicity and superconductivity in iron chalcogenides," Nature Phys. 11, 953–958 (2015).
[32] K. Pasrija and S. Kumar, "High-temperature noncollinear magnetism in a classical bilinear-biquadratic Heisenberg model," Phys. Rev. B 88, 144418 (2013).
[33] J. O'Halloran, D. F. Agterberg, M. X. Chen, and M. Weinert, "Stabilizing the spin vortex crystal phase in two-dimensional iron-based superconductors," Phys. Rev. B 95, 075104 (2017).
[34] M. Takahashi, "Half-filled Hubbard model at low temperature," J. Phys. C: Solid State Phys. 10, 1289 (1977).
[35] T. Shishidou, D. F. Agterberg, and M. Weinert, "Non-Heisenberg four-spin interactions in FeSe" (2017), unpublished.
[36] Y. Qi, S. H. Rhim, G. F. Sun, M. Weinert, and L. Li, "Epitaxial Graphene on SiC(0001): More than Just Honeycombs," Phys. Rev. Lett. 105, 085502 (2010).
[37] J. W. Davenport, R. E. Watson, and M. Weinert, "Linear augmented-Slater-type-orbital method for electronic-structure calculations. V. Spin-orbit splitting in Cu_3Au," Phys. Rev. B 37, 9985–9992 (1988).
[38] D. F. Agterberg, T. Shishidou, P. M. R. Brydon, J. O'Halloran, and M. Weinert, "Resilient nodeless d-wave superconductivity in monolayer FeSe" (2017), arXiv:1706.01978v2.
[39] Y. Zhang et al., "Superconducting Gap Anisotropy in Monolayer FeSe Thin Film," Phys. Rev. Lett. 117, 117001 (2016).
[40] M. Weinert, G. Schneider, R. Podloucky, and J. Redinger, "FLAPW: Applications and Implementations," J. Phys.: Condens. Matter 21, 084201 (2009).
[41] J. M. Ok et al., "Quantum oscillations of the metallic triangular-lattice antiferromagnet PdCrO_2," Phys. Rev. Lett. 111, 176405 (2013).
[42] K. Nakamura, T. Ito, A. J. Freeman, L. Zhong, and J. Fernandez-de Castro, "Intra-atomic noncollinear magnetism and the magnetic structures of antiferromagnetic FeMn," Phys. Rev. B 67, 014405 (2003).
[43] J.-H. Lee, T. Shishidou, and A. J. Freeman, "Improved triangle method for two-dimensional Brillouin zone integrations to determine physical properties," Phys. Rev. B 66, 233102 (2002).
[44] M. X. Chen and M. Weinert, "Revealing the Substrate Origin of the Linear Dispersion of Silicene/Ag(111)," Nano Lett. 14, 5189–5193 (2014).
Stony Brook University, Stony Brook NY 08544, USA
[email protected]

Geometric Distances of Networks on the Tensor Manifold

Bipul Islam^1, Ji Liu^1, Romeil Sandhu^1

Received: December 30, 2023 / Accepted: December 30, 2023
============================================================

At the core of understanding dynamical systems is the ability to maintain and control the system's behavior, which includes notions of robustness, heterogeneity, and/or regime-shift detection. Recently, to explore such functional properties, a convenient representation has been to model such dynamical systems as a weighted graph consisting of a finite, but very large, number of interacting agents. This said, there exists very limited relevant statistical theory that is able to cope with real-life data, i.e., how does one perform analysis and/or statistics over a "family" of networks as opposed to a specific network or network-to-network variation. Here, we are interested in the analysis of network families whereby each network represents a "point" on an underlying statistical manifold. To do so, we explore the Riemannian structure of the tensor manifold developed by Pennec, previously applied to Diffusion Tensor Imaging (DTI), towards the problem of network analysis. In particular, while this note focuses on Pennec's definition of "geodesics" amongst a family of networks, we show how it lays the foundation for future work in developing measures of network robustness for regime-shift detection. We conclude with experiments highlighting the proposed distance on synthetic networks and an application towards biological (stem-cell) systems. § INTRODUCTION Notions of robustness, heterogeneity, and phase changes are ubiquitous concepts employed in understanding complex dynamical systems with a variety of applications <cit.>. This is seen in Figure <ref>A. While recent advancements in network analysis have arisen through the usage of spectral techniques <cit.>, expander graphs <cit.>, percolation theory <cit.>, information theoretic approaches <cit.>, scale-free networks <cit.>, and a myriad of graph measures reliant on the underlying discrete graph space (e.g., degree distribution, shortest path, centrality <cit.>), such measures rarely incorporate the underlying dynamics in their construction. To this end, we have previously developed fundamental relationships between network functionality <cit.> and certain topological and geometric properties of the corresponding graph <cit.>. In particular, recent work of ours has shown that the geometric notion of curvature (a measure of "flatness") is positively correlated with a system's robustness, or its ability to adapt to dynamic changes <cit.>. This can be seen in Figure <ref>B. Here, we are interested in expounding upon previous work <cit.> in order to exploit distances (and eventually statistics, robustness) over a family of networks.
In particular, as vector-valued networks are becoming increasingly important, we believe such a framework may be of interest to the broader complex network community. To do so, we are interested in studying the geometry of networks as a dynamical system that evolves over time, in which each given network or observation may represent a "point" (encoded by some matrix-based positive definite model) on an underlying Riemannian manifold. From this, we are interested in applying the framework of Pennec <cit.> to define geodesics such that distances between "points" (networks) in a given family can be measured, as seen in Figure <ref>. This said, we note that significant related work on employing statistical analysis for Riemannian manifolds has focused on directional statistics, statistics on shape spaces, as well as specific manifolds - we refer the reader to <cit.> and references therein. While these methods are mostly extrinsic, whereby one embeds the manifold in an ambient Euclidean space, we are interested in studying intrinsic Riemannian metrics that can be naturally applied to the network setting without restrictions on the network topology. As such, the remainder of the present paper is outlined as follows: In the next section, we provide preliminaries regarding the framework by Pennec <cit.> primarily used for DTI imaging. From this, we then show how this framework is natural for networks, with particular remarks on how we can begin constructing robustness over network families. Section 4 presents preliminary results on synthetic and biological data and Section 5 concludes with future work. § PRELIMINARIES This section focuses on developing a measure of distance amongst networks in the context of developing measures of network robustness (future work). §.§ Geodesics on the Network Manifold Let us consider a dynamical system that evolves over time and whose information is encapsulated by a positive definite matrix (tensor). In this regard, it is well-known that the space of tensors is not a vector space, but instead forms a Riemannian manifold M <cit.>. In particular, we will leverage the theory of symmetric spaces, which has been extensively studied since the seminal work of Nomizu <cit.>; a comprehensive work on tensor manifolds can be found in <cit.>. Riemannian manifolds are endowed with a metric that smoothly assigns to each point ρ∈ M an inner product on 𝒯_ρ, the tangent space to M at ρ. More formally, let us denote Υ and Υ^+ as the set of all Hermitian matrices and the cone of positive-definite matrices, respectively. We consider the following:

Λ_+ := {ρ∈Υ^+ | tr(ρ)=1},
𝒯_ρ := {χ∈Υ | tr(χ)=0},

where Λ_+ represents our space of networks and 𝒯_ρ is the corresponding tangent space. From this, we let the observations of our dynamic system be recorded as {ρ_0,ρ_1,ρ_2,...,ρ_t}, where each "point" (network) ρ_i∈Υ^+ is given by a tensor. As seen in Figure <ref>, we need a sensible notion of distance that is dependent on the underlying geometry.
To do so, one defines the exponential map as the function that maps each vector ρ_0ρ_1 ∈ 𝒯_ρ_0 (the tangent vector pointing from ρ_0 toward ρ_1) to the point ρ_1∈Λ_+ on the manifold M that is reached after unit time by the geodesic starting at ρ_0 with this tangent vector. The exponential map, herein denoted exp_ρ: 𝒯_ρ↦ M at a point ρ, is defined on the whole tangent space and, for the particular space of tensors, this map is also one-to-one. Moreover, one can also define a unique inverse map, denoted the logarithm map Log_ρ_0: M ↦𝒯_ρ_0, that maps a point ρ_1∈ M to the unique tangent vector χ∈𝒯_ρ_0 at ρ_0 whose initial velocity is that of the unique geodesic γ with γ(0)=ρ_0 and γ(1)=ρ_1. Thus, the problem of computing distances amongst networks (encapsulated by a positive definite matrix) amounts to being able to properly compute geodesics on this space, for which we must now consider a family of curves γ(t) on the manifold and the speed vector γ̇(t). Then, we seek the minimum distance (length) between any two points, e.g., ρ_0 and ρ_1, as

ℒ_ρ_0^ρ_1 = min_γ ∫_ρ_0^ρ_1 ‖γ̇(t)‖_γ(t) dt = min_γ ∫_ρ_0^ρ_1 √(⟨γ̇(t),γ̇(t)⟩_γ(t)) dt,

where γ(0)=ρ_0 and γ(1)=ρ_1. For this note, we relax the restriction to trace-one matrices for the sake of clarity (without obfuscating the conceptual motivation). §.§ Revisiting the Riemannian Framework for Tensors As highlighted, we seek to leverage Pennec's framework <cit.> towards the network setting. While this framework has been popularized most recently in the field of DTI <cit.>, the mechanics can be used to analyze network functionality, with this note focusing on the computation of distances amongst such networks. In particular, the authors utilize a result from differential geometry regarding geodesics for invariant metrics on affine symmetric spaces <cit.>, together with the Sylvester equation from control, to propose an affine invariant metric whereby the unique geodesic curve from a point ρ (and at the origin) on our tensor (network) manifold with a tangent vector χ can be written as

γ(t)_(ρ,χ) = ρ^1/2 exp(t ρ^-1/2 χ ρ^-1/2) ρ^1/2  and  γ(t)_(I,χ) = exp(tχ),

where exp(χ)=∑_k=0^+∞ χ^k/k! is the usual matrix exponential. From this, writing χ = Q DIAG(χ_i) Q^T in its eigendecomposition, we have

dγ(t)/dt = d/dt exp(tχ) = d/dt ( Q DIAG(e^tχ_i) Q^T ) = Q DIAG(χ_i e^tχ_i) Q^T = exp(tχ)^1/2 χ exp(tχ)^1/2 = γ(t)_(I,χ)^1/2 ⋆ χ,

where A⋆B = ABA^T. Noting the geodesic γ(t)_(ρ,χ) between any two points ρ_0, ρ_1∈ M, where ρ_1=exp_ρ_0(χ), at t=1, the logarithm map can be seen as follows:

γ(1)_(ρ_0,χ) = ρ_1 = ρ_0^1/2 exp(ρ_0^-1/2 χ ρ_0^-1/2) ρ_0^1/2
⟹ log(ρ_0^-1/2 ρ_1 ρ_0^-1/2) = ρ_0^-1/2 χ ρ_0^-1/2
⟹ χ = ρ_0^1/2 log(ρ_0^-1/2 ρ_1 ρ_0^-1/2) ρ_0^1/2 = Log_ρ_0(ρ_1).

From this, the distance between tensors can finally be computed as

dist^2(ρ_0,ρ_1) = ‖Log_ρ_0(ρ_1)‖^2_ρ_0 = ‖log(ρ_0^-1/2 ρ_1 ρ_0^-1/2)‖^2_2.

We now have a natural distance that measures the length of the geodesic curve connecting any two "points" on our tensor (network) manifold. This is particularly important as it allows for the characterization of functionality over a family of networks, as we will highlight in the next sections. §.§ Mean, Variance, and Gaussian Distributions Given the above geodesic distance, one can formulate the necessary statistics and, most importantly, form distributions. Specifically, with the above affine invariant metric, the unique mean ρ̅ of N "points" {ρ_0,ρ_1,ρ_2,...,ρ_N} on the tensor manifold can be computed via gradient descent:

ρ̅_t+1 = ρ̅_t^1/2 exp( (1/N) ∑_i=1^N log(ρ̅_t^-1/2 ρ_i ρ̅_t^-1/2) ) ρ̅_t^1/2.

It has been noted that, although no closed-form expression for the mean exists, the above converges towards a steady-state solution in a few iterations.
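A minimal numerical sketch of the above distance and mean, assuming SciPy's matrix functions and symmetric positive definite inputs (taking real parts simply guards against numerical noise from sqrtm/logm; the iteration counts are arbitrary choices, not prescriptions from the text):

import numpy as np
from scipy.linalg import expm, logm, sqrtm, inv

def riemannian_distance(rho0, rho1):
    # dist(rho0, rho1) = || log(rho0^{-1/2} rho1 rho0^{-1/2}) ||_F
    s_inv = inv(np.real(sqrtm(rho0)))
    return np.linalg.norm(np.real(logm(s_inv @ rho1 @ s_inv)), 'fro')

def karcher_mean(rhos, n_iter=20, tol=1e-10):
    # Fixed-point iteration of the gradient-descent update stated above.
    mean = np.mean(rhos, axis=0)  # Euclidean mean as a starting guess
    for _ in range(n_iter):
        s = np.real(sqrtm(mean))
        s_inv = inv(s)
        step = sum(np.real(logm(s_inv @ r @ s_inv)) for r in rhos) / len(rhos)
        new_mean = s @ np.real(expm(step)) @ s
        if np.linalg.norm(new_mean - mean, 'fro') < tol:
            return new_mean
        mean = new_mean
    return mean

# Example: two SPD "network points".
rho0, rho1 = np.diag([1.0, 2.0, 3.0]), np.diag([2.0, 2.0, 1.0])
d = riemannian_distance(rho0, rho1)
rho_bar = karcher_mean([rho0, rho1])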
As in the traditional Euclidean setting, and similar to the Karcher or Fréchet mean, the above mean is constructed on the premise that we seek a "point" that minimizes the sum of squared distances, i.e., ρ̅ = arg min_ρ ∑_i=1^N dist^2(ρ,ρ_i). In a similar fashion, the variance can be computed as follows:

Σ = (1/(N-1)) ∑_i=1^N Vec_ρ̅(Log_ρ̅(ρ_i)) Vec_ρ̅(Log_ρ̅(ρ_i))^T,

where Vec_ρ̅(ρ_i)=Vec_I(log(ρ̅⋆ρ_i)) and where the operation Vec_I(A) is a vectorized projection of the independent coefficients of our tensor in the above formulation, i.e., Vec_I(A) = (a_1,1, √2 a_1,2, ..., √2 a_1,n, ..., √2 a_n-1,n, a_n,n). From this, one obtains a generalized form of the Gaussian distribution on the tensor manifold:

N_(ρ̅,Σ)(ρ_i) = k exp( -(1/2) Vec_ρ̅(Log_ρ̅(ρ_i))^T Σ^-1 Vec_ρ̅(Log_ρ̅(ρ_i)) ).

With the above, we can now begin to stylize how such a framework can be utilized towards characterizing network families. § APPLICATION TO NETWORK CHARACTERIZATION This section presents how the above framework can be used to generalize notions of robustness to a family of networks. While the experiments focus on the fitness of the "distance", we provide further information to put this note in context. §.§ Network Robustness From the Scalar to the Matrix Setting As highlighted, our main underlying interest is to understand functionality over network families. Recently, it has been shown that Ricci curvature (herein denoted "Ric") from geometry is intrinsically connected to Boltzmann entropy H <cit.> as well as to the functional robustness R of networks, or the ability to maintain functionality in the presence of random fluctuations, i.e., Δ Ric × Δ R ≥ 0 <cit.>. Given that the geometry (curvature) of networks seemingly plays an integral role in understanding functionality, we require an appropriate discretization applicable to a particular network. To this end, one can employ the discretely defined Ricci curvature (due to Ollivier <cit.>) between any two "points" x and y:

κ(x,y) := 1 - W_1(μ_x,μ_y)/d(x,y).

This definition, motivated by coarse geometry, is applicable to the graph setting whereby the geodesic distance d(x,y) is given by the hop metric and W_1 can be computed simply via linear programming. However, this measure is limited to local network-to-network variation as opposed to a family of networks. That is, the definition of equation (<ref>) in the current setting has been used to analyze a given network's robustness through changes of curvature between any two end network configurations. Therefore, our primary motivation is focused on developing a matrix-valued curvature measure such that we can analyze Ricci curvature over a family of networks in a more global setting. In particular, Ollivier's definition is rooted in the concept of providing an "indicator" between negatively curved, Euclidean "flat", and positively curved spaces by measuring the distance between the centers of two geodesic balls against the transport distance of the balls' distributions (via Wasserstein-1). Similarly, one can naturally expand this definition to the problem of a family of networks. To do so, we first need a geodesic distance d_M(x,y) amongst networks, provided by equation (<ref>), along with the appropriate transport distance W_p.
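Since the text notes that W_1 can be computed via linear programming, the following sketch evaluates κ(x,y) on a graph edge by solving the small transport LP directly, with μ_x taken as the uniform distribution on the neighbors of x; the use of networkx/scipy and the uniform neighbor measure are illustrative modeling choices rather than the paper's exact setup.

import networkx as nx
import numpy as np
from scipy.optimize import linprog

def ollivier_ricci(G, x, y):
    # kappa(x, y) = 1 - W1(mu_x, mu_y) / d(x, y), with d the hop metric and
    # mu_v the uniform distribution on the neighbors of v.
    d = dict(nx.all_pairs_shortest_path_length(G))
    mu_x = {u: 1.0 / G.degree(x) for u in G.neighbors(x)}
    mu_y = {u: 1.0 / G.degree(y) for u in G.neighbors(y)}
    src, dst = list(mu_x), list(mu_y)
    cost = np.array([d[i][j] for i in src for j in dst], dtype=float)
    A_eq, b_eq = [], []
    for i in range(len(src)):  # row sums of the transport plan equal mu_x
        row = np.zeros(len(cost))
        row[i * len(dst):(i + 1) * len(dst)] = 1.0
        A_eq.append(row)
        b_eq.append(mu_x[src[i]])
    for j in range(len(dst)):  # column sums of the transport plan equal mu_y
        col = np.zeros(len(cost))
        col[j::len(dst)] = 1.0
        A_eq.append(col)
        b_eq.append(mu_y[dst[j]])
    W1 = linprog(cost, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, None)).fun
    return 1.0 - W1 / d[x][y]

kappa_cycle = ollivier_ricci(nx.cycle_graph(6), 0, 1)        # ~0: a large cycle is "flat"
kappa_complete = ollivier_ricci(nx.complete_graph(5), 0, 1)  # > 0: positively curved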
For the case of Euclidean geometry, one can define the transport distance between two Gaussians μ_1=𝒩(m_1,C_1) and μ_2=𝒩(m_2,C_2), where m_1, m_2∈ℝ^n are the means and C_1, C_2∈ℝ^n×n are the covariances, as:

W_2(μ_1,μ_2)^2 = ‖m_1-m_2‖_2^2 + trace( C_1+C_2-2(C_2^1/2 C_1 C_2^1/2)^1/2 ).

Comparing this definition to the structure presented in Section 2, the difference lies in the underlying geometry, for which Pennec <cit.> provides the mathematical tools to generalize W_2 to our tensor manifold. While this generalization is ongoing and future work, it puts the experimental results of d_M(x,y) in context; e.g., the interest of our framework is to naturally generalize to functionality. §.§ Appropriate Matrix-Based Model: Graph Laplacian Given our interest in network control, a natural network model for encapsulation is an approximated graph Laplacian, whose structure provides a global representation of a given network with many useful properties that are intrinsically tied to system functionality. While a complete review is beyond the scope of this note (we refer the reader to <cit.>), a few areas that have garnered attention range from the construction of expander graphs <cit.> to Cheeger's inequality, with increasing attention on connections between spectral graph theory and its respective geometry <cit.>. Mathematically, one can define the Laplacian operator for a specific network as Δf(x) = f(x) - ∑_y f(y)μ_x(y), with f being a real-valued function, which coincides with the usual normalized graph Laplacian given by:

L = I - D^-1/2 A D^-1/2.

It is interesting to note in this connection that if k ≤ κ(x,y) is a lower bound for the Ricci curvature defined in equation (<ref>), then the eigenvalues of the Laplacian are bounded as k ≤ λ_2 ≤ ... ≤ λ_N ≤ 2-k <cit.>. This relationship is important since 2-λ_N measures the deviation of the graph from being bipartite, i.e., a graph whose vertices can be divided into two disjoint sets U and V such that every edge connects a vertex in U to one in V. As such ideas appear in resource allocation and control, the study of such structures in the time-varying setting motivates our focus on the Laplacian matrix. Lastly, while one can form a normalized Laplacian with unitary trace, we note that the Laplacian matrix is by definition a positive semi-definite matrix. Therefore, to enforce positive definiteness, we utilize an approximated Laplacian L̂ := L+ϵ_L I, where 0<ϵ_L ≪ 1, such that x^T L̂ x>0 for all nonzero x∈ℝ^n. § RESULTS We now present preliminary results to highlight the potential for network analysis and set the foundation for future work in network robustness. §.§ Toy Network Configurations To provide initial intuition for the proposed framework, we constructed three toy "communication"-based networks. As one can see in Figure <ref>, the networks are composed of 200 nodes and 400 edges with varying topology, representing a chain, a random, and a star-like network. Similar to our earlier work <cit.>, we are interested in using the proposed framework to measure distances amongst these structures under varying "noise".
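As a sketch of how this toy comparison might be carried out numerically (reusing riemannian_distance from the sketch in Section 2; the graph generators and the value of ϵ_L are illustrative, and the edge counts only roughly match the configurations described above):

import networkx as nx
import numpy as np

def approx_laplacian(G, eps_L=1e-3):
    # L_hat = L + eps_L * I shifts the zero eigenvalue of the normalized
    # Laplacian so that the matrix is strictly positive definite.
    L = nx.normalized_laplacian_matrix(G).toarray()
    return L + eps_L * np.eye(G.number_of_nodes())

n = 200
graphs = {
    'chain': nx.path_graph(n),
    'random': nx.gnm_random_graph(n, 400, seed=0),
    'star': nx.star_graph(n - 1),  # n nodes: one hub and n-1 leaves
}
points = {name: approx_laplacian(G) for name, G in graphs.items()}
for a in points:
    for b in points:
        if a < b:
            print(a, b, riemannian_distance(points[a], points[b]))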
As highlighted by network curvature <cit.> and network entropy <cit.>, one should expect, from a communication perspective, that the star-like network is “closer” to the “random” network. That is, given that their functional properties such as robustness (a measure of “flatness”) are similar compared to those of the chain-like network, we should expect that such networks lie closer to one another on the manifold. This is exactly what is shown in Figure <ref>B. Moreover, as we move from an unweighted graph to a weighted graph in which weights are chosen randomly in a uniform manner, this relationship still holds. On the other hand, if we utilize the classical Euclidean distance (Frobenius norm) to measure the network-to-network distance, we find that the chain-like network is in fact “closer” to the star-like network, which is against intuition.

§.§ Scale-Free and Random Networks

This section presents clustering results and the importance of accounting for the underlying geometry. In particular, we generate 50 scale-free networks <cit.> and 50 random (Erdos-Renyi) networks, with each network composed of 200 nodes and 1164 edges (the differences are only in topology). We then compute pairwise distances amongst all potential network pairs based on the Euclidean distance, defined by the Frobenius norm, as well as the Riemannian distance, defined by (<ref>). Figure <ref>A presents a “heat map” of the distances, while Figure <ref>B presents the average distances. As one can see, the Euclidean distance fails to cluster the networks: the scale-free networks lie “closer” to the random networks than the random networks do to one another. On the other hand, the framework proposed here is able to properly cluster the aforementioned networks, which is an intuitive result given a basic understanding of geometry. This further motivates the framework for the study of network robustness. We note that if the experiments are repeated and/or the number of nodes/edges is changed, the clustering result still holds.

§.§ Waddington's Landscape: Cell Differentiation

In this section, we apply the proposed distance measure to cellular differentiation, as depicted in Figure <ref>. This epigenetic landscape provides a stylized hierarchical view of cellular differentiation, i.e., elevation is associated with the degree of pluripotency, or the ability of a cell to differentiate. At a high level, one can consider the intricate and complex processes of cellular differentiation as a network whereby bifurcations relate to differentiated states <cit.>.
To illustrate this, we accessed the Gene Expression Omnibus (GEO) for two public datasets, GSE11508 and GSE30652, which possess 59 pluripotent / 160 non-pluripotent and 159 pluripotent / 32 non-pluripotent cell samples, respectively. The pluripotent samples contain tissue types that include human embryonic stem cells (hESCs) and induced pluripotent stem cells (iPSCs). This results in a potential mix of differentiation, i.e., iPSC pluripotency may be much lower than that of hESCs, yet both are deemed pluripotent. Here, we are interested in examining how “far” apart differentiated samples are from their counterparts. As such, we construct a single pluripotent network, similar to previous work <cit.>, where the weights are the gene-to-gene correlations. This was done similarly for a single non-pluripotent network. This procedure was repeated to generate 50 pluripotent and 50 non-pluripotent networks from each of GSE11508 and GSE30652. Figure <ref> presents heat-map results as well as the average network distance between such networks. As one can see, there is a marked increase in distance between pluripotent and non-pluripotent samples. We note that while the Frobenius norm follows the same pattern, this is primarily due to the topology being fixed amongst samples as well as the sparse interactions compared to the work by Teschendorff <cit.>. We provide this to highlight the tacit topological dependence of the distance, which requires further study that is ongoing (e.g., accounting for the signaling interactome hierarchy).

§ CONCLUSION

This note introduces a statistical geometric framework <cit.>, previously applied to DTI imaging <cit.>, toward the problem of network analysis. To illustrate the importance of geometry in this setting, we presented preliminary results showing that classical Euclidean distances are unable to properly differentiate between not only toy network configurations, but also scale-free and random networks. The results set the foundation for a more expansive study on global notions of robustness, for which we will explore notions of network family curvatures.

References

Bara1: A. Barabasi. “The Network Takeover.” Nature Physics, 2012.
Dem1: L. Demetrius and T. Manke. “Robustness and Network Evolution: An Entropic Principle.” Physica A: Statistical Mechanics and its Applications, 2005.
Tesch1: J. West, G. Bianconi, S. Severini, and A. Teschendorff. “Differential Network Entropy Reveals Cancer System Hallmarks.” Scientific Reports (Nature), 2012.
Bara2: A. Barabasi and R. Albert. “Emergence of Scaling in Random Networks.” Science, 1999.
Chung: F. R. K. Chung. “Spectral Graph Theory.” American Mathematical Society, 1997.
Widgerson: S. Hoory, N. Linial, and A. Wigderson. “Expander Graphs and Their Applications.” American Mathematical Society, 2006.
Aharony: D. Stauffer and A. Aharony. “Introduction to Percolation Theory.” Taylor and Francis, 1994.
Borgatti1: S. Borgatti. “Centrality and Network Flow.” Social Networks, 2005.
Bara3: G. Ghoshal and A. Barabasi. “Ranking Stability and Super-Stable Nodes in Complex Networks.” Nature Communications, 2011.
Varadhan: S. R. S. Varadhan. “Large Deviations and Applications.” SIAM, 1984.
Ollivier1: Y. Ollivier. “Ricci Curvature of Markov Chains on Metric Spaces.” Journal of Functional Analysis, 2009.
Ollivier2: Y. Ollivier. “Ricci Curvature of Metric Spaces.” C. R. Math. Acad. Sci. Paris, 2007.
Sandhu1: R. Sandhu, T. Georgiou, E. Reznik, L. Zhu, I. Kolesov, Y. Senbabaoglu, and A. Tannenbaum. “Graph Curvature for Differentiating Cancer Networks.” Scientific Reports (Nature), 2015.
Sandhu2: R. Sandhu, T. Georgiou, and A. Tannenbaum.
“Ricci Curvature: An Economic Indicator for Market Fragility and Systemic Risk.” Science Advances, 2016.
Pennec1: X. Pennec, P. Fillard, and N. Ayache. “A Riemannian Framework for Tensor Computing.” IJCV, 2005.
Pennec2: V. Arsigny, P. Fillard, X. Pennec, and N. Ayache. “Log-Euclidean Metrics for Fast and Simple Calculus on Diffusion Tensors.” Magnetic Resonance in Medicine, 2006.
Mardia: P. Jupp and K. Mardia. “A Unified View of the Theory of Directional Statistics.” International Statistical Review, 1989.
Moahker2: M. Moakher. “Means and Averaging in the Group of Rotations.” SIAM Journal on Matrix Analysis and Applications, 2002.
Smith: A. Edelman, T. Arias, and S. Smith. “The Geometry of Algorithms with Orthogonality Constraints.” SIAM Journal on Matrix Analysis and Applications, 2002.
DoCarmo: M. do Carmo. Riemannian Geometry. Birkhauser, 1992.
Joshi: P. Fletcher and S. Joshi. “Principal Geodesic Analysis on Symmetric Spaces: Statistics of Diffusion Tensors.” ECCV, 2004.
Nom: K. Nomizu. “Invariant Affine Connections on Homogeneous Spaces.” American Journal of Mathematics, 1954.
Yogesh: Y. Rathi, A. Tannenbaum, and O. Michailovich. “Segmenting Images on the Tensor Manifold.” CVPR, 2007.
Affine1: S. Helgason. Differential Geometry, Lie Groups, and Symmetric Spaces. 1978.
LV: J. Lott and C. Villani. “Ricci Curvature for Metric-Measure Spaces via Optimal Transport.” Annals of Mathematics, 2009.
Sturm: K. Sturm. “On the Geometry of Metric Measure Spaces.” Acta Mathematica, 2006.
Jost: F. Bauer, J. Jost, and S. Liu. “Ollivier-Ricci Curvature and the Spectrum of the Normalized Graph Laplace Operator.” arXiv, 2013.
http://arxiv.org/abs/1708.07904v3
{ "authors": [ "Bipul Islam", "Ji Liu", "Romeil Sandhu" ], "categories": [ "cs.SY" ], "primary_category": "cs.SY", "published": "20170825221738", "title": "Characterizing Distances of Networks on the Tensor Manifold" }
Chi-Ting Chiang^a and Anže Slosar^b

^a C.N. Yang Institute for Theoretical Physics, Stony Brook University, Stony Brook, NY 11794, U.S.A.
^b Brookhaven National Laboratory, Bldg 510, Upton, NY 11375, U.S.A.
[email protected], [email protected]

We investigate the three-point correlation between the Lyα forest and the CMB weak lensing (δ_F δ_F κ), expressed as the cross-correlation between the CMB weak lensing field and local variations in the forest power spectrum. In addition to the standard gravitational bispectrum term, we note the existence of a non-standard systematic term coming from the mis-estimation of the mean flux over the finite length of Lyα skewers. We numerically calculate the angular cross-power spectrum and discuss its features. We integrate it into the zero-lag correlation function and compare our predictions with recent results by Doux et al. We find that our predictions are statistically consistent with the measurement, and that including the systematic term improves the agreement with the measurement. We comment on the implication for the response of the Lyα forest power spectrum to long-wavelength density perturbations.

YITP-SB-17-30

The Lyman-α power spectrum - CMB lensing convergence cross-correlation
======================================================================

§ INTRODUCTION

The three-point function of the large-scale structure contains ample information that cannot be probed by the two-point function; it is sensitive to the nonlinear gravitational evolution <cit.>, tracer bias <cit.>, and even the inflationary physics <cit.>. Measurement of the three-point function, or of its Fourier counterpart the bispectrum, by auto- or cross-correlation is thus one of the biggest scientific goals for ongoing and future surveys. In this paper, we propose a new bispectrum formed by the Lyα forest power spectrum (hereafter forest power spectrum) and the CMB lensing convergence. Since the forest power spectrum is sensitive to the small-scale matter fluctuations at 2≤z≤4 whereas the CMB lensing convergence measures the total matter fluctuation along the line-of-sight, this combination of bispectrum allows us to probe the nonlinear structure formation that has not been explored using galaxies as tracers. Note that a similar observable has been measured in Ref. <cit.> by cross-correlating the one-dimensional forest power spectrum and the CMB lensing at the same angular position. We shall generalize the bispectrum to include the complete angular scale between the Lyα forest skewers and the CMB lensing, and demonstrate that their observable is an integral over the angular scale of the total bispectrum. The rest of this paper is organized as follows. In Section 2, we compute the underlying signal generated by the gravitational evolution. In Section 3, we calculate the signal due to the continuum fitting in the Lyα forest measurement, which we refer to as the “continuum-misestimation bias”. In Section 4, we discuss the contamination from damped Lyα absorbers and compare our prediction with the measurement in Ref. <cit.>. We conclude in Section 5. Throughout the paper we adopt the Planck cosmology <cit.>, i.e. h=0.6803, Ω_b h^2=0.0226, Ω_c h^2=0.1186, A_s=2.137×10^-9, and n_s=0.9667.
§ UNDERLYING SIGNAL DUE TO GRAVITATIONAL EVOLUTION

The mean of the forest power spectrum at redshift z with width Δz, corresponding to the comoving distance r_∥ and width Δr, can be written as <cit.> P_FF(k_s)=P_FF(k_s,μ_s)=b_F^2 (1+β_F μ_s^2)^2 P_δδ(k_s) D_NL(k_s,μ_s), where k_s is the small-scale wavevector (as opposed to the large-scale mode that we introduce later), μ_s is the cosine of k_s along the line-of-sight, b_F is the flux bias, β_F is the redshift-space distortion parameter for the flux, P_δδ is the linear power spectrum, and D_NL is the fitting formula for the small-scale nonlinearity. In this paper we shall use the fitting function provided in Ref. <cit.>: D_NL(k_s,μ_s)=exp{ [q_1 Δ^2(k_s)+q_2 Δ^4(k_s)][1-(k_s/k_v)^a_v μ_s^b_v] - (k_s/k_p)^2 }, where Δ^2(k)=k^3 P_δδ(k)/(2π^2) and (q_1,q_2,k_v,a_v,b_v,k_p) are fitting parameters obtained from simulations. Note that the linear power spectrum as well as all biases and fitting parameters (provided in Tables 4 and 5 of Ref. <cit.>) depend on redshift, but we do not write the dependence explicitly when no confusion occurs.

If there is a large-scale fluctuation on the sky, the forest power spectrum is modulated, and at the leading order the forest power spectrum at angular position r_⊥ becomes <cit.> P_FF(k_s,r_⊥)=P_FF(k_s)+[dP_FF(k_s)/dδ̅] δ̅(r_⊥), where δ̅ is the two-dimensional projected large-scale density fluctuation, and dP_FF(k_s)/dδ̅ is the response of the forest power spectrum with respect to δ̅ due to the gravitational evolution, which can be measured from the separate universe simulations <cit.>. In the flat-sky approximation, the three-dimensional position can be decomposed into parallel r_∥ and transverse r_⊥ components, and δ̅ is given by δ̅(r_⊥) = (1/Δr) ∫_{r_∥-Δr/2}^{r_∥+Δr/2} dr'_∥ δ(r'_∥,r_⊥) = (1/Δr) ∫ dr'_∥ δ(r'_∥,r_⊥) Θ(Δr/2-|r'_∥-r_∥|), where δ is the underlying matter density perturbation and Θ is the Heaviside step function. At the leading order, i.e. in the limit where the wavevector of the large-scale mode δ̅ is much smaller than k_s, the response depends only on the small-scale mode k_s and is independent of the wavelength of δ̅.

Consider measuring the correlation between the forest power spectrum and the two-dimensional projected CMB lensing convergence in the same redshift bin, κ(r_⊥) = ∫_{r_∥-Δr/2}^{r_∥+Δr/2} dr'_∥ ∫ d^2r'_⊥ δ(r') W_κ(r'_∥) Λ_κ(r'_⊥-r_⊥) = ∫ d^3r' δ(r') W_κ(r'_∥) Θ(Δr/2-|r'_∥-r_∥|) Λ_κ(r'_⊥-r_⊥), where W_κ is the lensing kernel and Λ_κ is the weighting filter in the transverse direction (which we will later set to be the Wiener filter). The lensing kernel is given by W_κ(r_∥) = [3H_0^2 Ω_m/(2c^2)] [r_∥/a(r_∥)] [(r_*-r_∥)/r_*], where H_0 is the present-day Hubble constant, Ω_m is the fractional energy density of matter, c is the speed of light, a is the scale factor, and r_* is the comoving distance to the last-scattering surface. The three-point correlation between P_FF and κ in Fourier space is thus ⟨P_FF(k_s,k_l⊥) κ(k'_l⊥)⟩' = [dP_FF(k_s)/dδ̅] ⟨δ̅(k_l⊥) κ(k'_l⊥)⟩' = [dP_FF(k_s)/dδ̅] P^2d_δ̅κ(k_l⊥), where ⟨⟩' is the ensemble average without the Dirac delta function. Using equation (<ref>), we can compute P^2d_δ̅κ as P^2d_δ̅κ(k_l⊥) = ∫_{r_∥-Δr/2}^{r_∥+Δr/2} dr'_1 W_κ(r'_1) ∫ dr'_2 (1/Δr) Θ(Δr/2-|r'_2-r_∥|) ∫ (dq_∥/2π) P_δδ(q_∥,k_l⊥) Λ_κ(-k_l⊥) e^{iq_∥(r'_1-r'_2)} = ∫_{r_∥-Δr/2}^{r_∥+Δr/2} dr'_∥ W_κ(r'_∥) ∫ (dq_∥/2π) P_δδ(q_∥,k_l⊥) Λ_κ(-k_l⊥) sinc(q_∥Δr/2) e^{iq_∥(r'_∥-r_∥)}, and the only assumption we make is that P_δδ is independent of z within Δz. Because of the sinc function, the contribution to P^2d_δ̅κ from the parallel mode comes mainly from q_∥ ≲ Δr^-1. Physically, this means that P^2d_δ̅κ is sensitive to the largest parallel mode, which is set by the redshift bin size.
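The double integral above is straightforward to evaluate by quadrature. The sketch below (our illustration, not the code behind the figures of this paper) follows only the structure of the equation, i.e. the sinc factor, the q_∥ integral, and the r'_∥ integral against the lensing kernel; the input functions P_lin, W_kappa, Lam and the grid choices are toy placeholders.

import numpy as np

def sinc(x):
    # numpy's sinc(t) is sin(pi t)/(pi t); rescale to get sin(x)/x
    return np.sinc(x / np.pi)

def p2d_deltabar_kappa(k_lperp, r_par, dr, P_lin, W_kappa, Lam,
                       q_max=50.0, nq=4001, nr=201):
    """Quadrature for P^2d_{deltabar kappa}(k_lperp) in the equation above."""
    q = np.linspace(-q_max, q_max, nq)                    # q_parallel grid
    rp = np.linspace(r_par - dr / 2, r_par + dr / 2, nr)  # r'_parallel grid
    k = np.hypot(q, k_lperp)
    f_q = P_lin(k) * Lam(k_lperp) * sinc(q * dr / 2.0)
    # the integrand is even in q, so e^{iq(r'-r)} contributes only its cosine
    inner = np.trapz(f_q * np.cos(np.outer(rp - r_par, q)), q, axis=1) / (2 * np.pi)
    return np.trapz(W_kappa(rp) * inner, rp)

# toy stand-ins (placeholders, not the Planck-cosmology ingredients of the text)
P_lin   = lambda k: k / (1.0 + (k / 0.02) ** 3)   # crude P_dd shape
W_kappa = lambda r: 1e-4 * np.ones_like(r)        # flat lensing kernel
Lam     = lambda k: 1.0                           # no Wiener filtering
print(p2d_deltabar_kappa(k_lperp=0.05, r_par=4000.0, dr=160.0,
                         P_lin=P_lin, W_kappa=W_kappa, Lam=Lam))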
Following Ref. <cit.>, in the flat-sky approximation we can convert the two-dimensional power spectrum to the angular power spectrum as l(l+1) C_l^δ̅κ ≈ k_l⊥^2 P^2d_δ̅κ(k_l⊥), where l+1/2 ≈ r_∥ k_l⊥. Combining equations (<ref>) and (<ref>), we have the most general correlation between the forest power spectrum and the CMB lensing convergence, which includes both the dependence on the small-scale mode k_s and the large-scale mode k_l⊥, or equivalently the angular scale l. This calculation is general and can be used for other fields in place of Lyman-α, most notably the 21-cm emission from intensity mapping.

The left panel of Figure <ref> shows C_l^δ̅κ with Δz=0.2 as a function of the angular scale l at various redshifts. We set l≥100 so that the flat-sky approximation works well. To numerically evaluate C_l^δ̅κ, we set q_∥,max=1000 h Mpc^-1 and confirm that the result is insensitive to the choice of q_∥,max. We also take Λ_κ to be the Wiener filter from Ref. <cit.> with l=r'_∥ k_l⊥. The power spectrum is larger at lower redshift, which is the result of the gravitational evolution. To check the approximation that the linear power spectrum does not vary within the redshift bin Δz, we include the redshift evolution and find that the fractional difference is less than 1%. The agreement is slightly better at higher redshift since Δr is smaller at larger z when Δz is fixed.

Since the forest power spectrum is frequently measured as the one-dimensional power spectrum, we can Fourier transform in the small-scale transverse direction: P^X_FF(k_s,r_s⊥) = ∫ d^2k_s⊥/(2π)^2 P_FF(k_s,k_s⊥) e^{i k_s⊥·r_s⊥}, with the one-dimensional power spectrum given by P^1d_FF(k_s)=P^X_FF(k_s,r_s⊥=0). Similarly, one can Fourier transform the cross-correlation of equation (<ref>) in the small-scale transverse wavevector k_s⊥ to get a cross-correlation as a function of (k_s,r_s⊥,k_l⊥). This tells us how the cross-power spectrum between two close skewers, as a function of the small-scale parallel wavevector k_s and the small-scale perpendicular separation r_s⊥, responds to a large-scale mode in the κ field fluctuation. For this we need dP^X_FF(k_s,r_s⊥)/dδ̅ = ∫ d^2k_s⊥/(2π)^2 [dP_FF(k_s,k_s⊥)/dδ̅] e^{i k_s⊥·r_s⊥} = ∫ d^2k_s⊥/(2π)^2 P_FF(k_s,k_s⊥) [dln P_FF(k_s,k_s⊥)/dδ̅] e^{i k_s⊥·r_s⊥}. As before, the one-dimensional specialization is just the case where we set r_s⊥=0, i.e. dP^1d_FF(k_s)/dδ̅ = dP^X_FF(k_s,r_s⊥=0)/dδ̅.

To numerically evaluate the one-dimensional forest power spectrum, we use equations (<ref>) and (<ref>) and set k_s⊥,max=1000 h Mpc^-1. To compute the response of the one-dimensional forest power spectrum, we first measure and smooth dln P_FF(k_s,k_s⊥)/dδ̅ from the separate universe simulations, as described in Refs. <cit.>, and then use the analytic form of P_FF(k_s,k_s⊥), i.e. equations (<ref>) and (<ref>). To minimize the possible bias on the integration from the smoothing and interpolation, we set k_s⊥,max=10 h Mpc^-1. Figure <ref> shows the one-dimensional forest power spectrum (left) and its response to δ̅ (right). The one-dimensional forest power spectrum is larger at higher redshift due to the increase of |b_F|. Conversely, the response of the one-dimensional forest power spectrum, which contains the responses of the linear power spectrum, the flux bias, and the nonlinear fitting function, is larger at lower redshift for k≲1 h Mpc^-1. To evaluate the correlation between the one-dimensional forest power spectrum and the CMB lensing convergence, we take the product of C_l^δ̅κ from the left panel of Figure <ref> and dP^1d_FF(k_s)/dδ̅ from the right panel of Figure <ref>.
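For reference, a minimal sketch of the k_s⊥ integration that turns the equations for P_FF and D_NL into P^1d_FF; the bias and fitting-parameter values below are placeholders, not the redshift-dependent fitted values of Ref. <cit.>.

import numpy as np

def D_NL(k, mu, P_lin, q1=0.6, q2=0.0, kv=1.0, av=0.5, bv=1.6, kp=15.0):
    """Small-scale nonlinear correction of the form used in the text."""
    D2 = k ** 3 * P_lin(k) / (2.0 * np.pi ** 2)
    return np.exp((q1 * D2 + q2 * D2 ** 2) * (1.0 - (k / kv) ** av * mu ** bv)
                  - (k / kp) ** 2)

def P_FF(kpar, kperp, P_lin, bF=-0.12, betaF=1.6):
    """Three-dimensional forest power spectrum of the first equation above."""
    k = np.hypot(kpar, kperp)
    mu = kpar / k
    return bF ** 2 * (1.0 + betaF * mu ** 2) ** 2 * P_lin(k) * D_NL(k, mu, P_lin)

def P1d(kpar, P_lin, kperp_max=100.0, n=20000):
    """P1d(k_par) = (1/2pi) * integral of kperp * P_FF, using azimuthal symmetry."""
    kperp = np.linspace(1e-4, kperp_max, n)
    return np.trapz(kperp * P_FF(kpar, kperp, P_lin), kperp) / (2.0 * np.pi)

P_lin = lambda k: k / (1.0 + (k / 0.02) ** 3)     # toy linear spectrum
print(P1d(0.1, P_lin))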
Figure <ref> shows the result at z=2.4 (left) and 2.8 (right) as a function of l (the angular scale between the skewer and the lensing convergence) and k_s (the scale of the one-dimensional forest power spectrum), assuming Δz=0.2, which corresponds respectively to Δr=166.63 and 142.08 h^-1 Mpc. We find that at both redshifts the scale dependence on l and k_s is similar, and the more apparent difference is the amplitude of the signal. Specifically, at lower redshift the signal is larger, which is likely due to the stronger gravitational evolution.

Lastly, if the correlation between the one-dimensional forest power spectrum (r_s⊥=0) and the lensing convergence is measured in configuration space, then we can also Fourier transform the large-scale k_l⊥: ⟨P^1d_FF(k_s,r_⊥) κ(r'_⊥)⟩ = ∫ d^2k_l⊥/(2π)^2 ⟨P^1d_FF(k_s,k_l⊥) κ(k_l⊥)⟩ e^{i k_l⊥·(r_⊥-r'_⊥)} = [dP^1d_FF(k_s)/dδ̅] ∫ d^2k_l⊥/(2π)^2 P^2d_δ̅κ(k_l⊥) e^{i k_l⊥·(r_⊥-r'_⊥)}. Furthermore, if the correlations are only measured at the same angular position (r_⊥=r'_⊥), as done in Ref. <cit.>, then the relevant quantity is the Fourier transform of P^2d_δ̅κ evaluated at zero lag, i.e. σ^2_δ̅κ = ∫ d^2k_l⊥/(2π)^2 P^2d_δ̅κ(k_l⊥), which is the covariance of the δ̅ and κ fields. Note that while this is where the majority of the signal-to-noise is concentrated, there is in principle more information available in the data.

We show the correlation at the same angular position in the right panel of Figure <ref> as a function of redshift for various Δz. We confirm that the results are insensitive to the choice of k_max for the integration and to the redshift evolution of P_δδ. As for C_l^δ̅κ, σ^2_δ̅κ is larger at lower redshift due to the gravitational evolution. Interestingly, while the comoving distances between Δz=0.1 and 0.3 differ by a factor of three, the change in σ^2_δ̅κ is only 12 to 18%. Should we compute σ^2_δ̅δ̅ = ∫ d^3k_l/(2π)^3 P_δδ(k_l) sinc^2(k_l∥ Δr/2), we would find σ^2_δ̅δ̅(Δz=0.1)/σ^2_δ̅δ̅(Δz=0.3)≈2.9 between 2.2≤z≤3.0. This is because σ^2_δ̅δ̅ is the variance of δ̅ for a parallel scale Δr, and the larger the Δr the smaller the variance. On the other hand, σ^2_δ̅κ includes the integral of the lensing kernel along the line-of-sight, and the larger the Δr the larger the lensing contribution. These two effects roughly cancel each other; as a result, σ^2_δ̅κ is much less affected by the width in the line-of-sight direction.

§ CONTINUUM-MISESTIMATION BIAS DUE TO LYMAN-α FOREST MEASUREMENT

In the previous section we computed the gravitational bispectrum of flux-flux-lensing. In this section we discuss a non-standard systematic term coming from the mis-estimation of the mean flux over the finite length of Lyα forest skewers, which we refer to as the “continuum-misestimation bias”. Intuitively, this effect comes from the fact that over the length of the forest, one cannot distinguish between a large-scale absorption that would lower the measured flux in all pixels and a change in the brightness of the quasar that would do exactly the same. Of course, using information outside the forest allows one to, in principle, recover some of this information, but in practice this introduces large amounts of noise in the continuum calibration. Therefore, most analyses just fit for the quasar brightness within the forest, in effect nulling the large-scale flux fluctuation modes, which leads to two effects: i) distortion of the measured large-scale power spectra or correlation functions (see e.g. <cit.>) and ii) modulation of the local small-scale power spectrum normalization by the large-scale mode <cit.>.
The second effect is the one that affects the calculation in this paper, as we discuss next. The observed Lyα flux can be modeled as F(r_∥,r_⊥) = A(r_⊥) C̅(r_∥) F̅(r_∥) [1+δ_F(r_∥,r_⊥)] + ϵ, where A describes the brightness of the quasar and thus depends on the angular position, C̅ is the mean continuum shape obtained from stacking all skewers in the survey, F̅ is the true mean flux, which is independent of the angular position on the sky, δ_F is the true flux fluctuation around F̅, and ϵ is the noise. To measure δ_F, one performs the continuum fitting on a skewer by adjusting A until the integral of δ_F becomes zero across this skewer. In other words, for a skewer at r_⊥ we estimate the quasar brightness as Â(r_⊥) = (1/Δ) ∫_{r_∥-Δ/2}^{r_∥+Δ/2} dr'_∥ F(r'_∥,r_⊥)/[C̅(r'_∥) F̅(r'_∥)] = A(r_⊥) [1 + (1/Δ) ∫_{r_∥-Δ/2}^{r_∥+Δ/2} dr'_∥ δ_F(r'_∥,r_⊥)] = A(r_⊥) [1 + Δ_F(r_∥,r_⊥,Δ)], where r_∥ and Δ are the center and the width of the skewer. Note that the typical size of Δ is 280 to 420 h^-1 Mpc <cit.>, so Δ is generally larger than the size of the redshift bin Δr. Plugging the estimated quasar brightness Â into the flux, we find the estimated flux perturbation at r_⊥ to be (ignoring the r_∥ dependence in Δ_F, as it only describes the center of the forest) δ̂_F(r'_∥,r_⊥) = F(r'_∥,r_⊥)/[Â(r_⊥) C̅(r'_∥) F̅(r'_∥)] - 1 = [1+δ_F(r'_∥,r_⊥)]/[1+Δ_F(r_⊥,Δ)] - 1 ≈ δ_F(r'_∥,r_⊥) [1-Δ_F(r_⊥,Δ)] - Δ_F(r_⊥,Δ). Note that the last term in this expression is a constant at the angular position r_⊥, hence it contributes only to the k_s=0 mode of δ̂_F in Fourier space. We shall ignore it, as we are interested in the forest power spectrum with k_s>0, and to the leading order the estimated forest power spectrum of this skewer becomes P̂_FF(k_s,r_⊥) = ⟨δ̂_F(k_s,r_⊥) δ̂_F(k'_s,r_⊥)⟩' = P_FF(k_s) [1 - 2Δ_F(r_⊥,Δ)], where the ensemble average is taken over the small-scale mode k_s while keeping the large-scale Δ_F fixed. Physically, the correction term appears because we use the local instead of the global mean flux (which is independent of r_⊥) to compute the fluctuation. The same effect is also discussed in Ref. <cit.> for the measurement of the position-dependent correlation function. Moreover, the larger the Δ the smaller the correction.

Though for each skewer the estimated forest power spectrum is biased, the total estimated forest power spectrum is still unbiased at the first order since ⟨Δ_F⟩=0. It is well known that there are corrections at the second order <cit.>. However, if we measure the correlation between the forest power spectrum of each skewer and the CMB lensing convergence, the signal is biased at the first order due to the correlation between Δ_F and κ. Specifically, if we consider the angular power spectrum between the forest power spectrum and the lensing convergence, then the continuum-misestimation bias is ⟨P̂_FF(k_s,k_l⊥) κ(k'_l⊥)⟩' = -2 P_FF(k_s) ⟨Δ_F(k_l⊥,Δ) κ(k'_l⊥)⟩' = -2 P_FF(k_s) P^2d_Δ_Fκ(k_l⊥), where P^2d_Δ_Fκ(k_l⊥) = ∫_{r_∥-Δ/2}^{r_∥+Δ/2} dr'_∥ W_κ(r'_∥) ∫ (dq_∥/2π) P_Fδ(q_∥,k_l⊥) Λ_κ(-k_l⊥) sinc(q_∥Δ/2) e^{iq_∥(r'_∥-r_∥)}, with P_Fδ = [P_δδ P_FF]^1/2 in linear theory. Note that this is basically the same as equation (<ref>), except for the different power spectra as well as the lengths appearing in the integration boundary and the sinc function. The left panel of Figure <ref> shows the angular power spectrum of the continuum-misestimation bias C_l^Δ_Fκ under the flat-sky approximation with Δ=350 h^-1 Mpc as a function of the angular scale l at various redshifts.
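The suppression factor in the last equation for P̂_FF is easy to verify with a toy skewer; in the sketch below, the Gaussian fluctuations and the imposed large-scale offset are illustrative only, and the printed variance ratio approaches the predicted 1-2Δ_F.

import numpy as np

rng = np.random.default_rng(0)
delta_F = 0.1 * rng.standard_normal(4096) + 0.05  # skewer with a large-scale offset
Delta_F = delta_F.mean()                          # per-skewer mean, as in A-hat
delta_hat = (1.0 + delta_F) / (1.0 + Delta_F) - 1.0

# the local small-scale power is suppressed by roughly (1 - 2 Delta_F)
print(np.var(delta_hat) / np.var(delta_F - Delta_F), 1.0 - 2.0 * Delta_F)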
Unlike C_l^δ̅κ shown in Figure <ref>, we find that the continuum-misestimation bias is larger at higher redshift, due to the fact that C_l^Δ_Fκ is proportional to b_F and |b_F| is larger at higher redshift.

For the correlation between the one-dimensional forest power spectrum and the lensing convergence, the continuum-misestimation bias becomes -2 P^1d_FF(k_s) P^2d_Δ_Fκ(k_l⊥). To evaluate the signal, we take the one-dimensional forest power spectrum from the left panel of Figure <ref> and C_l^Δ_Fκ from the left panel of Figure <ref>, and the result is shown in Figure <ref> at z=2.4 (left) and 2.8 (right), assuming the length of the forest to be 350 h^-1 Mpc. We find that the scale dependences of the true signal and the continuum-misestimation bias are similar. This is because the scale dependences are predominantly determined by C_l^δ̅κ and C_l^Δ_Fκ, and the primary difference between the two angular power spectra is the flux bias, as on large scales D_NL→1. More interestingly, the relative contribution of the continuum-misestimation bias becomes larger at higher redshift, which is the result of the increasing |b_F| at higher redshift.

Finally, for the correlation measured in configuration space at the same angular position, the continuum-misestimation bias is -2 P^1d_FF(k_s) σ^2_Δ_Fκ, where σ^2_Δ_Fκ is the Fourier transform of P^2d_Δ_Fκ(k_l⊥) at zero lag, i.e. σ^2_Δ_Fκ = ∫ d^2k_l⊥/(2π)^2 P^2d_Δ_Fκ(k_l⊥). The right panel of Figure <ref> shows σ^2_Δ_Fκ as a function of redshift for various Δ. We find that, as in the left panel, the continuum-misestimation bias is larger at high redshift due to the larger bias. Also, σ^2_Δ_Fκ is almost unchanged between Δ=280 and 420 h^-1 Mpc. This is not surprising, because the integration over the lensing kernel cancels the effect from the variance of the density perturbation σ^2_δ̅δ̅, as we discussed in Section 2. It is, however, interesting in the sense that in a Lyα survey the lengths of the forests may vary a lot, but to model the continuum-misestimation bias of the lensing-flux-flux bispectrum we only need one typical length for the forest, hence the modeling is simpler.

§ COMPARISON WITH OBSERVATION

How does our prediction compare to the measurement? In this section we first discuss the contamination from damped Lyα systems, which cannot be removed perfectly in observations, and then compare our prediction to the measured correlation between the one-dimensional forest power spectrum and the CMB lensing convergence in Ref. <cit.>.

§.§ Contamination from Damped Lyman-α Systems

In the Lyα forest measurement, the largely broadened damping wings produced by high neutral-hydrogen column density systems with N_HI>2×10^20 cm^-2, known as the damped Lyα absorbers (DLAs), are discarded to avoid contamination of the forest power spectrum. However, systems such as Lyman-limit systems and sub-DLAs, with column densities lower than those of DLAs but still higher than that of the Lyα forest, have less prominent features and hence are difficult to remove. The presence of these systems may affect the forest power spectrum, and thus its correlation with the lensing convergence, and we shall quantify the effect. As pointed out in Ref. <cit.>, the dominant effect of the high column density systems is to add power to the forest power spectrum due to their random distribution. In the limit that the number of these systems is small (typically, in one skewer, one in 100–1000 pixels lies in a high column density system), the added power is proportional to the number density of such systems.
As a result, in the presence of a large-scale fluctuation δ̅, the leading-order correction to the one-dimensional forest power spectrum due to the high column density systems is given by P^1d_DLA(k_s,r_⊥) = [1 + b_DLA δ̅(r_⊥)] P^1d_DLA(k_s), where P^1d_DLA and b_DLA are the one-dimensional power spectrum and the bias of the high column density systems. Correlating this with the lensing convergence, the general three-point function containing the angular scale between the small-scale one-dimensional DLA power spectrum and the lensing convergence is ⟨P^1d_DLA(k_s,k_l⊥) κ(k'_l⊥)⟩' = b_DLA P^1d_DLA(k_s) P^2d_δ̅κ(k_l⊥). If the correlation is computed at the same angular position, then the signal becomes ⟨P^1d_DLA(k_s,r_⊥) κ(r_⊥)⟩ = b_DLA P^1d_DLA(k_s) σ^2_δ̅κ.

To evaluate this signal, we need both the bias and the one-dimensional power spectrum of DLAs. Ref. <cit.> has measured b_DLA by cross-correlating DLAs with the Lyα forest, and we take their central value b_DLA=2.17, assuming that the redshift-space distortion parameter of DLAs is subdominant. For the DLA power spectrum, we use the fitting function provided in Ref. <cit.>, which describes the ratio between P^1d_DLA and P^1d_FF, together with our fiducial forest power spectrum. Note that we only consider the contributions from Lyman-limit systems and sub-DLAs (i.e. the red line in figure 5 of Ref. <cit.>), as the prominent DLAs are discarded in Ref. <cit.>.

§.§ Measurement of Doux et al.

In Ref. <cit.>, the first detection of the correlation between the one-dimensional forest power spectrum from the SDSS-III Baryon Oscillation Spectroscopic Survey Data Release 12 <cit.> and the CMB lensing convergence from Planck <cit.> along the same line-of-sight is reported, with a significance of 5σ. The authors interpreted this signal as the response of the forest power spectrum to a large-scale overdensity and, subtracting the contribution from the leading-order squeezed-limit matter bispectrum, they measured the “effective nonlinear bias”, which contains nonlinearities such as the redshift-space distortion and the nonlinear clustering of gas. We shall compare the measurement of ⟨P^1d_FF(k_s,r_⊥) κ(r_⊥)⟩ with our prediction.

Let us first compute all the components of the signal. The true signal is produced by the nonlinear gravitational evolution and given by [dP^1d_FF(k_s)/dδ̅] σ^2_δ̅κ, where dP^1d_FF(k_s)/dδ̅ and σ^2_δ̅κ are taken from the right panel of Figure <ref> and the right panel of Figure <ref>, respectively. The continuum-misestimation bias is due to the Lyα forest continuum fitting and given by -2 P^1d_FF(k_s) σ^2_Δ_Fκ, where P^1d_FF(k_s) and σ^2_Δ_Fκ are taken from the left panel of Figure <ref> and the right panel of Figure <ref>. Finally, the DLA contamination is given by b_DLA P^1d_DLA(k_s) σ^2_δ̅κ, where b_DLA=2.17 and P^1d_DLA(k_s) is taken from the fitting function in Ref. <cit.>. Thus, we predict the total signal of the correlation between the one-dimensional forest power spectrum and the lensing convergence along the same line-of-sight to be [dP^1d_FF(k_s)/dδ̅] σ^2_δ̅κ - 2 P^1d_FF(k_s) σ^2_Δ_Fκ + b_DLA P^1d_DLA(k_s) σ^2_δ̅κ. Figure <ref> shows the different components of the cross-correlation between the lensing convergence and the forest power spectrum at the same angular position for various redshifts. We first find that the DLA contamination is only important on large scales, and for k_s≳0.6 h Mpc^-1 the contribution becomes negligible. This is due to the shape of the DLA power spectrum, which can be seen in figure 5 of Ref. <cit.>.
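For bookkeeping, the assembly of the three contributions in the total-signal expression above can be sketched as follows; all inputs are placeholders standing in for the quantities taken from the figures, and only b_DLA=2.17 is a value quoted in the text.

import numpy as np

def total_signal(dP1d_ddbar, P1d_FF, P1d_DLA,
                 sigma2_dbar_kappa, sigma2_DF_kappa, b_DLA=2.17):
    """Total <P^1d_FF kappa> model: true + continuum-bias + DLA terms."""
    true_term = dP1d_ddbar * sigma2_dbar_kappa
    continuum = -2.0 * P1d_FF * sigma2_DF_kappa
    dla_term  = b_DLA * P1d_DLA * sigma2_dbar_kappa
    return true_term + continuum + dla_term

# placeholder k-grid inputs; in practice these come from the quantities above
k = np.linspace(0.1, 2.0, 12)
print(total_signal(dP1d_ddbar=3.0 / (1 + k), P1d_FF=0.3 / (1 + k),
                   P1d_DLA=0.05 / (1 + k) ** 2,
                   sigma2_dbar_kappa=2e-7, sigma2_DF_kappa=1e-7))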
Next, we find that the true signal dominates over the continuum-misestimation bias at lower redshift, but the trend reverses at high redshift. This is due to the increase of the Lyα flux bias, which enhances both P^1d_FF(k_s) and σ^2_Δ_Fκ, as well as the decrease of the forest power spectrum response. The fact that the continuum-misestimation bias is larger than the true signal at z≥2.8 means that a large portion of the measured signal comes from the mis-estimation of the mean flux over the finite length of Lyα forest skewers, which needs to be removed in order to extract the three-point function due to the gravitational evolution without bias. Interestingly, the total signal shows a mild redshift evolution between z=2.2 and 3.0. However, due to the lack of snapshots of the separate universe simulations at z≥3, we cannot examine this correlation at higher redshift, where some of the forests are observed.

Since Ref. <cit.> measures the correlation from all redshifts between 2.1≤z≤3.6, we combine the total signals at different redshifts in Figure <ref> into one line in order to compare with the measurement. We use the weight given in eq. (18) of Ref. <cit.>, assuming that the number of pixels of the i-th forest is a constant and that the noise of the forest is zero. We also include the observed Lyα forest distribution obtained from the sample in Ref. <cit.>, hence the total weight is w(k,z) = N_Lyα(z)/[P^1d_FF(k,z)]^2, where N_Lyα(z) is the number of Lyα forests at z. The joint true and total signals from 2.2≤z≤3 are shown respectively as the blue dashed and red solid lines in Figure <ref>, whereas the measurement of Ref. <cit.> is shown as the black data points with error bars.

We find that the total signal is closer to the measurement than the true signal alone, but still smaller than the measurement on all scales. To better quantify the difference, we compute χ^2 using the full covariance matrix estimated in Ref. <cit.>, and the values are shown in the legend. As there are 12 data points and no fitting parameters, the number of degrees of freedom is 12. While the measurement is consistently larger than the total signal, we still obtain a reasonable reduced χ^2. This is due to the high correlation between measurement points, which is shown in figure 5 of Ref. <cit.> and can be noticed from the lack of scatter between neighboring points. We find that the total signal gives χ^2=9.1 and that neglecting the continuum-misestimation bias and the DLA contamination worsens the fit to χ^2=14.19. Therefore, even with the current data, it is important to include the new effects discussed in this paper. We also note that Doux et al. report a best-fit χ^2 of 5.4 (which, while low, is not anomalously low, with p=0.9 for 11 degrees of freedom). Their model would roughly correspond to an artificial increase in bias for our true signal, but the improvement in fit is not significant, so we cannot exclude our model as a complete model for the data. One will be able to examine this more critically with future surveys that have more and better Lyα forest measurements.

§ CONCLUSION

In this paper, we investigate the three-point function formed by the cross-correlation between the forest power spectrum and the CMB lensing convergence. We find that not only does the gravitational evolution produce the flux-flux-lensing bispectrum, but the mis-estimation of the mean flux over the finite length of Lyα forest skewers also generates a non-zero correlation.
In particular, this systematic effect dominates over the underlying signal at z≳2.8, and it has to be taken into account in order to probe the gravitational effect at high redshift without bias using this specific bispectrum. We demonstrate that integrating the flux-flux-lensing bispectrum, with its full angular and scale dependence, over the angular separation between the forest power spectrum and the CMB lensing is equivalent to the cross-correlation between the forest power spectrum and the CMB lensing at the same angular positions, which is measured in Ref. <cit.>. We show that our predictions are consistent with the signal measured by Ref. <cit.>, and that the inclusion of the systematic effects from the Lyα forest continuum fitting and the DLA contamination improves the agreement. However, both with and without the systematic effects, we obtain acceptable χ^2 values. Furthermore, the reported best-fit χ^2 with one fitting parameter in Ref. <cit.> is low but not anomalously small.

If our prediction is correct, then one interesting implication is that while Ref. <cit.> shows that the effective nonlinear bias, which encodes the excess of the forest power spectrum response with respect to that of the linear field, is b_2^eff=1.16±0.53, our simulations suggest it to be negative, as the forest power spectrum response dln P^1d_FF(k_s)/dδ̅ is always smaller than the linear power spectrum response dln P_l(k)/dδ̅ = 68/21 - (1/3)[dln k^3 P_l(k)/dln k] for all redshifts and lines-of-sight (see figure 2 of Ref. <cit.>). This is likely due to the omission of the systematic term; hence, to interpret b_2^eff measured in Ref. <cit.> in terms of the responses of the flux bias, the redshift-space distortion, and the nonlinear small-scale forest power spectrum, one has to account for the continuum-misestimation bias. However, given the large uncertainty of the current measurement, it is challenging to draw any concrete conclusions. With future surveys such as DESI <cit.> that will have smaller uncertainties, one can more critically examine this correlation as well as how the forest power spectrum responds to the overdensity gravitationally.

We would like to thank Cyrille Doux for providing the data points and the covariance matrix of the measurement in Ref. <cit.>. We would also like to thank Emmanuel Schaan and the referees for useful comments on the draft. AS acknowledges insightful conversations with Andreu Font-Ribera and the hospitality of University College London, where parts of this work were performed. Results in this paper were obtained using the high-performance computing system at the Institute for Advanced Computational Science at Stony Brook University. CC is supported by grant NSF PHY-1620628.
http://arxiv.org/abs/1708.07512v2
{ "authors": [ "Chi-Ting Chiang", "Anže Slosar" ], "categories": [ "astro-ph.CO" ], "primary_category": "astro-ph.CO", "published": "20170824180000", "title": "The Lyman-$α$ power spectrum - CMB lensing convergence cross-correlation" }
A test of the high-eccentricity migration scenario for close-in planets

Steven Giacalone^1, Titos Matsakos^2, and Arieh Königl^2

^1 Department of Physics, The University of Chicago, Chicago, IL 60637, USA
^2 Department of Astronomy & Astrophysics and The Enrico Fermi Institute, The University of Chicago, Chicago, IL 60637, USA

December 30, 2023
=======================================================================

In the high-eccentricity migration (HEM) scenario, close-in planets reach the vicinity of the central star on high-eccentricity orbits that become circularized—with a concomitant decrease in the semimajor axis—through a tidal interaction with the star. Giant planets that arrive with periastron distances that are smaller than the Roche limit a_R lose their gaseous envelopes, resulting in an inner edge to the surviving planets' distribution. The observational evidence for this effect, while extensive, is nonetheless somewhat ambiguous because of the effect of tidal orbital decay. Here we consider another key prediction of the HEM scenario—the existence of a spatial eccentricity gradient near the location where the circularization time becomes comparable to the planet's age for typical parameters. Previous studies already found evidence for this gradient and demonstrated that its properties are consistent with the circularization process being dominated by tidal dissipation in the planet (encapsulated by the tidal quality factor Q^'_p). Our work extends these treatments by constructing explicit model distributions for comparison with the data and by carrying out backward-in-time integrations using observed system parameters. We show that circularization generally occurs outside the distribution's inner edge (which defines the boundary of the so-called sub-Jovian desert) and that typically Q^'_p≈10^6 in the circularization zone (to within a factor of 3). We also find tentative evidence for an eccentricity gradient in lower-mass planets, indicating that formation through HEM may be relevant down to Neptune scales.

§ INTRODUCTION

The growing number of observed close-in exoplanets (planets with orbital periods P_orb≲ 10 days) has motivated researchers to look for trends in the distribution of their physical and orbital parameters that might help clarify the origin of these planets and the nature of their interaction with the host star. In an early study of this type, <cit.> (hereafter PHMF11) drew attention to two such trends for giant planets: the prevalence of circular orbits for very short periods, and the inverse correlation between the planet's mass M_p and P_orb for the closest planets. Under the prevailing view, wherein giant planets form beyond the water–ice line, the first of these trends has two distinct potential interpretations: in the disk migration scenario, the observed planets reach the central star by drifting inward through the protoplanetary disk on nearly circular orbits <cit.>, whereas in the high-eccentricity migration (HEM) picture they arrive on high-eccentricity orbits that become tidally circularized when the planets approach the star <cit.>.
In this connection, PHMF11 identified an unambiguous transition from eccentric to circular orbits on going from long to short orbital periods (for a given value of M_p) and from high-mass to low-mass planets (for a given value of P_orb), and pointed out that this behavior is consistent with the expected outcome of a circularization process that is dominated by tidal dissipation in the planet <cit.>. These results were confirmed in a recent study by <cit.>.

In trying to interpret the second identified trend—the pile-up of the shortest-period planets in such a way that those with higher values of M_p have lower values of P_orb—PHMF11 speculated that tidal circularization and the stopping mechanism of close-in planets might be related. However, this mass–period relation was subsequently recognized to be part of a more general feature in the P_orb–M_p plane: a nearly empty area, outlined roughly by two oppositely sloped lines, in the region of sub-Jupiter-mass planets on short-period orbits <cit.>. <cit.> (hereafter MK16) showed that this feature (dubbed the sub-Jovian desert) can be interpreted in terms of HEM, with the two distinct segments of the desert's boundary reflecting the different slopes of the empirical mass–radius relation for small and large planets <cit.>.

A plausible physical origin for the boundary is the Roche limit a_R, the distance from the star where the planet starts to be tidally disrupted. If the planet arrives on an orbit with initial (subscript 0) semimajor axis a_0 and eccentricity e_0, its distance of closest approach will be a_per,0=(1-e_0) a_0, and it will circularize (assuming conservation of orbital angular momentum) at a_cir=(1+e_0) a_per,0 (i.e., at ≃ 2 a_per,0 for a highly eccentric orbit; <cit.>).[As was pointed out by MK16, the observed shape of the desert's upper boundary is not adequately reproduced unless one also takes into account the tidal dissipation in the star, which, on a timescale much longer than the circularization time, causes the planet's orbit to decay. In a previous study, <cit.> interpreted the finding of giant planets with semimajor axes < 2 a_R in terms of orbital decay of this type.] The upper boundary of the sub-Jovian desert was already interpreted in this way by <cit.>, although the data available at the time were insufficient for a definitive model fit. The high-e_0 orbit could originate in a sudden planet–planet scattering event or in a slower interaction such as Kozai migration (involving either a stellar or a planetary companion) or secular chaos. In the latter case, <cit.> suggested that the planet's inward drift might be arrested at the location where the rate at which its longitude of pericenter precesses due to a secular interaction with a more distant planet is equal to the orbit-averaged precession rate associated with the tidal quadrupole induced on the planet by the star. The locus of circularization radii in the P_orb–M_p plane is, however, similar in this case to the one obtained by setting a_per,0=a_R. Note that the planet's stopping mechanism is not directly related to the circularization process in either of these two explanations of the pile-up. However, in order for the data to be compatible with the prediction a_cir≈ 2a_per,0, the circularization radius r_cir (obtained by equating the planet's circularization time to the time that has elapsed since its arrival at the stellar vicinity) must exceed a_cir.
As we demonstrate in Section <ref>, this condition is typically satisfied for planets in the pile-up zone.

The observational support for the role of the HEM mechanism in shaping the spatial distribution of close-in planets has so far been based primarily on the apparent paucity of planets with semimajor axes a≲ 2 a_R <cit.> and on the corresponding dearth (the sub-Jovian desert) in the period–mass plane (<cit.>; MK16). While the observational evidence for this effect is strong, it is not entirely unambiguous on account of the (already noted) additional orbital evolution induced by tidal dissipation in the star. In this paper we consider a complementary observational test of this scenario: the expected gradient in planet eccentricities in the vicinity of the locus of the circularization radii in the period–mass plane. The existence of such a gradient was already demonstrated in PHMF11, but here, in addition to updating the database, we compare it explicitly with the predictions of the HEM model. We describe our modeling approach in Section <ref>, present results in Section <ref>, discuss the main implications in Section <ref>, and summarize in Section <ref>. In a separate paper <cit.> (hereafter KGM17) we use the model employed in this work to study the fate of high-mass planets that arrive by HEM and end up crossing the Roche limit, which results in the loss of their gaseous envelopes: we argue that the remnant rocky cores of these planets can plausibly account for the recently identified population of dynamically isolated hot Earths <cit.>.

§ MODELING APPROACH

Our treatment is based on the formulation presented in MK16, whose work was similarly concerned with the properties of close-in planets that arrive by HEM and undergo orbital circularization through internal tidal dissipation. That paper examined the shape of the boundary of the sub-Jovian desert in the P_orb–M_p plane under the assumption that planets reach the vicinity of the Roche limit a_R (or, alternatively, the point of closest approach in the secular-chaos model of <cit.>) with e_0 ≈ 1 and that their orbits then undergo an effectively instantaneous circularization. In contrast with that work, in which only the post-circularization orbital evolution of planets due to tidal dissipation in the star was calculated, in this paper we also account explicitly for tidal dissipation in the planets, and we consider its effect on orbital circularization for a range of initial eccentricities and without assuming a priori that this process always runs to completion. We carry out Monte Carlo simulations using the same distributions of P_orb,0, R_p, M_p, and t_arr (the planet arrival time at the stellar vicinity) as in MK16, but we update their choices for the distributions of radii and masses of large planets, as well as of t_age (the system's age), using data downloaded from the Extrasolar Planets Encyclopedia database at exoplanet.eu.

Any given system is specified by six parameters: P_orb,0, e_0, R_p, M_p, t_age, and t_arr. We assume that the planets that reach the stellar vicinity by HEM originate in the P_orb,0 range [10,100] days with e_0 in the range [0.5,0.9]. Though consistent with our current understanding of how the HEM process operates in real systems <cit.>, these ranges are not meant to represent any particular physical model and are chosen for illustrative purposes only.
For P_orb,0 we adopt the empirical distribution ∂f/∂log P_orb,0 ∝ P_orb,0^0.47 given in <cit.>, whereas for the e_0 distribution we adopt the form ∂f/∂e_0 = constant (which corresponds to the steady-state distribution obtained by <cit.> and <cit.> for planets that undergo eccentricity oscillations due to secular gravitational interactions with an outer companion, as well as to the “coplanar” distribution obtained in the secular-chaos model and shown in figure 4 of <cit.>).[The results presented in this paper are not sensitive to the details of the P_orb,0 and e_0 distributions. However, in KGM17 we examine the dependence of the fraction of planets that end up crossing the Roche limit on the form of the e_0 distribution and on the maximum value of e_0.] The values of R_p are also sampled from an empirical distribution given in <cit.>, ∂f/∂log R_p ∝ R_p^-0.66. As in MK16, we distinguish between small and large planets, separated at R_p = 12 R_⊕. For the small planets we adopt M_p = (R_p/R_⊕)^2 M_⊕ for R_p<12 R_⊕ <cit.>. In the case of the large planets—for which R_p is nearly independent of M_p—we resample R_p from the interval [9,20] R_⊕ and independently sample M_p from the interval [0.3,10] M_J using the exoplanet.eu database (see Appendix <ref>; note that this was done in MK16 using the data originally compiled by <cit.>).[The radii of giant planets evidently depend on the incident flux from the host star <cit.> and could thus vary systematically with orbital period. We do not explicitly account for this dependence, since it was found to be fairly weak <cit.> and because the orbital circularization zone that we investigate corresponds to a rather narrow range of P_orb values.] In another modification of the MK16 implementation, we replace the age distribution given in <cit.>—which was determined by applying gyrochronology relationships to comparatively rapidly rotating stars and is therefore biased toward young systems—with an empirical distribution obtained from exoplanet.eu. We restrict attention to the P_orb interval [2.5,7] days (where the upper bound defines the regime of hot Jupiters and the lower limit roughly corresponds to the distance from the star below which the effect of tidal orbital decay becomes significant) and consider separately the age distributions for small and large planets (see Appendix <ref>). Finally, we assume a uniform distribution in log t_arr for the arrival times, with t_arr∈[0.01,10] Gyr (see MK16).

After sampling for P_orb,0 and e_0, one can determine a_per,0(a_0,e_0) and a_cir(a_0,e_0) (setting a_0 =(GM_*P_orb,0^2/4π^2)^1/3, where G is the gravitational constant and M_* is the stellar mass). These values can, in turn, be used to select the systems that are relevant to the present calculation. In this work we only consider the Roche-limit interpretation of the sub-Jovian desert boundary, taking the lower bound on a planet's initial periastron distance to be given by a_R = q(M_*/M_p)^1/3 R_p = 0.016 (q/3.46) (M_*/M_⊙)^1/3 (M_p/M_J)^-1/3 (R_p/R_J) au (where the normalization of the coefficient q is based on the results of MK16). The systems whose evolution we follow are defined by the requirements a_per,0>a_R and P_orb(a_cir)≲ 7 days (a minimal numerical sketch of this sampling and selection procedure is given below). We simplify the evolution equations by assuming that the orbital angular momentum vector is aligned with the spin vector of the star as well as with that of the planet, and that the planet's rotation period does not change with time (corresponding to pseudosynchronicity).
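The sampling step referenced above reduces to a few inverse-CDF draws; the following minimal Python sketch (ours, with masses in Earth units and 1 M_J taken as 318 M_⊕) illustrates it, with the initial radius range and the large-planet mass draw simplified relative to the empirical distributions of the text.

import numpy as np

rng = np.random.default_rng(42)
N = 30_000

def powerlaw(lo, hi, index, size):
    """Draw x in [lo, hi] with df/dx proportional to x**index (index != -1)."""
    u = rng.uniform(size=size)
    a, b = lo ** (index + 1.0), hi ** (index + 1.0)
    return (a + u * (b - a)) ** (1.0 / (index + 1.0))

# df/dlogP ~ P^0.47  =>  df/dP ~ P^-0.53;  df/dlogR ~ R^-0.66  =>  df/dR ~ R^-1.66
P0 = powerlaw(10.0, 100.0, -0.53, N)                  # days
e0 = rng.uniform(0.5, 0.9, N)                         # flat e_0 distribution
Rp = powerlaw(1.0, 20.0, -1.66, N)                    # Earth radii (assumed range)
small = Rp < 12.0
Mp = np.where(small, Rp ** 2,                         # M_p = (R_p/R_E)^2 M_E
              rng.uniform(0.3, 10.0, N) * 318.0)      # simplified large-planet draw
Rp = np.where(small, Rp, rng.uniform(9.0, 20.0, N))   # resample large radii
t_arr = 10.0 ** rng.uniform(-2.0, 1.0, N)             # Gyr, uniform in log
a0 = (P0 / 365.25) ** (2.0 / 3.0)                     # au; Kepler's law, M_* = M_sun
a_per0 = (1.0 - e0) * a0
a_cir = (1.0 + e0) * a_per0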
We also neglect the time variation of the stellar rotation period P_* (which we assume to be distributed uniformly in the interval [5,10] days). The evolution equations are then given by

da/dt = (9/Q^'_p) (GM_*/a^3)^1/2 (M_*/M_p) (R_p^5/a^4) (1-e^2)^-15/2 [f_2(e^2)^2/f_5(e^2) - f_1(e^2)] + (9/Q^'_*) (GM_*/a^3)^1/2 (M_p/M_*) (R_*^5/a^4) (1-e^2)^-15/2 [f_2(e^2) (P_orb/P_*) (1-e^2)^3/2 - f_1(e^2)]

and

de/dt = (81/2Q^'_p) (GM_*/a^3)^1/2 (M_*/M_p) (R_p^5/a^5) e (1-e^2)^-13/2 [(11/18) f_4(e^2) f_2(e^2)/f_5(e^2) - f_3(e^2)] + (81/2Q^'_*) (GM_*/a^3)^1/2 (M_p/M_*) (R_*^5/a^5) e (1-e^2)^-13/2 [(11/18) f_4(e^2) (P_orb/P_*) (1-e^2)^3/2 - f_3(e^2)]

<cit.>, where the eccentricity functions f_1,…,f_5 (each of which equals 1 at e=0) are given in <cit.>, Q^'_p and Q^'_* are, respectively, the (modified) planetary and stellar tidal quality factors, and R_* is the stellar radius. We express both Q^'_p and Q^'_* in the form Q^'=Q^'_1(P_orb/P_1), which was employed in previous studies as a representation of equilibrium tides in the weak-friction approximation <cit.>.[Q^'_* is probably better modeled in terms of a dynamical tide, but such models also infer a power-law dependence on P_orb with a positive (albeit >1) index in both the weakly and the strongly nonlinear regimes <cit.>. In this paper we are primarily interested in the behavior of Q^'_p, which underlies the orbital circularization process.] We set P_1=4 days and adopt Q^'_*1=10^6.[Note in this connection that MK16 treated Q^'_* as a spatial constant equal to 10^6.] We use the results presented in Section <ref> to constrain the value of Q^'_p1.

For the values of e_0 that we consider, (1-e_0^2) is not ≪ 1, and one can define a characteristic orbital circularization time by τ_cir ≡ |(1/e) de/dt|^-1. We estimate τ_cir from the first term on the right-hand side of Equation (<ref>) by taking the limit e→0 and identifying a with r_cir. By equating τ_cir to t_age, we obtain an expression for the locus of the circularization radii of planets with the given age in the period–mass plane:[More precisely, one should consider the locus of the circularization radii of planets with a given (nonnegative) value of (t_age - t_arr). However, for the chosen distributions of ages and arrival times, the distribution of the nonnegative values of (t_age - t_arr) closely approximates that of t_age.] P_orb,cir = 3.76 (P_1/4 days)^3/16 (Q'_p1/10^6)^-3/16 (t_age/1 Gyr)^3/16 (M_*/M_⊙)^-1/8 (R_p/R_J)^15/16 (M_p/M_J)^-3/16 days. We express R_p in Equation (<ref>) as a function of M_p by using the relationship given in Equation (<ref>) for M_p<150 M_⊕ and by approximating its behavior for more massive planets by R_p=constant. This implies that the P_orb,cir curve in the period–mass plane changes from having a positive slope (∝ M_p^9/32) for M_p<150 M_⊕ to having a negative slope (∝ M_p^-3/16) for larger masses.[If Q^'_p were instead a spatial constant, as is sometimes assumed, then the listed scalings of P_orb,cir would change to ∝ M_p^9/26 and ∝ M_p^-3/13 for (respectively) small and large masses.] As was pointed out by MK16, an analogous behavior is found for the immediate-post-circularization boundary of the sub-Jovian desert P_orb,RL (identified as the orbital period that corresponds to a_cir(a_R)=(1+e_0)a_R) by using the above functional form of R_p(M_p) in Equation (<ref>): P_orb,RL ∝ M_p^1/4 and ∝ M_p^-1/2 for M_p<150 M_⊕ and >150 M_⊕, respectively. One can similarly obtain the orbital decay isochrones by considering the dominant (second) term on the right-hand side of Equation (<ref>).
For the assumed dependence of Q^'_* on P_orb, that term is ∝ a^-7, so we define the orbital decay time as τ_d ≡ |(8/a) da/dt|^-1 <cit.>. This yields P_orb,d = 3.18 (P_1/4 days)^3/16 (Q'_*1/10^6)^-3/16 (t_age/1 Gyr)^3/16 (M_*/M_⊙)^-1/2 (R_*/R_⊙)^15/16 (M_p/M_J)^3/16 [(P_orb/P_*)-1]^3/16 days. In all numerical evaluations of this expression we assume, for simplicity, that [(P_orb/P_*)-1]^3/16 ≈ 1.

§ RESULTS

Our Monte Carlo simulations each involve 30,000 samplings of planetary systems with a solar-type host (R_*=R_⊙, M_*=M_⊙). As the R_p distribution that we employ was corrected for observational selection effects and is dominated by small planets, we randomly reduce the number of small (R_p<12 R_⊕) model planets that we exhibit by 90% to improve the presentation. The top panel of Figure <ref> shows the calculated planet distribution in the P_orb–M_p plane, color-coded according to the value of the orbital eccentricity at the end of the modeled evolution. For this panel we adopt Q^'_p1=10^6. We also plot the circularization isochrones (Equation (<ref>)) for two values of t_age—1 and 5 Gyr (solid and dashed blue lines, respectively)—which mark off the range from which most of the system ages are drawn (see Figure <ref>). It is seen that these curves capture the numerical results well, in that the region between them corresponds to the transition zone in the period–mass plane that separates mostly eccentric orbits (to the right of the dashed curve) from mostly circular orbits (to the left of the solid curve). In addition, we plot the immediate-post-circularization desert boundary curves (P_orb,RL(M_p)) for the two values of e_0 (0.5 and 0.9) that bracket our adopted range of initial eccentricities. These lines are seen to lie to the left of the circularization isochrones and well within the region of mostly circularized orbits, corroborating the assumption that the planets near the desert boundary satisfy r_cir>(1+e_0)a_R (see Section <ref>). Finally, we plot the orbital decay isochrones (Equation (<ref>)) for the same two values of t_age (solid and dashed red lines, respectively). These lines pass near the vertices of the P_orb,RL(M_p) curves, indicating that orbital decay is likely to affect the shape of the upper desert boundary (see MK16). However, the orbital decay isochrones intersect the circularization isochrones at a sufficiently large value of M_p (≃ 2 M_J) to ensure that most of the modeled planets are not measurably affected by orbital decay during the circularization process.[It should, however, be possible for the orbits of sufficiently massive planets to decay before they are fully circularized, which is consistent with the comparatively high inferred frequency of eccentric orbits among transiting planets with M_p>3 M_J <cit.>.]

The top panel of Figure <ref> also exhibits observational data points from the exoplanet.eu compilation. They are shown as either triangles or stars and are color-coded in the same way as the model dots. We only include planets with measured eccentricities that are listed with error values (generally both an upper and a lower one). For a data point to be considered reliable, we require that both of the associated values of δe be < 0.5 max(0.1,e);[The form of this criterion is motivated by the separation of eccentric orbits in PHMF11, <cit.>, and <cit.> into those with e<0.1 and those with e≥ 0.1, and by the 1σ uncertainty limit δe < 0.05 that <cit.> adopted as a reliability criterion for circular (e=0) orbits.] we represent such a data point by a triangle.
If either one of the values of δe does not satisfy the above inequality, we consider the associated data point to be questionable and display it as a star. These data points can be used to check a key prediction of the “HEM + circularization” scenario—the presence of an eccentricity gradient in the vicinity of the plotted circularization isochrones. Although the number of reliable data points in this region of the period–mass plane is relatively small, the predicted gradient is uniquely specified to point along the normal to these sloping lines, which should facilitate the test. A visual inspection of the top panel does indeed indicate consistency with this prediction, not just for the upper portions of the isochrone curves that were considered in PHMF11 but possibly also for the differently oriented lower branches of these curves. The additional data accumulated since the PHMF11 work was carried out also make it possible to resolve the gradient on smaller scales in the period–mass plane and therefore to localize it better in relation to the isochrone curves.

The location of the circularization radius depends on the magnitude of the planetary tidal quality factor: it shifts to lower values of P_orb as Q'_p is increased. In an attempt to constrain the value of Q'_p1, we consider the cases where it is changed to 3×10^5 and 3×10^6 (left and right panels, respectively, at the bottom of Figure <ref>). To simplify this exercise, we use the two selected circularization isochrones to demarcate the circularization zone in the period–mass plane, and we only display data points that have low associated errors; in addition, we only consider data points that lie to the right of the 1 Gyr P_orb,d curve to minimize the effect of orbital decay. It is seen that the low-e data points are concentrated too far to the left of the circularization isochrones for Q'_p1 = 3×10^5 and not far enough to the left for Q'_p1 = 3×10^6, pointing to Q'_p1 ≈ 10^6 as the preferred value.

To check the extent to which our inferences from comparing model calculations with observational data depend on the error tolerance criterion used in selecting the data points, we modify the coefficient α in the condition δe < α max(0.1, e) (where α = 1/2 corresponds to the fiducial case shown in Figure <ref>). The basic results from the top panel of Figure <ref> are shown in the top right panel of Figure <ref>, where we retain the various model curves (the circularization and orbital decay isochrones as well as the immediate-post-circularization boundaries of the sub-Jovian desert) and the reliable data points (triangles) but do not reproduce the simulation results (dots) and the high-δe data points (stars). For comparison, we show the corresponding results using α = 1 and 1/3 in the top left and bottom left panels, respectively, of Figure <ref>. It is seen that, while the number of reliable data points decreases as α is decreased, the qualitative behavior—and in particular the appearance of a spatial eccentricity gradient in the vicinity of both the upper and the lower branches of the circularization isochrones—is unchanged. We also confirmed that Q'_p1 ≈ 10^6 remains the preferred value when either the more stringent selection criterion (α = 1/3) or the looser one (α = 1) is used.

Figure <ref> presents the same results as Figure <ref> but in the period–eccentricity plane, with the planetary mass now being the color-coded variable. The three panels correspond to the same values of Q'_p1 as in Figure <ref>.
To bring out the effect of tidal dissipation in the planet, we only display model and data points that lie to the right of the 1 Gyr P_orb,d curve in Figure <ref>. The model planets shown in the top panel exhibit a clear transition from being dominated by comparatively high eccentricities for P_orb ≳ 6 days to acquiring low values of e closer to the star—the signature of the “HEM + circularization” process. The observed systems appear to have a similar distribution and thus to be compatible with this scenario. The model dots track the observational data points best in the top panel: they appear to lie too far to the right in the bottom left panel and too far to the left in the bottom right panel, reconfirming the choice of Q'_p1 ≈ 10^6 as the best-fitting value.

A few of the data points displayed in Figure <ref> have very low (≲ 0.01) values of e, and yet their eccentricities are all distinct from zero. This raises two questions: (1) why are there no genuinely circular orbits in the dataset that we exhibit, and (2) could the very-low-e data points actually correspond to circular orbits? The answer to the first question is that most of the e=0 entries in the exoplanet.eu database are not listed with error values and we therefore do not consider them. (Several e=0 points that are listed with errors are included in the top panel of Figure <ref>; however, they all lie to the left of the 1 Gyr P_orb,d curve in that panel and have therefore been filtered out of Figure <ref>.) It is, however, evident from an inspection of Figure <ref> that this omission has little effect on the properties of the spatial eccentricity gradient—the focus of this work—since those are defined by data points with higher values of e. The second issue is related to a well-known intrinsic bias in the determination of eccentricity from radial velocity data <cit.>. This bias is a consequence of the fact that the value of e cannot be negative, which implies that observational uncertainties tend to yield positive (even if small) values of e for genuinely circular orbits. Mindful of this fact, PHMF11 and <cit.> carried out a homogeneous Bayesian analysis that explicitly addressed this issue. <cit.> extended these results by using a larger (by a factor of 3) sample of transiting planets with improved eccentricity and mass determinations. The latter authors considered planets in the mass range (0.1, 25) M_J and classified their orbits as being circular (e=0 with 1σ uncertainty < 0.05), eccentric, or unconstrained (having e compatible with zero but δe > 0.05, or else a slightly eccentric orbit that is not strongly supported by the Bayesian model). To check on the effect of this bias on our conclusions, we repeat the test presented in Figure <ref> using the data from this sample (and again including planet masses only up to 10 M_J). The result, shown in the bottom right panel of Figure <ref>, demonstrates that a spatial eccentricity gradient can be discerned also in this case in the vicinity of the upper branches of the circularization isochrones in the P_orb–M_p plane. This dataset does not, however, contain reliable eccentric orbits for masses that lie below the break in the circularization isochrones and thus provides no information on the possible presence of an eccentricity gradient also in lower-mass planets.

As a final check, we carry out backward-in-time integrations for observed systems that possess reliable eccentricity determinations.
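Such a backward integration (specified in detail in the next paragraph) amounts to integrating the planet-dissipation terms of the evolution equations with the sign of time reversed. Below is a minimal sketch under explicit assumptions: the f_i are the standard Hut (1981) polynomials, the prefactor K lumps all dimensional constants (including Q'_p, G M_*, M_*/M_p and R_p^5) into one number in arbitrary code units, Q'_p is held fixed rather than scaled with P_orb, and the starting values are illustrative only.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hut (1981) eccentricity functions; the argument is x = e**2.
def f1(x): return 1 + 31/2*x + 255/8*x**2 + 185/16*x**3 + 25/64*x**4
def f2(x): return 1 + 15/2*x + 45/8*x**2 + 5/16*x**3
def f3(x): return 1 + 15/4*x + 15/8*x**2 + 5/64*x**3
def f4(x): return 1 + 3/2*x + 1/8*x**2
def f5(x): return 1 + 3*x + 3/8*x**2

def rhs(t, y, K=1e-6):
    """Planet-dissipation terms of da/dt and de/dt only (pseudo-synchronous
    rotation); K is an assumed lumped constant in code units."""
    a, e = y
    x = e**2
    da = K * a**(-11/2) * (1 - x)**(-15/2) * (f2(x)**2/f5(x) - f1(x))
    de = 4.5 * K * a**(-13/2) * e * (1 - x)**(-13/2) * (11/18*f4(x)*f2(x)/f5(x) - f3(x))
    return [da, de]

# Backward in time: integrate the sign-flipped right-hand side over t_age.
a_now, e_now, t_age = 0.05, 0.01, 1.0   # illustrative numbers, code units
back = solve_ivp(lambda t, y: [-v for v in rhs(t, y)],
                 (0.0, t_age), [a_now, e_now], rtol=1e-9)
print(back.y[0][-1], back.y[1][-1])     # inferred initial a and e
```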
To isolate the effect of orbital circularization, we only consider planets for which exoplanet.eu lists an age estimate that satisfies t_age < τ_d (or, equivalently, P_orb > P_orb,d, where τ_d is evaluated using the listed values of R_*, M_*, M_p, and a), so that orbital decay is not important. Thus we only retain the planet-dissipation terms in Equations (<ref>) and (<ref>). Each system is integrated backward from its current location over a time interval equal to its age (see Footnote <ref>), with Q'_p1 set equal to 10^6. Figure <ref> shows the calculated evolutionary tracks in the P_orb–e plane, with the three panels corresponding to the three error tolerance limits (specified by the parameter α) that were employed in Figure <ref>. The results indicate that a significant fraction of these planets have undergone orbital circularization and had initial orbital periods that lay outside the close-in range. Although the total number of systems that could be tested in this way is not large, the fact that this outcome was not inevitable—as evidenced by the presence among the systems considered in Figure <ref> of planets that exhibit little change in P_orb over their lifetimes—strengthens the conclusion that HEM is indeed relevant to the origin of many close-in planets.[<cit.> employed similar backward-in-time integrations in an attempt to estimate the values of Q'_p and Q'_* by matching the implied e_0 distribution to the observed eccentricity distribution for a > 0.2 au. Although their derived best-fit values, Q'_p ∼ 3×10^6 and Q'_* ∼ 3×10^5, are close to those obtained by other methods, this approach is subject to a number of caveats. For example, the calculated values of e_0 depend on the durations of the backward integrations, so their inferred distribution is affected by the (sometimes considerable) uncertainty in t_age. Furthermore, the general eccentricity distribution at a > 0.2 au may not be representative of the initial distribution for planets that are transported by HEM to the center. Other uncertainties associated with this approach were noted by <cit.>. Such integrations are, however, useful for demonstrating that the data are consistent with the “HEM + circularization” scenario for a broad choice of values for Q'_p and Q'_* <cit.>.] It is noteworthy that a fraction of the systems that evolve back to P_orb > 10 days in each of the panels (6 out of 14, 5 out of 11, and 2 out of 8 for α = 1, 1/2, and 1/3, respectively) have M_p < 150 M_⊕ and thus correspond to planets that lie below the vertices of the circularization isochrone curves in the P_orb–M_p plane. This supports our tentative inference from the results presented in Figure <ref> (and in the α = 1, 1/2, and 1/3 panels of Figure <ref>) that HEM may be implicated in the arrival of close-in planets that span a broad range of masses (from Jupiter to Neptune scales).

§ DISCUSSION

PHMF11 were the first to draw attention to the existence of an eccentricity gradient in the period–mass plane and to consider its implications. They pointed out that its orientation agreed with that of a circularization isochrone based on tidal dissipation in the planet, for which τ_cir ∝ (a/R_p)^5 M_p/M_*. (By contrast, τ_cir ∝ (a/R_*)^5 M_*/M_p if dissipation in the star is dominant.) They further demonstrated that the location of the transition from e > 0.1 to e < 0.1 roughly corresponds to a 1 Gyr isochrone characterized by Q'_p = 10^6 <cit.>.
These findings were confirmed in the more extensive recent study by <cit.>. Our work extends these results by confronting the data with explicit predictions of the HEM model. This approach has made it possible to constrain the value of the planetary tidal quality factor in the circularization zone (Q'_p ≈ 10^6 for P_orb ≳ 4 days). Although a similar value is often adopted in the literature, based on theoretical calculations and solar-system observations <cit.>, the constraints derived directly from the exoplanet data have been much less restrictive. Specifically, by requiring that the orbits of planets with e ≈ 0 circularize on timescales shorter than their ages and that those with finite eccentricities do not, one can obtain upper and lower limits, respectively, on Q'_p. The range of values inferred in this way spans several orders of magnitude (∼ 10^5–10^9; e.g., <cit.>). By comparison, we were able to estimate a preferred characteristic value for Q'_p in the circularization region to within a factor of three.

Although the database that we employed is not as reliable as the one assembled, for example, by <cit.> using a homogeneous statistical analysis, we have considered lower masses (down to M_p ≈ 0.03 M_J) than in previous studies of this topic. This has led us to a tentative conclusion that the HEM process may also play a role in the formation of sub-Jovian-mass planets (down to Neptune size). The shape of the circularization isochrones in the period–mass plane resembles that of a bird's beak, and the presence of an eccentricity gradient for planets with M_p < 150 M_⊕ would establish the reality of the lower portion of that beak. The potential added significance of such a determination is that the HEM scenario implies the existence in the P_orb–M_p plane of a similar “bird's beak” structure at lower values of P_orb—the boundary of the sub-Jovian desert (see top panel in Figure <ref> and MK16). Given that the shape and origin of the lower boundary of the desert are still being debated <cit.>, the presence of an eccentricity gradient in association with the lower branches of the circularization isochrones would support the HEM interpretation of the sub-Jovian desert even as it broadens the range of planetary masses in which this mechanism is found to operate. Clearly, more data are needed to validate the existence of this gradient: future space missions such as TESS <cit.> and PLATO <cit.> hold promise in this regard.

Despite its apparent success in explaining a variety of observational findings, the extent of the contribution of the HEM mechanism to the formation of close-in giant planets is still being investigated. In one test of this scenario, <cit.> looked for observational evidence for highly eccentric Jupiter-mass planets, which were predicted to exist if hot Jupiters originate outside the ice line and their HEM is induced by a distant (stellar) companion <cit.>. <cit.> did not find such evidence, but they suggested that this does not rule out the possibility that hot Jupiters originate interior to the ice line and that their orbits are perturbed by a planetary companion. In a subsequent study, <cit.> inferred that the probability of a giant planet having a Jupiter-mass companion capable of inducing HEM does not depend on whether the planet lies within the hot-Jupiter orbital range or on the location of the companion with respect to the ice line.
These results do not support HEM models in which close-in giant planets originate beyond the ice line, but they are again compatible with the possibility (which <cit.> also recognized) that such planets can travel at least part of the distance from their formation sites by means other than HEM (e.g., classical disk migration or a secular dynamical interaction with a companion).[As was noted in Section <ref>, the choice of initial conditions for our model is consistent with this emerging understanding of how the HEM mechanism operates in real systems.] Under these circumstances, and given that not all systems that harbor a close-in giant planet show evidence for an outer planetary companion <cit.>, it is natural to expect that some fraction of the observed giant planets—including hot Jupiters—have not experienced HEM. There have already been attempts to quantify this fraction based on the difference in the orbital characteristics of the two planet arrival modes, and it appears that it could be appreciable <cit.>. It is also worth keeping in mind that a large number of giant planets likely form in the protoplanetary disk and reach the star through classical disk migration <cit.>. Some of these planets may have been stranded near the host star for up to ∼ 1 Gyr before being tidally ingested and—notwithstanding the fact that their contribution to the observed number count of planets is small—could have left a lasting imprint on the obliquity and metallicity properties of their hosts (<cit.>; KGM17).

§ CONCLUSION

We tested a key prediction of the HEM scenario for the origin of hot Jupiters and other close-in planets. In this interpretation, planets arrive in the vicinity of the host star on high-eccentricity orbits that become circularized through tidal interaction with the star if they reach orbital periods that are less than P_orb,cir (Equation (<ref>)). This picture implies that a spatial eccentricity gradient should be present in the period–mass phase space near the locus of circularization radii that correspond to typical system ages. The existence of such a gradient for close-in giant planets had been first pointed out by PHMF11 <cit.>, and this finding was recently confirmed by <cit.>. Our treatment is distinct from previous work in that it explicitly tests the HEM scenario by comparing the model predictions (obtained by integrating the evolution equations using observationally or theoretically constrained initial conditions) with the data. This approach has enabled us to extract valuable information about the circularization process. It was already deduced by PHMF11 (and, through alternative methods, by other workers) that this process is dominated by tidal dissipation in the planet. We verified that the effect of orbital decay (dominated by tidal dissipation in the star) for planets of mass M_p ≲ 2 M_J is not important in the circularization zone (which roughly spans the P_orb range ∼ 4–6 days) and inferred that the characteristic value of the planetary tidal quality factor Q'_p in this region is ∼ 10^6. We reached this conclusion through a qualitative comparison between the model results and the data in the period–mass and period–eccentricity planes.
Although we checked that these results are not sensitive to the details of the error tolerance criteria used in selecting the data points, we did not perform a formal statistical test since the number of systems that can be used to demonstrate the existence of the gradient is not yet large enough to justify carrying out such an analysis over the three-dimensional (P_orb, M_p, e) parameter space. However, we have found that our procedure is reliable enough to pin down the value of Q'_p in the circularization zone to within a factor of three, which can be compared with the ∼ 10^5–10^9 range obtained previously through an application of generic constraints. We stress, however, that the inferred value of Q'_p pertains only to the narrow range of orbital periods where the transition from mostly eccentric to mostly circular orbits occurs in the model; thus, we cannot reliably test the possible spatial variation of Q'_p. Furthermore, this parameter only provides a very basic description of tidal dissipation, and our procedure does not test realistic models of this process.

Planets that reach the Roche limit a_R (Equation (<ref>)) become tidally disrupted: this limit thus provides a natural edge to the observed distribution (corresponding to the boundary of the sub-Jovian desert). We demonstrated that this edge generally lies interior to the predicted location of the circularization zone, consistent with the data. We noted the possible observational indication of an eccentricity gradient for sub-Jovian-mass planets: if confirmed by additional data, such a gradient would attest to the relevance of the HEM mechanism also to non-giant close-in planets and would support the interpretation of the lower boundary of the desert in terms of this mechanism.

Giant planets that cross the Roche limit—either on their initial high-eccentricity trajectories or at a later stage (after their orbits are circularized) due to orbital decay—can be expected to lose their gaseous envelopes and be converted into remnant cores. In KGM17 we extend the calculations presented in this paper by continuing to follow the evolution of these cores, and argue that such remnants are natural candidates for dynamically isolated hot Earths. If this interpretation is correct, it will provide an additional—and independent—argument in favor of the “HEM + circularization” scenario.

We acknowledge fruitful discussions with Dan Fabrycky and thank him and the referee for helpful suggestions. This work was supported in part by NASA ATP grant NNX13AH56G and by a University of Chicago College Research Fellows Fund award to S.G.

§ EMPIRICAL DISTRIBUTIONS FROM THE EXTRASOLAR PLANETS ENCYCLOPEDIA

As in MK16, we follow the approach of <cit.> and divide the planet population into two sets, “small” and “large,” with R_p = 12 R_⊕ and M_p = 150 M_⊕ serving as the rough dividing values of radius and mass, respectively. For the smaller planets we adopt the M_p(R_p) relation given by Equation (<ref>), whereas for the larger ones we employ the empirical distributions shown in Figure <ref> (which are sampled independently). It is seen that the variation in the value of R_p for this set is much smaller than that in M_p, justifying the adoption of the ansatz R_p = constant for the analytic approximations in Section <ref>.

We maintained the separation into “small” and “large” planets in constructing the age distribution. The left panel of Figure <ref> shows the results for the larger planets.
It is seen that the bulk of the systems have t_age in the range ∼1–5 Gyr: this is considerably broader than the distribution used in MK16, which was biased toward younger systems. We employed the values that delineate this range (1 and 5 Gyr) in plotting the isochrone curves in Figures <ref> and <ref>. The age distribution for the smaller planets is shown in the right panel of Figure <ref>. Although the number of systems returned by the search in this case was small, there is a strong indication that the observed distribution is qualitatively different from the one in the left panel, justifying the separate catalog queries that we made.

The above empirical distributions were obtained without filtering on the basis of the listed errors. While these are typically not large for the planets' radii and masses, they can be significant for the systems' ages, with a factor of 2 uncertainty in the value of t_age being fairly common.

§ REFERENCES

Antonini, F., Hamers, A. S., & Lithwick, Y. 2016, 152, 174
Barker, A. J. 2011, 414, 1365
Barker, A. J., & Ogilvie, G. I. 2009, 395, 2268
Barker, A. J., & Ogilvie, G. I. 2010, 404, 1849
Beaugé, C., & Nesvorný, D. 2013, 763, 12
Bonomo, A. S., Desidera, S., Benatti, S., et al. 2017, 602, A107
Bryan, M. L., Knutson, H. A., Howard, A. W., et al. 2016, 821, 89
Dawson, R. I., Murray-Clay, R. A., & Johnson, J. A. 2015, 798, 66
Dobbs-Dixon, I., Lin, D. N. C., & Mardling, R. A. 2004, 610, 464
Eggleton, P. P., Kiseleva, L. G., & Hut, P. 1998, 499, 853
Essick, R., & Weinberg, N. N. 2016, 816, 18
Fabrycky, D. C., Johnson, E. T., & Goodman, J. 2007, 665, 754
Ford, E. B., & Rasio, F. A. 2006, 638, L45
Husnoo, N., Pont, F., Mazeh, T., et al. 2012, 422, 3151
Hut, P. 1981, 99, 126
Jackson, B., Greenberg, R., & Barnes, R. 2008, 678, 1396
Königl, A., Giacalone, S., & Matsakos, T. 2017, in press
Laughlin, G., Crismani, M., & Adams, F. C. 2011, 729, L7
Lin, D. N. C., Bodenheimer, P., & Richardson, D. C. 1996, Nature, 380, 606
Lucy, L. B., & Sweeney, M. A. 1971, 76, 544
Matsakos, T., & Königl, A. 2015, 809, L20
Matsakos, T., & Königl, A. 2016, 820, L8
Matsumura, S., Peale, S. J., & Rasio, F. A. 2010, 725, 1995
Matsumura, S., Takeda, G., & Rasio, F. A. 2008, 686, L29
Mazeh, T., Holczer, T., & Faigler, S. 2016, 589, A75
Nelson, B. E., Ford, E. B., & Rasio, F. A. 2017, arXiv:1703.09711
Petrovich, C., & Tremaine, S. 2016, 829, 132
Pont, F., Husnoo, N., Mazeh, T., & Fabrycky, D. 2011, 414, 1278
Rasio, F. A., & Ford, E. B. 1996, Science, 274, 954
Rauer, H., Catala, C., Aerts, C., et al. 2014, ExA, 38, 249
Ricker, G. R., Winn, J. N., Vanderspek, R., et al. 2014, Proc. SPIE, 9143, 914320
Schlaufman, K. C., & Winn, J. N. 2016, 825, 62
Socrates, A., Katz, B., Dong, S., & Tremaine, S. 2012, 750, 106
Southworth, J., Hinse, T. C., Dominik, M., et al. 2009, 707, 167
Steffen, J. H., & Coughlin, J. L. 2016, PNAS, 113, 12023
Szabó, G. M., & Kiss, L. L. 2011, 727, L44
Thommes, E. W., Matsumura, S., & Rasio, F. A. 2008, Science, 321, 814
Valsecchi, F., & Rasio, F. A. 2014, 787, L9
Walkowicz, L. M., & Basri, G. S. 2013, 436, 1883
Weiss, L. M., Marcy, G. W., Rowe, J. F., et al. 2013, 768, 14
Wu, Y., & Lithwick, Y. 2011, 735, 109
Youdin, A. N. 2011, 742, 38
1 Centre for Theoretical Atomic, Molecular and Optical Physics, Queen's University Belfast, Belfast BT7 1NN, United Kingdom
2 Laboratoire Kastler Brossel, Sorbonne Université, CNRS, ENS-PSL Research University, Collège de France, 4 place Jussieu Case 74, F-75005 Paris, France
3 Laboratoire Charles Coulomb (L2C), UMR 5221 CNRS-Université de Montpellier, F-34095 Montpellier, France
4 Institut Universitaire de France, 1 rue Descartes, F-75231 Paris, France
B. Reid
05.30.-d Quantum statistical mechanics
05.70.-a Thermodynamics
07.20.Pe Heat engines

We propose a system made of three quantum harmonic oscillators as a compact quantum engine for producing mechanical work. The three oscillators play respectively the role of the hot bath, the working medium and the cold bath. The working medium performs an Otto cycle during which its frequency is changed and it is sequentially coupled to each of the two other oscillators. As the two environments are finite, the lifetime of the machine is finite and after a number of cycles it stops working and needs to be reset. Remarkably, we show that this machine can extract more than 90% of the available energy during 70 cycles. Differently from usually investigated infinite-reservoir configurations, this machine allows the protection of induced quantum correlations, and we analyse the entanglement and quantum discord generated during the strokes. Interestingly, we show that high work generation is always accompanied by large quantum correlations. Our predictions can be useful for energy management at the nanoscale, and can be relevant for experiments with trapped ions and experiments with light in integrated optical circuits.

A self-contained quantum harmonic engine
B. Reid1, S. Pigeon1,2, M. Antezza3,4, G. De Chiara1,3
December 30, 2023
========================================================

§ INTRODUCTION

The recent renewed interest in quantum aspects of out-of-equilibrium thermodynamics <cit.> has triggered an intensive research program to design and realise thermal engines whose working substance is a quantum system <cit.>. Most of the proposals employ a few qubits or quantum harmonic oscillators as the working substance, but a few works explore the possibility of using quantum many-body systems <cit.>. The interest in quantum thermal engines has received a huge thrust thanks to the experimental realisation of prototypes with trapped ions <cit.> and solid state devices <cit.>. A remarkable open problem is whether quantum effects, such as coherence, entanglement or more general quantum correlations like discord, improve the engines' performance compared to their classical counterparts. A unified response to the question is still lacking, as many of the results found in the literature are model dependent.

In all these proposals, the hot and cold environments required for the functioning of the machine are assumed to be infinite, although not necessarily in equilibrium. In this work, we challenge this paradigm and consider a compact quantum engine made entirely of three quantum harmonic oscillators (see Fig. <ref>(a)). One of them, the working substance, interacts sequentially with each of the other oscillators, acting as the hot and cold reservoirs, and is subject to a compression and expansion stage, thus realising the Otto cycle.
Contrary to a recent proposal employing a few qubits subject to dephasing for the hot and cold reservoir <cit.>, the time evolution of our three-oscillator engine is always unitary. Thermalisation of quantum systems in contact with finite structured environments has been discussed in the past <cit.>. The modelling of open quantum system dynamics in the presence of a small environment used as a calorimeter has been studied in Ref. <cit.>. In our model, due to the finiteness and harmonicity of the environments, the working substance does not thermalise in the isochoric strokes and does not evolve in a perfect cyclic fashion. Nevertheless, we show in the following that our harmonic engine is capable of performing many cycles and producing more than 90% of the maximum extractable work. An important consequence of the finiteness of the reservoirs is that the working substance becomes and remains quantum correlated with the reservoirs, as we show using entanglement negativity and quantum discord, which we analyse using initial thermal and squeezed states. Our engine is self-contained and can be embedded in a larger quantum system without the need of controlled external reservoirs. Thanks to the simplicity of the model, which can be solved exactly, we believe that our engine could be implemented in any experiment with controlled quantum harmonic oscillators as, for example, trapped ions or optical modes (see Fig. <ref>(b)). A detailed discussion of these experimental proposals follows.

§ MODEL

We consider three quantum harmonic oscillators of equal mass m and frequency ω_i subject to the local Hamiltonians H_ii = mω_i^2 x_i^2/2 + p_i^2/2m. The position and momentum operators for oscillator i are given by x_i = √(ħ/2mω_i)(a_i + a^†_i) and p_i = -i√(ħmω_i/2)(a_i - a^†_i), respectively, with a_i (a_i^†) the annihilation (creation) operators satisfying [a_i, a_j^†] = δ_ij, corresponding to the vacuum state |0⟩_i. We have set ħ = 1 everywhere. The interaction Hamiltonian between oscillators i and j is H_ij = α_ij(a_i^† a_j + a_i a_j^†), characterised by the coupling strength α_ij. If the oscillators are resonant (ω_i = ω_j), it follows that [H_ii + H_jj, H_ij] = 0, such that turning the interaction on and off does not change the energy of the system or require external work. The oscillators are initially uncorrelated and each prepared either in a thermal state with density matrix ρ_i = e^{-β_i H_ii}/Z_i, where β_i = 1/(k_B T_i) is the inverse temperature, k_B the Boltzmann constant (we set k_B = 1 everywhere) and Z_i = Tr(e^{-β_i H_ii}) the corresponding partition function, or in a squeezed vacuum state |r_i⟩ = exp[r_i(a_i^2 - a_i^{†2})/2]|0⟩_i with real squeezing parameter r_i.

The total Hamiltonian is a quadratic form of the oscillators' positions and momenta: H = ∑_ij=1^3 H_ij = ∑_α,β=1^6 h_αβ R_α R_β, with R = {x_1, x_2, x_3, p_1, p_2, p_3}. As a consequence, the state of the three oscillators is always Gaussian and can be fully characterised by the covariance matrix σ, with elements σ_αβ = (1/2)⟨R_α R_β + R_β R_α⟩ - ⟨R_α⟩⟨R_β⟩. The time evolution of the covariance matrix is governed by the Lyapunov equation σ̇(t) = Aσ(t) + σ(t)A^T, where A = Ωh and Ω_αβ = -i[R_α, R_β].

§ THE CYCLE

We study the Otto cycle, described by two isentropic and two isochoric processes, or strokes. In our finite environment model, oscillators 1, 2 and 3 play the role of the hot environment, the working medium and the cold environment, respectively.
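Before walking through the strokes, note that for a constant matrix A (as during any one stroke) the Lyapunov equation above has the closed-form solution σ(t) = e^{At} σ(0) e^{A^T t}, which is also how the Supplementary Material proceeds. A minimal numerical sketch (ours), assuming A and σ(0) are given in the R = (x_1, x_2, x_3, p_1, p_2, p_3) ordering and that, with ħ = 1 and m = ω_i = 1, the vacuum covariance matrix is the identity times 1/2:

```python
import numpy as np
from scipy.linalg import expm

def propagate(sigma0, A, t):
    """sigma(t) = expm(A t) @ sigma0 @ expm(A t).T solves
    d(sigma)/dt = A sigma + sigma A^T for time-independent A."""
    U = expm(A * t)
    return U @ sigma0 @ U.T

# Example: three uncoupled unit-frequency oscillators in their vacuum state.
sigma0 = 0.5 * np.eye(6)
A = np.block([[np.zeros((3, 3)), np.eye(3)],
              [-np.eye(3), np.zeros((3, 3))]])   # free evolution, omega_i = 1
# The vacuum is invariant under free evolution, so this returns 0.5 * identity.
print(propagate(sigma0, A, 1.0))
```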
The frequencies of oscillators 1 and 3 are constant in time, ω_1 > ω_3, while the frequency of the second oscillator is changed in time during the compression/expansion strokes between ω_1 and ω_3. Initially all couplings are zero, α_ij = 0. Using the notation E_2i,f^(s) to denote the energy of the second oscillator at either the initial or final time of the stroke s, and noticing that E_2f^(s) ≡ E_2i^(s+1), the Otto cycle can be described as follows (see Fig. <ref>(a)):

* Compression: The frequency of the second oscillator is changed from ω_3 to ω_1 in a time τ_comp according to the schedule ω_2^2(t) = ω_3^2 + (ω_1^2-ω_3^2) t/τ_comp. The work done on the system during this stroke is defined as W_1 = E_2i^(2) - E_2i^(1).
* Heating: The first and second oscillators are suddenly coupled for a time τ_H with coupling constant α_12 ≠ 0, transferring energy between them. The energy transferred in this stroke is Q_1 = E_2i^(2) - E_2i^(3).
* Expansion: The frequency of the second oscillator is reduced back to ω_3: ω_2^2(t) = ω_3^2 + (ω_3^2-ω_1^2)(t - τ_comp - τ_H)/τ_exp in a time τ_exp = τ_comp. The work done is W_2 = E_2i^(4) - E_2i^(3).
* Cooling: The second and third oscillators are suddenly coupled for a time τ_C with coupling constant α_23 ≠ 0, transferring energy Q_2 = E_2i^(4) - E_2f^(4).

The evolution of the covariance matrix during each stroke can be calculated analytically. For the heating and cooling strokes, when two oscillators are coupled, the evolution operators can be computed by performing a Bogoliubov transformation from the local coupled modes to the normal uncoupled modes. For the schedule we are considering, the evolution operators for the compression and expansion strokes can be expressed in terms of Airy functions <cit.>. Details are provided in the Supplementary Material. Two extreme cases, the instantaneous change of frequency and the very slow ramp, are particularly simple and we will discuss them in detail in the next section.

According to our definition, negative work means extracted/produced and negative heat means absorbed by the working medium. Contrary to conventional engines, at the end of the Otto cycle it is not guaranteed that the second oscillator has returned to its initial state, so the energy balance dictated by the first law reads W_1 + W_2 = Q_1 + Q_2 + ΔU, where ΔU = E_2f^(4) - E_2i^(1) is the variation of the internal energy of the second oscillator in each cycle. Although at each cycle the amount of work generated and heat exchanged might be different, we define a cycle efficiency as the ratio of the work 𝒲 = W_1 + W_2 produced in one cycle plus the variation of the internal energy ΔU and the heat absorbed from the two environmental oscillators:

\begin{equation}
\eta = \frac{\max(0,\,-\mathcal{W}+\Delta U)}{\sum_{Q_i<0}|Q_i|}.
\end{equation}

Thus, the machine has a positive efficiency if it produces work or if it accumulates energy in the working medium (like a battery) to be used in a subsequent cycle. Thanks to the energy balance, 0 ≤ η ≤ 1, and η = 1 when both Q_1 and Q_2 are negative, i.e. no heat is released. In the standard case with infinite reservoirs, Q_1 < 0 and Q_2 > 0, we obtain the more common formula η = 1 - Q_2/|Q_1|. Looking at the whole system of three quantum oscillators as a driven system initially in a non-passive state (the product of local thermal passive states is not necessarily passive), we can assess the performance of the process by looking at the ratio of the work done, if any, and the ergotropy ε, which we compute numerically <cit.>.
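As a bookkeeping illustration (ours, not the authors'), the per-cycle quantities and the efficiency defined above follow directly from the energies of the working oscillator at the stroke boundaries:

```python
def cycle_summary(E1i, E2i, E3i, E4i, E4f):
    """Energies of oscillator 2 at the start of strokes 1-4 and at the end
    of stroke 4.  Sign conventions as in the text: negative work means
    extracted, negative heat means absorbed by the working medium."""
    W1 = E2i - E1i              # compression
    Q1 = E2i - E3i              # heating
    W2 = E4i - E3i              # expansion
    Q2 = E4i - E4f              # cooling
    W, dU = W1 + W2, E4f - E1i
    assert abs(W - (Q1 + Q2 + dU)) < 1e-12   # first-law balance
    absorbed = sum(abs(q) for q in (Q1, Q2) if q < 0)
    eta = max(0.0, -W + dU) / absorbed if absorbed > 0 else 0.0
    return {"W": W, "Q1": Q1, "Q2": Q2, "dU": dU, "eta": eta}
```

Note that the first-law identity holds exactly for these definitions, so the assertion serves only as a guard against bookkeeping mistakes. The ergotropy, defined next, is computed separately.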
Ergotropy is defined as the maximum amount of work that can be extracted from a quantum system through a unitary operation. Briefly, consider the initial state of a system ρ_i = ∑_j q_j|q_j⟩⟨q_j| with Hamiltonian H = ∑_j ϵ_j|ϵ_j⟩⟨ϵ_j|, where we assume the ordering q_j ≥ q_j+1 and ϵ_j ≤ ϵ_j+1. With this ordering, the least energetic state that can be obtained with a unitary operation can be written as ρ_f = ∑_j q_j|ϵ_j⟩⟨ϵ_j|. The ergotropy can then be defined as the work extracted in the process:

\begin{equation}
\varepsilon = \sum_{ij} q_j \epsilon_i \left(|\langle q_j|\epsilon_i\rangle|^2 - \delta_{ij}\right).
\end{equation}

§ ONE CYCLE

We start our analysis with the case in which the interactions between the oscillators are very small, thus mimicking the weak coupling regime of open quantum systems. In this regime we can expand all our expressions up to second order in α_ij τ_H,C. We start with the case in which the compression/expansion strokes are instantaneous. We also assume that the second and third oscillators are initially thermalised (β_2 = β_3) for thermal states, or have the same initial squeezing parameter (r_2 = r_3) for squeezed states. Under these assumptions, the total work obtained after one cycle for thermal states is given by

\begin{equation}
\mathcal{W}_{\rm th} = \frac{\tau_H^2(\omega_1^2-\omega_3^2)}{4\omega_1\omega_3}\left[\omega_1(\alpha_{12}^2+\omega_1^2-\omega_3^2)\,c_3 - \alpha_{12}^2\omega_3\,c_1\right],
\end{equation}

where for ease of notation we have written c_i = coth(β_iω_i/2). The equivalent expression for initially squeezed states is given by

\begin{equation}
\mathcal{W}_{\rm sq} = \frac{\tau_H^2(\omega_1^2-\omega_3^2)}{4\omega_1\omega_3}\left[\omega_1(\alpha_{12}^2+\omega_1^2-e^{4r_3}\omega_3^2)\,e^{-2r_3} - \alpha_{12}^2\omega_3\,e^{2r_1}\right].
\end{equation}

In order for work to be extracted, we require Eqs. (<ref>, <ref>) to be negative. In the limit of zero initial temperature (similarly zero squeezing) in oscillators two and three, the special case we will consider here, the following inequality must be satisfied:

\begin{equation}
X > \frac{\omega_1}{\omega_3}\left(1+\frac{\omega_1^2-\omega_3^2}{\alpha_{12}^2}\right),
\end{equation}

where X = coth(β_1ω_1/2) for initially thermal states and X = e^{2r_1} for initially squeezed states. This inequality holds for sufficiently large temperatures T_1 or for large initial squeezing r_1. Since X represents the excitation energy of the first oscillator, inequality (<ref>) is a requirement on the initial available energy of the hot oscillator.

For these two cases we address the entanglement generated during the heating stroke between the first and second oscillators. This is the largest entanglement created during the process, since the following strokes deteriorate or destroy it. For Gaussian states it is convenient to use the logarithmic negativity as a measure of entanglement <cit.>. For two oscillators i and j, this is defined as E_ij = max(0, -log|2ν_ij|), where ν_ij is the smallest symplectic eigenvalue of the partially transposed covariance matrix σ^T_j = Pσ_ij P, where σ_ij is the restriction of the full covariance matrix to the oscillators i and j and P = diag(1,1,1,-1). Notice that entanglement is nonzero only if ν_ij < 1/2. When the oscillators are initially prepared in thermal states we obtain

\begin{equation}
\nu_{12} = c_3\,\frac{(1+A)\,\omega_1\omega_3 c_1^2 - A(\omega_1^2+\omega_3^2)\,c_1 c_3 - (1-A)\,\omega_1\omega_3 c_3^2}{2\omega_1\omega_3\,(c_1^2-c_3^2)}
\end{equation}

with A = 2(α_12 τ_H)^2. If we consider T_3 = 0, such that c_3 → 1, the condition for entanglement generation (ν_12 < 1/2) becomes coth(β_1ω_1) < (ω_1^2+ω_3^2)/(2ω_1ω_3), which happens for low enough temperatures. However, due to Eq. (<ref>), there is then no work produced. Thus, for initial thermal states, work extraction and entanglement are incompatible.
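The logarithmic negativity above is easy to evaluate numerically. A minimal sketch (ours), using the interleaved (x_i, p_i, x_j, p_j) ordering adopted in the Supplementary Material, in which the symplectic eigenvalues of a covariance matrix σ are the moduli of the eigenvalues of iΩσ:

```python
import numpy as np

def log_negativity(sigma_ij):
    """sigma_ij: 4x4 covariance matrix of modes i and j in the ordering
    (x_i, p_i, x_j, p_j).  Returns E = max(0, -ln|2 nu|), with nu the
    smallest symplectic eigenvalue of the partially transposed state."""
    P = np.diag([1.0, 1.0, 1.0, -1.0])       # momentum flip on mode j
    st = P @ sigma_ij @ P
    Omega = np.array([[0, 1, 0, 0],          # symplectic form for (x, p) pairs
                      [-1, 0, 0, 0],
                      [0, 0, 0, 1],
                      [0, 0, -1, 0]])
    nu_min = np.min(np.abs(np.linalg.eigvals(1j * Omega @ st)))
    return max(0.0, -np.log(2.0 * nu_min))

# Example: a two-mode squeezed vacuum with r = 0.5 gives E = 2r = 1.0.
r = 0.5
ch, sh = np.cosh(2*r), np.sinh(2*r)
tmsv = 0.5 * np.array([[ch, 0, sh, 0], [0, ch, 0, -sh],
                       [sh, 0, ch, 0], [0, -sh, 0, ch]])
print(log_negativity(tmsv))
```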
As we will see later, however, other forms of quantum correlations might arise in the process. If the first oscillator is instead in an initial squeezed state, the symplectic eigenvalue ν_12 takes the form

\begin{equation}
\nu_{12} = \frac{1}{2} - \frac{\sqrt{A}\,e^{-(r_1+r_3)}|\Phi|}{2\sqrt{2\omega_1\omega_3}} + \frac{A\,e^{-2(r_1+r_3)}|\Phi|^2}{8\omega_1\omega_3},
\end{equation}

where Φ = ω_1 - e^{2(r_1+r_3)}ω_3.

For quasi-static compression/expansion, using the asymptotic formulae presented in the Supplementary Material, we find that work is always extracted after the first cycle,

\begin{equation}
\mathcal{W}_{\rm th} = -\frac{1}{2}\alpha_{12}^2\tau_H^2\,(\omega_1-\omega_3)(c_1-c_3),
\end{equation}

if we have c_1 > c_3, when the oscillators are initially prepared in thermal states. For oscillators prepared in initially squeezed states, the expression for work extraction after the first cycle is given by

\begin{equation}
\mathcal{W}_{\rm sq} = -\frac{\alpha_{12}^2\tau_H^2\,(\omega_1-\omega_3)}{4}\left(e^{2r_1}-e^{2r_3}\right)\left(1-e^{-2(r_1+r_3)}\right),
\end{equation}

which is negative for r_1 > r_3. The symplectic eigenvalue between the first and second oscillators for the quasi-static case is given by, for thermal states,

\begin{equation}
\nu_{12} = \coth\!\left(\frac{\beta_3\omega_3}{2}\right)\left\{\frac{1}{2}-\alpha_{12}^2\tau_H^2\left[\frac{\sinh\!\left(\frac{\beta_1\omega_1-\beta_3\omega_3}{2}\right)}{\sinh\!\left(\frac{\beta_1\omega_1+\beta_3\omega_3}{2}\right)}\right]\right\}.
\end{equation}

Taking T_3 = 0, for which we would expect a higher value of entanglement between the first and second oscillators, the limit of this function is strictly greater than 1/2, so no entanglement is generated. For initially squeezed states the expression for the lowest symplectic eigenvalue is too lengthy to report here. However, preparing the second and third oscillators with zero initial squeezing, entanglement generation is guaranteed between the first and second oscillator: ν_12 = 1/2 - α_12 τ_H sinh r_1 for non-zero r_1. To summarise, to get entanglement created in the initial heating stroke it is sufficient, but not necessary, to prepare the first oscillator in a squeezed state.

In all the cases discussed under the weak coupling approximation, we find that Q_2 = 0 and thus, if 𝒲 < 0, η = 1. However, it is important to stress that, since these analytical calculations refer to a single cycle, the work extracted to ergotropy ratio is very small. In the following we discuss our numerical results in a more general scenario.

§ MANY CYCLES

To fully characterise the performance of the engine over many cycles, we use numerical optimisation to enhance work extraction in our machine. We set β_1 = 10^-2 ω_1^-1, β_2,3 → ∞ and ω_1 = 1. For values of 0 < ω_3 < ω_1, the variables to be optimised are the coupling values α_ij and corresponding coupling times τ_H,C, as well as the compression/expansion time τ_comp. To this end, we employ the built-in Mathematica function NMinimize <cit.>. To avoid large interaction times and large coupling values, which would result in a short-lived engine, each parameter was restricted to a finite interval: 10^-4 ≤ α_ij ω_1^-1 ≤ 0.05, 10^-3 ≤ ω_1 τ_H,C ≤ 1, 1 ≤ ω_1 τ_R ≤ 100. For ω_3 = 0.1 ω_1, the result of the optimisation is shown in Fig. <ref>(a), where the total work extracted to ergotropy ratio is shown. The machine extracts work until the 70th cycle and then reverses itself, absorbing work and dumping heat into the first oscillator, returning almost completely to its initial state. At the end of the 70th cycle the total work extracted from the engine is 𝒲_T ≃ -89.5 ω_1, approximately 91% of the ergotropy of the initial state. It should be noted that the parameters reported in the caption of Fig. <ref> are not unique, and other sets of parameters can provide a similar amount of work extraction. Due to the small value of α_23, we find that the heat exchanged to the third oscillator is negligible: Q_2 ∼ 0, thus the thermal efficiency of the machine is practically 1.
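The optimisation just described was done with Mathematica's NMinimize; an analogous SciPy sketch is below. The function total_work(p), which would simulate the engine for a parameter vector p and return the accumulated work 𝒲_T (negative meaning extracted), is assumed and not shown; the box constraints are the ones quoted above, in units with ω_1 = 1. The global optimiser used here is our choice, not necessarily the algorithm NMinimize selects.

```python
from scipy.optimize import differential_evolution

# Box constraints quoted in the text (omega_1 = 1 units).
bounds = [
    (1e-4, 0.05),   # alpha_12
    (1e-4, 0.05),   # alpha_23
    (1e-3, 1.0),    # tau_H
    (1e-3, 1.0),    # tau_C
    (1.0, 100.0),   # tau_R (compression/expansion ramp)
]

def total_work(p):
    """Assumed engine simulator: runs the Otto cycles for parameters p and
    returns W_T; minimising W_T maximises the extracted work.  Placeholder
    only -- the actual simulation is not reproduced here."""
    raise NotImplementedError

# result = differential_evolution(total_work, bounds, seed=0)
```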
For an optimised engine such as the one presented here, Q_2 will always be small, as most of the energy will be extracted during the expansion stroke of the cycle. This feature is specific to an optimised engine: there exist other sets of parameters, leading to a functioning engine, albeit with a smaller value of extracted work, that have larger values of α_23, giving Q_2 small but non-zero. It is interesting that we obtain the same result for the total work extracted, within our working precision, by eliminating the third oscillator altogether. This is a consequence of the non-cyclic nature of the engine. The choice of preparing the first and second oscillators in their vacuum states is not accidental: we found this case allows for the highest work to ergotropy ratio. However, non-zero initial values of T_2,3 also allow for the extraction of work. In fact, we found that for T_2 = T_3 > 0, and disallowing instances where the second oscillator can “ignore” the third with small coupling values and/or times (such as the case presented above), optimal work extraction decreases linearly with T_3. Setting T_3 = 0 and T_2 > 0, we found that maximum work extraction is identical to that presented above: 𝒲_T ≃ -89.5 ω_1. However, the work to ergotropy ratio is lower.

As shown in the previous section, entanglement between initially thermal states is not present in a regime where work can be extracted, and so we evaluate the non-classicality of this process via the Gaussian quantum discord <cit.>. We provide a brief mathematical definition of Gaussian quantum discord in the Supplementary Material. Figure <ref>(b) shows the discord dynamics between the first and second oscillators as a function of time. It is symmetric about the point of reversal for the engine and is the only non-classical correlation present in the engine, with 𝒟_23, 𝒟_13 = 0. Detailed information on the internal energies of the three oscillators performing many cycles can be found in the Supplementary Material.

Figure <ref> shows optimised work extraction for changing values of ω_3. The maximum amount of extractable work, as a proportion of the ergotropy of the initial state, decreases linearly with increasing ω_3, until it disappears for ω_3 ∼ ω_1. While work extraction is possible for any value of ω_3 < ω_1, clearly our machine operates better in the regime ω_3 ≪ ω_1.

§ QUANTUM CORRELATIONS

The optimised case presented above provides an insight into the performance of the engine for maximum work production. While this is arguably the most important aspect of any machine, it is unclear what connection, if any, exists between work extraction and non-classical correlations in the system. In order to investigate any link between thermodynamic quantities (work, efficiency) and non-classical correlations (entanglement, discord), we simulate the engine working with random parameters. We consider two cases: initially thermal states and initially squeezed states. We fix the frequency of the first oscillator (ω_1 = 1) and the initial temperatures (squeezing parameters) for thermal (squeezed) states. The variables to be randomised are ω_3, the coupling strengths α_ij and times τ_H,C, and τ_comp. The engine is prepared with these random variables and performs Otto cycles until the criterion 𝒲 < 0 is no longer satisfied. This process is repeated ten thousand times. For thermal states, we have chosen β_1 = 10^-2 ω_1^-1 and β_2,3 → ∞.
For squeezed states, we fix r_2,3 = 0 and r_1 = (1/2) arccosh[coth(β_1ω_1/2)], so the initial energy of the first oscillator is equal in both cases. For initial thermal states, there is no significant generation of entanglement, and we assess the amount of quantum correlations between any two oscillators using quantum discord. For initial squeezed states, instead, we show entanglement measured by the logarithmic negativity defined above. For thermal states, Fig. <ref>(a-b-c) show the scatter plots of the maximum discord generated between any two oscillators at the end of a cycle and the total work produced with a given set of parameters. The discord 𝒟_12 between the first two oscillators shows a clear pattern: high values of work are necessarily accompanied by large quantum discord. In order to produce work, there must be a coherent energy transfer between the first and second oscillators, which can only happen if the two oscillators become correlated. This is also reminiscent of synchronisation phenomena in coupled harmonic oscillators (see for example <cit.>). In contrast, the quantum discords of the other pairs of oscillators do not show such a connection, and large values of work are not necessarily followed by large values of discord. Notice also that the discord 𝒟_13 created between the first and third oscillators is mediated through the central oscillator. For initial squeezed states, the results for the maximum negativity created are shown in Fig. <ref>(d-e-f). As in the case of initial thermal states, a strong entanglement between the first two oscillators is necessary for extracting work. There also seems to be a maximum value of entanglement that can be created, connected to the initial amount of squeezing of the first oscillator. Entanglement between the second and third oscillators and between the first and third oscillators tends to be low and correlated with small values of total work. This suggests a minimal involvement of the third oscillator in the work extraction.

§ IMPLEMENTATION

Our self-contained engine can be implemented in any experiment with controlled quantum harmonic oscillators. Two platforms fit our proposal especially well: trapped ions and integrated optical circuits. With trapped ions, the role of each oscillator is played by the ion displacement from equilibrium. The trapping frequency and the ions' distance, determining their interactions, can be accurately controlled <cit.>. Thanks to the advances in state engineering, it is possible to prepare the ions in thermal and squeezed states <cit.>. Integrated optical circuits are another interesting candidate. The time evolution of a quantum oscillator can be mapped to the propagation of an electromagnetic field in a single-mode nanofibre. Preparation of thermal and squeezed states has been well mastered in quantum optics <cit.>. The circuit proposal in Fig. <ref>(b) allows optical states to undergo frequency shifting and coupling via evanescent fields, which realise H_ij <cit.>. The main difficulty with this system is to shift the frequency of the oscillators. For frequency shifting based on the acousto-optic effect, only a modulation of a few percent can be reached with high efficiency <cit.>. Frequency conversion is an active field of research, and new methods based on doped fibres have been recently developed, allowing for a larger frequency modulation but still with smaller efficiency <cit.>.

§ CONCLUSIONS

In this work we have designed the simplest quantum engine made of three quantum harmonic oscillators performing the Otto cycle.
This machine does not require the use of external infinite thermal reservoirs but operates thanks to an initial non-passive state. As such, it can be integrated in a larger quantum machine that needs mechanical work in a very localised region of space. Since its resources are finite, so is its lifetime. However, we have shown that the optimised engine works for about 70 cycles before stopping, extracting approximately 91% of the available energy. Our investigation confirms the common belief, although not conclusively proven, that quantum correlations enhance work extraction, as evidenced numerically in our Fig. <ref>.

Our research opens a new avenue in the study of quantum thermodynamic devices with finite reservoirs. Extending the lifetime of the engine could be achieved by coupling more oscillators onto the first and third oscillators, thus increasing the size of the energy reservoirs <cit.>. It has also been shown that systems coupled to a finite reservoir can mimic the action of a system connected to an infinite reservoir <cit.>. Finally, it would be interesting to extend a program such as the one we presented here to spin systems and many-body lattice systems.

We acknowledge illuminating discussions with O. Abah, A. Ferraro, A. Hewgill, V. Parigi, J. P. Paz and A. J. Roncaglia. GDC thanks the CNRS and the group Theory of Light-Matter and Quantum Phenomena of the Laboratoire Charles Coulomb for hospitality during his stay in Montpellier. BR would like to thank S. Campbell for illuminating discussions. This work was supported by the EU Collaborative project TherMiQ (grant agreement 618074), the John Templeton Foundation (grant ID 43467), the French ANR (ACHN C-Flight), the UK EPSRC and the Professor Caldwell Travel Scholarship.

Supplementary Material

§ EVOLUTION OPERATORS FOR THE VARIOUS STROKES

In this section we find analytically the evolution operators needed to evolve the covariance matrix in time. To this end we need to solve the Lyapunov equation σ̇(t) = Aσ(t) + σ(t)A^T. For a time-independent Hamiltonian, i.e. when the matrix A is constant, the solution is simply

\begin{equation}
\sigma(t) = e^{At}\,\sigma(0)\,e^{A^T t}.
\end{equation}

This is the case for the heating and cooling strokes, when two resonant oscillators are coupled. When the first and second oscillators, both at frequency ω_1, are coupled, α_12 ≠ 0 and α_23 = 0. The matrix A reads

\begin{equation}
A = \begin{pmatrix}
0 & 0 & 0 & 1 & \alpha_{12}/\omega_1 & 0 \\
0 & 0 & 0 & \alpha_{12}/\omega_1 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 \\
-\omega_1^2 & -\alpha_{12}\omega_1 & 0 & 0 & 0 & 0 \\
-\alpha_{12}\omega_1 & -\omega_1^2 & 0 & 0 & 0 & 0 \\
0 & 0 & -\omega_3^2 & 0 & 0 & 0
\end{pmatrix}
\end{equation}

and we obtain, writing c_α = cos(α_12 t), s_α = sin(α_12 t), c_k = cos(ω_k t) and s_k = sin(ω_k t),

\begin{equation}
e^{At} = \begin{pmatrix}
c_\alpha c_1 & -s_\alpha s_1 & 0 & c_\alpha s_1/\omega_1 & s_\alpha c_1/\omega_1 & 0 \\
-s_\alpha s_1 & c_\alpha c_1 & 0 & s_\alpha c_1/\omega_1 & c_\alpha s_1/\omega_1 & 0 \\
0 & 0 & c_3 & 0 & 0 & s_3/\omega_3 \\
-\omega_1 c_\alpha s_1 & -\omega_1 s_\alpha c_1 & 0 & c_\alpha c_1 & -s_\alpha s_1 & 0 \\
-\omega_1 s_\alpha c_1 & -\omega_1 c_\alpha s_1 & 0 & -s_\alpha s_1 & c_\alpha c_1 & 0 \\
0 & 0 & -\omega_3 s_3 & 0 & 0 & c_3
\end{pmatrix}.
\end{equation}

A similar expression for A and e^{At} is obtained when α_23 ≠ 0, α_12 = 0 and ω_2 = ω_3. Now let us consider the compression/expansion strokes. In this case the oscillators are uncoupled and the second oscillator is subject to a time-dependent frequency. The reordered Hamiltonian matrix is

\begin{equation}
A(t) = \begin{pmatrix} 0_3 & 1_3 \\ -\omega^2(t) & 0_3 \end{pmatrix},
\end{equation}

where 0_3 and 1_3 are the 3×3 zero and identity matrices, respectively, while ω^2(t) = diag(ω_1^2, ω_2^2(t), ω_3^2).
The solution to the Lyapunov equation (1) is thus σ(t) = Uσ(0)U^T, where

\begin{equation}
U = \begin{pmatrix} a & b \\ c & d \end{pmatrix}
\end{equation}

and

\begin{align}
a &= {\rm diag}\left(\cos\omega_1 t,\; y(t),\; \cos\omega_3 t\right), \\
b &= {\rm diag}\left(\frac{\sin\omega_1 t}{\omega_1},\; x(t),\; \frac{\sin\omega_3 t}{\omega_3}\right), \\
c &= \dot{a}, \qquad d = \dot{b}.
\end{align}

The functions x(t) and y(t) fulfil the equations

\begin{align}
\ddot{x}(t) &= -\omega_2^2\,x(t), \qquad x(0)=0,\ \dot{x}(0)=1, \\
\ddot{y}(t) &= -\omega_2^2\,y(t), \qquad y(0)=1,\ \dot{y}(0)=0.
\end{align}

These equations can be solved analytically for the schedule ω_2^2(t) = ω_in^2 + (ω_fin^2 - ω_in^2) t/τ. Their solutions are given in terms of Airy functions:

\begin{align}
x(t) &= -\pi\left(\frac{\tau}{\omega_{\rm in}^2-\omega_{\rm fin}^2}\right)^{1/3}\Big\{{\rm Ai}[z(t)]\,{\rm Bi}(w) - {\rm Bi}[z(t)]\,{\rm Ai}(w)\Big\}, \\
y(t) &= -\pi\Big\{{\rm Bi}[z(t)]\,{\rm Ai}'(w) - {\rm Ai}[z(t)]\,{\rm Bi}'(w)\Big\},
\end{align}

where

\begin{align}
w &= -\omega_{\rm in}^2\left[\frac{\tau}{\omega_{\rm in}^2-\omega_{\rm fin}^2}\right]^{2/3}, \\
z(t) &= -\omega_2^2(t)\left[\frac{\tau}{\omega_{\rm in}^2-\omega_{\rm fin}^2}\right]^{2/3}.
\end{align}

In our calculations we specialise to two extreme cases, one in the limit τ → 0, in which case we simply have U = 1_6, and the other for a very slow ramp: τ ≫ max(ω_in^-1, ω_fin^-1). In this case we can use the asymptotic expansion of the Airy functions for large arguments. Under this approximation the evolution operator takes the form

\begin{equation}
U = \begin{pmatrix}
\cos\omega_1\tau & 0 & 0 & \frac{\sin\omega_1\tau}{\omega_1} & 0 & 0 \\
0 & \sqrt{\frac{\omega_{\rm in}}{\omega_{\rm fin}}}\cos\phi & 0 & 0 & \frac{\sin\phi}{\sqrt{\omega_{\rm in}\omega_{\rm fin}}} & 0 \\
0 & 0 & \cos\omega_3\tau & 0 & 0 & \frac{\sin\omega_3\tau}{\omega_3} \\
-\omega_1\sin\omega_1\tau & 0 & 0 & \cos\omega_1\tau & 0 & 0 \\
0 & -\sqrt{\omega_{\rm in}\omega_{\rm fin}}\sin\phi & 0 & 0 & \sqrt{\frac{\omega_{\rm fin}}{\omega_{\rm in}}}\cos\phi & 0 \\
0 & 0 & -\omega_3\sin\omega_3\tau & 0 & 0 & \cos\omega_3\tau
\end{pmatrix},
\end{equation}

where we have set

\begin{equation}
\phi = \frac{2}{3}\,\tau\,\frac{\omega_{\rm fin}^2+\omega_{\rm in}\omega_{\rm fin}+\omega_{\rm in}^2}{\omega_{\rm fin}+\omega_{\rm in}}.
\end{equation}

Equation (<ref>) can be interpreted as the evolution operator for the free evolution of oscillators 1 and 3 with their respective frequencies for a time τ, times a squeezing operator dependent on the ratio ω_in/ω_fin combined with a phase-space rotation with angle φ.

§ DEFINITION OF GAUSSIAN DISCORD

In this section we give a brief mathematical definition of Gaussian discord <cit.>. For convenience, we use a different ordered basis from the main text, R̃ = (x_1, p_1, x_2, p_2), and recall that the elements of a covariance matrix of two oscillators are defined as σ_αβ = (1/2)⟨R̃_αR̃_β + R̃_βR̃_α⟩ - ⟨R̃_α⟩⟨R̃_β⟩. For a bipartite covariance matrix in this basis,

\begin{equation}
\sigma = \begin{pmatrix} A & C \\ C^T & B \end{pmatrix},
\end{equation}

where A, B and C are 2×2 matrices, the matrix A (B) relates only to the mean values in subsystem 1 (2), and the matrix C relates to correlations in these mean values. The Gaussian quantum discord of this state can then be written as

\begin{equation}
\mathcal{D} = h(\sqrt{I_2}) - h(d_-) - h(d_+) + h(\sqrt{E_{\rm min}}),
\end{equation}

where we have

\begin{align}
h(x) &= \left(x+\tfrac{1}{2}\right)\log\left(x+\tfrac{1}{2}\right) - \left(x-\tfrac{1}{2}\right)\log\left(x-\tfrac{1}{2}\right), \\
d_\pm^2 &= \tfrac{1}{2}\left(\lambda \pm \sqrt{\lambda^2-4I_4}\right), \qquad \lambda = I_1+I_2+2I_3,
\end{align}

for I_1 = det(A), I_2 = det(B), I_3 = det(C) and I_4 = det(σ). The quantity E_min is given by the following formula:

\begin{equation}
E_{\rm min} =
\begin{cases}
\left(\dfrac{|I_3|+\sqrt{I_3^2-(I_1-4I_4)\left(I_2-\frac{1}{4}\right)}}{2\left(I_2-\frac{1}{4}\right)}\right)^{2} & {\rm if}\ (I_1 I_2-I_4)^2 \le (I_1+4I_4)\left(I_2+\frac{1}{4}\right)I_3^2, \\[2ex]
\dfrac{1}{I_2}\left(I_1 I_2 - I_3^2 + I_4 - \sqrt{\left(I_1 I_2+I_4-I_3^2\right)^2 - 4 I_1 I_2 I_4}\right) & {\rm otherwise}.
\end{cases}
\end{equation}

§ INTERNAL ENERGY AND QUANTUM CORRELATIONS FOR THE OPTIMISED MANY-CYCLE ENGINE

In this section, we present the internal energy dynamics of each oscillator for the optimised many-cycle case presented in the main text. We also analyse the discord dynamics between each of the oscillators. The parameters used are ω_3 = 0.1ω_1, τ_comp = 85ω_1^-1, τ_H = 0.59ω_1^-1, τ_C = 0.9996ω_1^-1, α_12 = 0.038ω_1, α_23 = 10^-4ω_1. Figure <ref> shows the internal energy dynamics E_i = ⟨H_i⟩ for each oscillator plotted against time in units of the Otto cycle time τ = τ_H + τ_C + 2τ_comp. The system performs 70 Otto cycles satisfying 𝒲 < 0, and the plots also show the subsequent Otto cycles where the engine reverses. At the end of the 140th Otto cycle each oscillator has almost returned to its initial state. As the coupling strength between the second and third oscillators is extremely weak, the internal energy of the third oscillator does not change. The efficiency of this process is unity, as Q_2 = 0. The number of cycles shown in Fig.
§ INTERNAL ENERGY AND QUANTUM CORRELATIONS FOR THE OPTIMISED MANY-CYCLE ENGINE

In this section we present the internal energy dynamics of each oscillator for the optimised many-cycle case presented in the main text, and we analyse the discord dynamics between each pair of oscillators. The parameters used are ω_3 = 0.1ω_1, τ_comp = 85ω_1^{-1}, τ_H = 0.59ω_1^{-1}, τ_C = 0.9996ω_1^{-1}, α_12 = 0.038ω_1, α_23 = 10^{-4}ω_1. Figure <ref> shows the internal energy E_i = ⟨H_i⟩ of each oscillator plotted against time in units of the Otto cycle time τ = τ_H + τ_C + 2τ_comp. The system performs 70 Otto cycles satisfying 𝒲 < 0, and the plots also show the subsequent Otto cycles, during which the engine reverses. At the end of the 140th Otto cycle each oscillator has almost returned to its initial state. As the coupling between the second and third oscillators is extremely weak, the internal energy of the third oscillator does not change; the efficiency of this process is unity, as Q_2 = 0.

The number of cycles shown in Fig. <ref> makes the fine details of the dynamics difficult to parse, so Figure <ref> shows a closer view of the internal energies of the first and second oscillators from the 40th to the 44th stroke. The internal energy of the first oscillator is shown in Fig. <ref>(a). As it interacts with the second oscillator only during the heating stroke, its energy is constant for most of the cycle. Because the compression and expansion stages are much slower than the heating and cooling stages, the latter appear instantaneous on this scale. Fig. <ref>(b) shows the internal energy of the second oscillator. The large slopes of each arch correspond to the energy increase and decrease caused by the compression and expansion stages, while the small peak at the top of each arch corresponds to the heating stroke; the height of each peak matches the corresponding energy reduction in panel <ref>(a).
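As a consistency check of the Airy-function solution given in the first section of this supplement, one can compare x(t) against direct numerical integration of ẍ = -ω_2^2(t)x. The sketch below does this for illustrative ramp parameters (assumptions, not the paper's values).

import numpy as np
from scipy.special import airy
from scipy.integrate import solve_ivp

# Illustrative linear-in-omega^2 ramp from w_in down to w_fin over time tau.
w_in, w_fin, tau = 1.0, 0.5, 20.0

c = (tau / (w_in**2 - w_fin**2)) ** (1 / 3)
w_arg = -w_in**2 * c**2            # the constant w of the analytic solution

def z(t):
    # z(t) = -omega_2^2(t) c^2 with omega_2^2(t) linear in t.
    return -(w_in**2 + (w_fin**2 - w_in**2) * t / tau) * c**2

def x_airy(t):
    """Analytic solution x(t) in terms of Airy functions, as given above."""
    Ai_z, _, Bi_z, _ = airy(z(t))
    Ai_w, _, Bi_w, _ = airy(w_arg)
    return -np.pi * c * (Ai_z * Bi_w - Ai_w * Bi_z)

# Direct integration of x'' = -omega_2^2(t) x with x(0) = 0, x'(0) = 1;
# note that -omega_2^2(t) = z(t)/c^2.
sol = solve_ivp(lambda t, y: [y[1], z(t) / c**2 * y[0]], (0, tau), [0, 1],
                rtol=1e-10, atol=1e-12)
print(x_airy(tau), sol.y[0, -1])   # the two values should agree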
http://arxiv.org/abs/1708.07435v3
{ "authors": [ "Brendan Reid", "Simon Pigeon", "Mauro Antezza", "Gabriele De Chiara" ], "categories": [ "quant-ph", "cond-mat.stat-mech" ], "primary_category": "quant-ph", "published": "20170824142354", "title": "A self-contained quantum harmonic engine" }
Learning Spatio-Temporal Features with 3D Residual Networks for Action Recognition
Kensho Hara, Hirokatsu Kataoka, Yutaka Satoh
National Institute of Advanced Industrial Science and Technology (AIST)
Tsukuba, Ibaraki, Japan
{kensho.hara, hirokatsu.kataoka, yu.satou}@aist.go.jp
December 30, 2023
====================================================================================================================================================================================================================

Convolutional neural networks with spatio-temporal 3D kernels (3D CNNs) have the ability to directly extract spatio-temporal features from videos for action recognition. Although 3D kernels tend to overfit because of their large number of parameters, 3D CNNs have improved greatly thanks to recent huge video databases. However, the architecture of 3D CNNs is relatively shallow compared with the very deep architectures, such as residual networks (ResNets), that have been so successful among 2D CNNs. In this paper, we propose a 3D CNN based on ResNets toward a better action representation. We describe the training procedure of our 3D ResNets in detail. We experimentally evaluate the 3D ResNets on the ActivityNet and Kinetics datasets. The 3D ResNets trained on Kinetics did not suffer from overfitting despite the large number of parameters of the model, and achieved better performance than relatively shallow networks, such as C3D. Our code and pretrained models (e.g. Kinetics and ActivityNet) are publicly available at https://github.com/kenshohara/3D-ResNets.

§ INTRODUCTION

One important type of real-world information is human actions. Automatically recognizing and detecting human actions in videos is widely used in applications such as surveillance systems, video indexing, and human-computer interaction.

Convolutional neural networks (CNNs) achieve high performance in action recognition <cit.>. Most of these CNNs use 2D convolutional kernels <cit.>, similar to the CNNs for image recognition. The two-stream architecture <cit.>, which consists of RGB and optical-flow streams, is often used to represent spatio-temporal information in videos; combining the two streams improves action recognition performance.

Another approach that captures spatio-temporal information adopts spatio-temporal 3D convolutional kernels <cit.> instead of 2D ones. Because of the large number of parameters of 3D CNNs, training them on relatively small video datasets, such as UCF101 <cit.> and HMDB51 <cit.>, leads to lower performance compared with 2D CNNs pretrained on large-scale image datasets, such as ImageNet <cit.>. Recent large-scale video datasets, such as Kinetics <cit.>, contribute greatly to improving the recognition performance of 3D CNNs <cit.>. With such data, 3D CNNs are competitive with 2D CNNs even though their architectures are relatively shallow.

Very deep 3D CNNs for action recognition have not been explored enough because of the training difficulty caused by their large number of parameters. Prior work in image recognition shows that very deep CNN architectures improve recognition accuracy <cit.>. Exploring various deeper models for 3D CNNs and achieving lower loss at convergence are important for improving action recognition performance. Residual networks (ResNets) <cit.> are one of the most powerful such architectures.
Applying the ResNet architecture to 3D CNNs is expected to contribute further improvements in action recognition performance.

In this paper, we experimentally evaluate 3D ResNets in order to obtain good models for action recognition; in other words, the goal is to generate a standard pretrained model for spatio-temporal recognition. We simply extend the 2D-based ResNets to 3D ones. We train the networks using the ActivityNet and Kinetics datasets and evaluate their recognition performance. Our main contribution is exploring the effectiveness of ResNets with 3D convolutional kernels. We expect that this work will bring further advances to action recognition using 3D CNNs.

§ RELATED WORK

We here introduce action recognition databases and approaches.

§.§ Action Recognition Database

HMDB51 <cit.> and UCF101 <cit.> are the most successful databases in action recognition. The recent consensus, however, is that these two databases are not large-scale: it is difficult to train good models on them without overfitting. More recently, huge databases such as Sports-1M <cit.> and YouTube-8M <cit.> have been proposed. These databases are big enough, but their annotations are noisy and only video-level labels are provided (i.e. frames that do not relate to the target activities are included). Such noise and unrelated frames might prevent models from training well. In order to create a successful pretrained model analogous to 2D CNNs trained on ImageNet <cit.>, Google DeepMind released the Kinetics human action video dataset <cit.>. The Kinetics dataset includes 300,000 or more trimmed videos covering 400 categories. The size of Kinetics is smaller than that of Sports-1M and YouTube-8M, whereas the quality of its annotation is extremely high. We therefore use Kinetics to optimize the 3D ResNets.

§.§ Action Recognition Approach

One popular approach to CNN-based action recognition is two-stream CNNs with 2D convolutional kernels. Simonyan et al. proposed a method that uses RGB and stacked optical-flow frames as appearance and motion information, respectively <cit.>, and showed that combining the two streams improves action recognition accuracy. Many methods based on two-stream CNNs have been proposed to improve performance further <cit.>. Feichtenhofer et al. proposed combining two-stream CNNs with ResNets <cit.>, showing that the ResNet architecture is effective for action recognition with 2D CNNs. Different from the above-mentioned approaches, we focus on 3D CNNs, which have recently outperformed 2D CNNs when trained on large-scale video datasets.

The other main approach adopts CNNs with 3D convolutional kernels. Ji et al. proposed applying 3D convolution to extract spatio-temporal features from videos. Tran et al. trained 3D CNNs, called C3D, using the Sports-1M dataset <cit.>; they experimentally found that a 3 × 3 × 3 convolutional kernel achieved the best performance. Varol et al. showed that expanding the temporal length of the inputs to 3D CNNs improves recognition performance <cit.>. They also found that using optical flow as input to 3D CNNs outperforms RGB input, and that combining RGB and optical flow achieved the best performance. Kay et al. showed that the results of 3D CNNs on their Kinetics dataset are competitive with those of 2D CNNs pretrained on ImageNet, whereas the results of 3D CNNs on UCF101 and HMDB51 are inferior to those of 2D CNNs. Carreira et al. introduced the inception architecture <cit.>, a very deep network (22 layers), to 3D CNNs and achieved state-of-the-art performance <cit.>.
In this paper, we introduce the ResNet architecture, which outperforms the inception architecture in image recognition, to 3D CNNs.

§ 3D RESIDUAL NETWORKS

§.§ Network Architecture

Our network is based on ResNets <cit.>. ResNets introduce shortcut connections that bypass a signal from one layer to the next. These connections pass the gradient flow of the network from later layers back to early layers, and ease the training of very deep networks. Figure <ref> shows the residual block, the basic element of ResNets; the connection bypasses a signal from the top of the block to its tail. ResNets consist of multiple residual blocks.

Table <ref> shows our network architecture. The difference between our networks and the original ResNets <cit.> is the number of dimensions of the convolutional kernels and pooling: our 3D ResNets perform 3D convolution and 3D pooling. The convolutional kernels are of size 3 × 3 × 3, and the temporal stride of conv1 is 1, similar to C3D <cit.>. The network uses 16-frame RGB clips as inputs, so the size of an input clip is 3 × 16 × 112 × 112. Down-sampling of the inputs is performed by conv3_1, conv4_1 and conv5_1 with a stride of 2. When the number of feature maps increases, we adopt identity shortcuts with zero-padding (type A in <cit.>) to avoid increasing the number of parameters.

§.§ Implementation

§.§.§ Training

We use stochastic gradient descent (SGD) with momentum to train our network, and randomly generate training samples from the videos in the training data to perform data augmentation. We first select the temporal position of each sample by uniform sampling, and generate a 16-frame clip around the selected position; if a video is shorter than 16 frames, we loop it as many times as necessary. We then randomly select the spatial position from the 4 corners or the center, similar to <cit.>. In addition to the position, we also select the spatial scale of each sample to perform multi-scale cropping <cit.>. The scale is selected from {1, 1/2^{1/4}, 1/√(2), 1/2^{3/4}, 1/2}, where scale 1 means the maximum scale (i.e. the crop size equals the length of the short side of the frame). The aspect ratio of the cropped frame is 1. Each generated sample is horizontally flipped with 50% probability, and we also perform mean subtraction for each sample. All generated samples have the same class labels as their original videos.

To train the 3D ResNets on the Kinetics dataset, we use SGD with a mini-batch size of 256 on 4 GPUs (NVIDIA TITAN X) using the training samples described above. The weight decay is 0.001 and the momentum is 0.9. We start from a learning rate of 0.1, and divide it by 10 three times, each time the validation loss saturates. In preliminary experiments on the ActivityNet dataset, a large learning rate and batch size were important for achieving good recognition performance.
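The sampling procedure described above can be summarized in a short sketch. The following minimal NumPy implementation assumes the video is stored as a (T, H, W, 3) uint8 array; the nearest-neighbour resize, the always-wrapping temporal indexing, and the per-clip mean subtraction are simplifications of the actual pipeline.

import random
import numpy as np

SCALES = [1.0, 2 ** -0.25, 2 ** -0.5, 2 ** -0.75, 0.5]

def sample_clip(video, clip_len=16, size=112):
    """Sample one training clip via multi-scale corner cropping."""
    T, H, W, _ = video.shape
    # Temporal position by uniform sampling; short videos are looped.
    start = random.randint(0, T - 1)
    idx = [(start + i) % T for i in range(clip_len)]
    clip = video[idx]
    # Spatial scale relative to the short side, then a corner/center position.
    side = int(min(H, W) * random.choice(SCALES))
    positions = [(0, 0), (0, W - side), (H - side, 0), (H - side, W - side),
                 ((H - side) // 2, (W - side) // 2)]
    y, x = random.choice(positions)
    crop = clip[:, y:y + side, x:x + side]
    # Nearest-neighbour resize of every frame to size x size (keeps the
    # sketch dependency-free; a real pipeline would interpolate).
    ys = np.arange(size) * side // size
    crop = crop[:, ys][:, :, ys]
    if random.random() < 0.5:          # horizontal flip with 50% probability
        crop = crop[:, :, ::-1]
    out = crop.astype(np.float32)
    return out - out.mean()            # crude stand-in for mean subtraction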
§.§.§ Recognition

We recognize actions in videos using the trained model. We adopt a sliding-window manner to generate input clips, i.e. each video is split into non-overlapping 16-frame clips. Each clip is cropped around the center position at the maximum scale. We estimate the class probabilities of each clip using the trained model, and average them over all clips of a video to recognize the action in that video.

§ EXPERIMENTS

§.§ Dataset

In the experiments, we used the ActivityNet (v1.3) <cit.> and Kinetics <cit.> datasets. The ActivityNet dataset provides samples from 200 human action classes, with an average of 137 untrimmed videos per class and 1.41 activity instances per video. The total video length is 849 hours, and the total number of activity instances is 28,108. The dataset is randomly split into three subsets: 50% for training, 25% for validation and 25% for testing.

The Kinetics dataset has 400 human action classes and consists of 400 or more videos for each class. The videos were temporally trimmed, so that they do not include non-action frames, and last around 10 seconds. The total number of videos is 300,000 or more; the training, validation, and testing sets contain about 240,000, 20,000, and 40,000 videos, respectively. The number of activity instances in Kinetics is ten times larger than that in ActivityNet, whereas the total video lengths of the two datasets are close.

For both datasets, we resized the videos to a height of 360 pixels without changing their aspect ratios, and stored them.

§.§ Results

We first describe the preliminary experiment on the ActivityNet dataset. The purpose of this experiment is to explore the training of 3D ResNets on a relatively small dataset. In this experiment, we trained the 18-layer 3D ResNet described in Table <ref> and the Sports-1M-pretrained C3D <cit.>. Figure <ref> shows the training and validation accuracies during training; the accuracies were calculated based on recognition of 16-frame clips rather than entire videos. As shown in Figure <ref> (fig:training_activitynet_resnet), the 3D ResNet-18 overfitted, so that its validation accuracy was significantly lower than its training accuracy. This result indicates that the ActivityNet dataset is too small to train 3D ResNets from scratch. By contrast, Figure <ref> (fig:training_activitynet_c3d) shows that the Sports-1M-pretrained C3D did not overfit and achieved better recognition accuracy. The relatively shallow architecture of C3D and its pretraining on the Sports-1M dataset prevented it from overfitting.

We then describe the experiment on the Kinetics dataset. Here, we trained the 34-layer 3D ResNet instead of the 18-layer one because the number of activity instances in Kinetics is significantly larger than in ActivityNet. Figure <ref> shows the training and validation accuracies during training, again calculated on 16-frame clips, similar to Figure <ref>. As shown in Figure <ref> (fig:training_kinetics_resnet), the 3D ResNet-34 did not overfit and achieved good performance. The Sports-1M-pretrained C3D also achieved good validation accuracy, as shown in Figure <ref> (fig:training_kinetics_c3d); its training accuracy, however, was clearly lower than its validation accuracy, i.e. the C3D underfitted. In addition, the 3D ResNet is competitive with C3D even without pretraining on the Sports-1M dataset. These results indicate that C3D is too shallow and that 3D ResNets are effective when using the Kinetics dataset.

Table <ref> compares the accuracies of our 3D ResNet-34 with the state of the art. C3D w/ BN <cit.> is a C3D that employs batch normalization after each convolutional and fully connected layer. RGB-I3D w/o ImageNet <cit.> is an Inception-based <cit.> CNN with 3D convolutional kernels, a very deep network (22 layers) similar to ResNets; here we quote the results of RGB-I3D without pretraining on ImageNet. The ResNet-34 achieved higher accuracies than the Sports-1M-pretrained C3D and the C3D with batch normalization trained from scratch. This result supports the effectiveness of the 3D ResNets. By contrast, RGB-I3D achieved the best performance, even though ResNet-34 is deeper than RGB-I3D. One reason might be the different numbers of GPUs used: a large batch size is important for training good models with batch normalization <cit.>. Carreira et al. used 32 GPUs to train RGB-I3D, whereas we used 4 GPUs with a batch size of 256; they may have used a larger batch size, which would have contributed to their best performance. Another reason might be the difference in input clip size: the 3D ResNet input is 3 × 16 × 112 × 112 due to GPU memory limits, whereas that of RGB-I3D is 3 × 64 × 224 × 224. Higher spatial resolutions and longer temporal durations improve recognition accuracy <cit.>. Therefore, using more GPUs and increasing the batch size, spatial resolution, and temporal duration might yield further improvements for 3D ResNets.

Figure <ref> shows examples of classification results of the 3D ResNet-34.
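For reference, the basic building block of the networks evaluated above (two 3 × 3 × 3 convolutions with batch normalization and a type-A zero-padded identity shortcut) can be sketched in a few lines of PyTorch. This is a minimal illustration consistent with the description in the architecture section, not the authors' released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BasicBlock3d(nn.Module):
    """3x3x3 residual block with a zero-padded (type-A) identity shortcut."""

    def __init__(self, in_planes, planes, stride=1):
        super().__init__()
        self.conv1 = nn.Conv3d(in_planes, planes, 3, stride=stride,
                               padding=1, bias=False)
        self.bn1 = nn.BatchNorm3d(planes)
        self.conv2 = nn.Conv3d(planes, planes, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm3d(planes)
        self.stride, self.planes = stride, planes

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        shortcut = x
        if self.stride != 1:
            # Downsample the identity path in time and space to match.
            shortcut = F.avg_pool3d(shortcut, 1, stride=self.stride)
        if shortcut.size(1) != self.planes:
            # Type-A shortcut: pad the extra channels with zeros.
            pad = self.planes - shortcut.size(1)
            shortcut = F.pad(shortcut, (0, 0, 0, 0, 0, 0, 0, pad))
        return F.relu(out + shortcut)

# Example: the first block of conv3_x, which doubles channels and halves
# the spatial and temporal resolution of a 3x16x112x112-style input.
block = BasicBlock3d(64, 128, stride=2)
y = block(torch.randn(2, 64, 16, 56, 56))   # -> (2, 128, 8, 28, 28)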
§ CONCLUSION

We have explored the effectiveness of ResNets with 3D convolutional kernels. We trained 3D ResNets on the Kinetics dataset, a large-scale video dataset; the model trained on Kinetics performs well without overfitting despite its large number of parameters. Our code and pretrained models are publicly available at https://github.com/kenshohara/3D-ResNets.

Because of the very high computational cost of training 3D ResNets (three weeks), we mainly focused on ResNet-34. In future work, we will conduct additional experiments on deeper models (ResNet-50, -101) and other deep architectures, such as DenseNet-201 <cit.>.

[YouTube8M] S. Abu-El-Haija, N. Kothari, J. Lee, P. Natsev, G. Toderici, B. Varadarajan, and S. Vijayanarasimhan. YouTube-8M: A large-scale video classification benchmark. arXiv:1609.08675, 2016.
[I3D] J. Carreira and A. Zisserman. Quo vadis, action recognition? A new model and the Kinetics dataset. arXiv:1705.07750, 2017.
[imagenet_cvpr09] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In Proc. CVPR, 2009.
[activitynet] F. Caba Heilbron, V. Escorcia, B. Ghanem, and J. C. Niebles. ActivityNet: A large-scale video benchmark for human activity understanding. In Proc. CVPR, pages 961–970, 2015.
[STResNet] C. Feichtenhofer, A. Pinz, and R. Wildes. Spatiotemporal residual networks for video action recognition. In Proc. NIPS, pages 3468–3476, 2016.
[Feichtenhofer16] C. Feichtenhofer, A. Pinz, and A. Zisserman. Convolutional two-stream network fusion for video action recognition. In Proc. CVPR, pages 1933–1941, 2016.
[ResNet] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proc. CVPR, pages 770–778, 2016.
[densenets] G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger. Densely connected convolutional networks. In Proc. CVPR, pages 1–9, 2017.
[BatchNorm] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proc. ICML, pages 448–456, 2015.
[Ji2013] S. Ji, W. Xu, M. Yang, and K. Yu. 3D convolutional neural networks for human action recognition. IEEE Trans. Pattern Anal. Mach. Intell., 35(1):221–231, 2013.
[KarpathyCVPR14] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classification with convolutional neural networks. In Proc. CVPR, pages 1725–1732, 2014.
[Kinetics] W. Kay, J. Carreira, K. Simonyan, B. Zhang, C. Hillier, S. Vijayanarasimhan, F. Viola, T. Green, T. Back, P. Natsev, M. Suleyman, and A. Zisserman. The Kinetics human action video dataset. arXiv:1705.06950, 2017.
[HMDB51] H. Kuehne, H. Jhuang, E. Garrote, T. Poggio, and T. Serre. HMDB: A large video database for human motion recognition. In Proc. ICCV, pages 2556–2563, 2011.
[ReLU] V. Nair and G. E. Hinton. Rectified linear units improve restricted Boltzmann machines. In Proc. ICML, pages 807–814, 2010.
[Simonyan2014] K. Simonyan and A. Zisserman. Two-stream convolutional networks for action recognition in videos. In Proc. NIPS, pages 568–576, 2014.
[UCF101] K. Soomro, A. Roshan Zamir, and M. Shah. UCF101: A dataset of 101 human action classes from videos in the wild. CRCV-TR-12-01, 2012.
[Inception] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Proc. CVPR, pages 1–9, 2015.
[C3D] D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri. Learning spatiotemporal features with 3D convolutional networks. In Proc. ICCV, pages 4489–4497, 2015.
[LongTermTemporalConv] G. Varol, I. Laptev, and C. Schmid. Long-term temporal convolutions for action recognition. arXiv:1604.04494, 2016.
[Wang2015TDD] L. Wang, Y. Qiao, and X. Tang. Action recognition with trajectory-pooled deep-convolutional descriptors. In Proc. CVPR, pages 4305–4314, 2015.
[VeryDeepTwo] L. Wang, Y. Xiong, Z. Wang, and Y. Qiao. Towards good practices for very deep two-stream convnets. arXiv:1507.02159, 2015.
[Wang2016TSN] L. Wang, Y. Xiong, Z. Wang, Y. Qiao, D. Lin, X. Tang, and L. Van Gool. Temporal segment networks: Towards good practices for deep action recognition. In Proc. ECCV, pages 20–36, 2016.
http://arxiv.org/abs/1708.07632v1
{ "authors": [ "Kensho Hara", "Hirokatsu Kataoka", "Yutaka Satoh" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20170825070549", "title": "Learning Spatio-Temporal Features with 3D Residual Networks for Action Recognition" }
Department of Physics and Nanoscience Center, University of Jyväskylä, P.O. Box 35 (YFL), FI-40014 University of Jyväskylä, Finland

I apply the recently developed formalism of generalized quasiclassical theory to show that, using hybrid superconducting systems with non-collinear strong ferromagnets, one can realize a Josephson junction between Berezinskii-type superconductors. The reported calculation reproduces the main features observed in a recent experiment, namely the slightly asymmetric double-slit Fraunhofer interference pattern of the Josephson current through a ferromagnetic vortex. The double-slit structure results from the spatially inhomogeneous Berezinskii state, whose amplitude is controlled by the local angle between the magnetic moments in the two ferromagnetic layers. The asymmetry of the critical current with respect to the sign of the magnetic field can signal the presence of spontaneous supercurrents generated by the non-coplanar magnetic texture near the core of the ferromagnetic vortex. I demonstrate that a ferromagnetic vortex can induce spontaneous vorticity in the odd-frequency order parameter, manifesting the possibility for the emergent magnetic field to create topological defects.

Double-slit Fraunhofer pattern as the signature of the Josephson effect between Berezinskii superconductors through the ferromagnetic vortex.
M.A. Silaev
December 30, 2023
===============================================================================================================================

During recent years much attention has been devoted to studies of the long-range proximity effect and of spin-polarized Josephson currents carried by spin-triplet Cooper pairs in superconductor/ferromagnet/superconductor (S/F/S) heterostructures <cit.>. This interest is motivated by possible applications of spin-polarized superconducting currents in spintronics. Much effort has been invested in the study of tunable spintronic elements in which the Josephson current <cit.> or the critical temperature <cit.> is controllable by magnetic degrees of freedom. One possible implementation of such a device was suggested in my work <cit.>, employing the well-controlled properties of nanomagnets with vortex-like magnetization patterns. In that proposal I demonstrated that it is possible to gain effective control over the long-range proximity effect by tuning the position of a ferromagnetic (FM) vortex with the help of an external in-plane magnetic field. Recently a conceptually similar device has been realized experimentally <cit.>. Schematically this system is shown in Fig. <ref>a. It consists of two spin-textured ferromagnets, F and F', with magnetizations M and M', and a superconducting layer S. The thin layer F' is in contact with the superconductor and contains a gap. The Josephson current flows through the layer F, which contains the FM vortex. It has been observed that the presence of the FM vortex in the system results in a non-trivial modification of the critical current dependence I_c = I_c(H) on the external magnetic field H.
The striking feature produced by the FM vortex is that this dependence becomes similar to a double-slit Fraunhofer interference pattern instead of the usual single-slit form produced by a homogeneous junction. I demonstrate below that this behaviour can be considered a direct experimental signature of the Josephson effect between Berezinskii-type superconductors, characterized by spin-triplet odd-frequency s-wave order parameters <cit.>. Such a superconducting state is induced in the F layer as a combined result of the Zeeman splitting in the S electrode and the filtering of equal-spin (ES) Cooper pairs in F. Qualitatively, the unusual Fraunhofer patterns are explained by the intrinsic inhomogeneity of the induced Berezinskii state, which depends on the relative orientation of the local magnetic moments M and M' in the F and F' layers. The amplitude of the induced order parameter is set by the polar angle θ between M and M', see Fig. <ref>b. The variation θ = θ(x) resulting from the magnetic texture along the junction leads to the experimentally observed double-slit pattern of the critical current <cit.>. Besides that, the azimuthal angle α produces spin-dependent phases of the order parameter components. Variation of α arises in non-coplanar magnetic textures, and in strong ferromagnets with lifted spin degeneracy it produces spontaneous supercurrents <cit.>. The spatial dependence α = α(x) along the junction is therefore equivalent to an emergent localized magnetic field and results in the critical current asymmetry I_c(H) ≠ I_c(-H), which can signal the presence of spontaneous supercurrents through the FM vortex.

The system shown in Fig. <ref>a can be described analytically using several assumptions. First, the layers are supposed to be thin enough to neglect the variation of the magnetization fields along the z coordinate. Next, the layer F' is assumed to induce an effective exchange field h_s ∥ M' within the superconducting electrode. This model is justified by the small thickness of the F' layer, which allows one to neglect the variation of the correlation functions along z. The Green's functions in the F'/S bilayer can then be found by considering just the superconducting layer with the thickness-averaged exchange field <cit.> h_s = h_F' d_F'/(d_F' + d_S), where h_F' ∥ M' is the exchange field in F', and d_F' and d_S are the thicknesses of the F' and S layers, respectively. For simplicity the density of states is taken equal in S and F'. This exchange field leads to the significant critical-temperature suppression of the S/F'/F structure observed in the experiment <cit.>. The role of the exchange field is to produce mixed-spin triplet Cooper pairs, which are converted into equal-spin correlations in the ferromagnetic layer by the spin-dependent tunnelling <cit.>. The F layer hosting the FM vortex is rather thick compared to the mean free path, so that only the equal-spin correlations (ESC), which reside on the separate spin-split Fermi surfaces, can penetrate its full thickness. The generalized Usadel equation describing the ESC quasiclassical propagators ĝ_σ, derived in Ref. <cit.>, reads

D_σ ∂̂(ĝ_σ ∂̂ĝ_σ) - [Δ̂_σ + ωτ̂_3, ĝ_σ] = 0,

where σ = ±1 is the spin subband index and D_σ are the spin-dependent diffusion coefficients.
The covariant differential operator is

∂̂_r = ∇_r + iσZ[τ̂_3, ·] - ieA[τ̂_3, ·],

where A is the electromagnetic vector potential and Z is the adiabatic spin gauge field. The generalized Usadel equation (<ref>) is supplemented by the expression for the current,

j = iπTe ∑_σ=± ∑_ω ν_σ D_σ Tr[τ̂_3 ĝ_σ ∂̂_r ĝ_σ],

where ν_± are the spin-up/down densities of states (DOS). The effective spin-dependent order parameter Δ̂_σ in Eq. (<ref>), describing the ES components of the proximity-induced order parameter, can be obtained from the non-diagonal part of the general tunnelling self-energy <cit.>

Σ̂ = γ Γ̂ F̂ Γ̂^†,

where F̂ = (g_01σ_0 + g_31 σ·n_h) e^{iχτ̂_3} τ̂_1 is the anomalous Green's function in the superconductor with exchange field and n_h = h_s/h_s. In the absence of spin-orbital relaxation the spin-singlet and spin-triplet parts are given by g_01 = [F_bcs(ω+ih_s) + F_bcs(ω-ih_s)]/2 and g_31 = [F_bcs(ω+ih_s) - F_bcs(ω-ih_s)]/2, where F_bcs(ω) = Δ/√(ω^2 + Δ^2). The spin-polarized tunnelling matrix has the form Γ̂ = t σ̂_0 τ̂_0 + u σ·m τ̂_3, where m = M/M is the direction of the magnetization in the ferromagnet and γ = σ_n R is the parameter describing the barrier strength, R being the normal-state tunnelling resistance per unit area. The normalized tunnelling coefficients are t = √(1+√(1-P^2))/2 and u = √(1-√(1-P^2))/2, and P is the effective spin-filtering coefficient, ranging from 0 (no polarization) to 1 (100% filtering efficiency). The ES component of the tunnelling self-energy is then given by

Σ̂_ES = γ g_31 e^{iχτ̂_3} [τ̂_1 σ·h_s + P τ̂_2 σ·(h_s × m)].

Projecting Σ̂_ES onto the spin-up and spin-down states with respect to the quantization axis set by the magnetization m, we get the ES components of the effective spin-triplet order parameter

Δ̂_σ = h_⊥ Δ_Fσ τ̂_1 e^{iτ̂_3(χ + σα)},

with amplitude Δ_Fσ = -(1 + σP) g_31 and prefactor h_⊥ = sinθ, which is the projection of the exchange field onto the plane perpendicular to the magnetization direction; θ is the polar angle of h_s in the coordinate system defined by m. The distribution of sinθ can be obtained from micromagnetic simulations <cit.> for the realistic geometry. The additional spin-dependent phase α in Eq. (<ref>) is the azimuthal angle of h_s, as shown in Fig. <ref>b. For in-plane textures, when both m and h_s lie in the xy plane, this additional phase is absent, α = 0. Near the FM vortex core, however, the texture becomes non-coplanar, which leads to gradients of α. These are coupled to the spin gauge field <cit.> Z, so that the combination

V_s = σ(∇α - 2Z)

has the meaning of the texture-induced part of the superfluid velocity. In strong ferromagnets with broken spin degeneracy, V_s produces spontaneous charge supercurrents <cit.>, resulting in a shift of the Fraunhofer pattern, as shown below. Due to the symmetry g_31(ω) = -g_31(-ω), the pairing amplitude in Eq. (<ref>) represents the odd-frequency spin-triplet s-wave superconducting order parameter suggested by Berezinskii <cit.> and intensively studied afterwards <cit.>.
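The odd-frequency character of this amplitude, Δ_Fσ(ω) = -Δ_Fσ(-ω), is easy to verify numerically. The following sketch evaluates the expressions above at a real Matsubara frequency; the values of h_s, P and the gap are illustrative assumptions.

import numpy as np

def F_bcs(omega, Delta=1.0):
    """BCS anomalous propagator at (complex) frequency omega, gap as unit."""
    return Delta / np.sqrt(omega**2 + Delta**2 + 0j)

def g31(omega, h_s):
    """Mixed-spin triplet amplitude induced by the exchange field h_s."""
    return 0.5 * (F_bcs(omega + 1j * h_s) - F_bcs(omega - 1j * h_s))

def Delta_F(omega, sigma, h_s=0.3, P=0.5):
    """Equal-spin order parameter amplitude, Delta_{F sigma} = -(1 + sigma P) g31."""
    return -(1 + sigma * P) * g31(omega, h_s)

# Odd-frequency check: the amplitude is antisymmetric in frequency.
w = 0.7
print(Delta_F(w, +1), Delta_F(-w, +1))   # equal magnitude, opposite sign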
Odd-frequency superconducting correlations have been studied in several setups with proximity-induced superconductivity: in ferromagnets <cit.>, normal metals <cit.>, topological insulators <cit.> and non-equilibrium systems <cit.>. Usually, due to the broken translational and/or spin-rotation symmetries, the total Cooper pair wave function in proximity systems is a superposition of odd- and even-frequency components, which inevitably coexist at one and the same point <cit.>, except at a discrete set of points in the cores of proximity-induced vortices <cit.>. In contrast, the order parameter in Eq. (<ref>) represents a pure Berezinskii state without admixtures of spin-singlet and/or even-frequency components. Therefore the setup shown in Fig. <ref>a, consisting of two non-collinear strong ferromagnets, emulates the Josephson effect between two Berezinskii superconductors through the FM vortex. Note that the absence of even-frequency spin-singlet pairing in Eq. (<ref>) is not exact, since singlet correlations can, strictly speaking, penetrate even into strong ferromagnetic layers. In the dirty regime, however, the singlet amplitude is exponentially suppressed at distances larger than the mean free path from the F/S interface, so the presence of such components does not affect transport properties in the thick ferromagnetic layer.

Experimentally, the signature of the Josephson current between proximity-induced Berezinskii superconductors can be obtained thanks to the non-trivial structure of the order parameter (<ref>), whose amplitude depends on the angle θ between the magnetic moments in the F and F' layers. Such a dependence is peculiar to spin-triplet odd-frequency pairing, since the spin-singlet component is not sensitive to exchange-field rotations. As shown below, this results in a striking modification of the Fraunhofer pattern of the critical current as a function of the external magnetic field, I_c = I_c(H), which reflects the inhomogeneity of the magnetic texture in the S/F'/F system. This behaviour agrees qualitatively with the recent experimental observations <cit.>.

The Josephson effect in the setup shown in Fig. <ref>a can be analysed using the formalism of Eqs. (<ref>,<ref>,<ref>). Due to the small distance between the superconducting electrodes, one can use the model 1D system shown in Fig. <ref>b, a long junction with the gap functions in the leads given by (<ref>). There are two key points for understanding the experimental results. First, the effective gap amplitude is Δ_σ ∝ sinθ, which is determined by the angle θ between m and h_s. As shown by the magnetization pattern data <cit.>, θ changes quite strongly along the junction, both due to the vortex-like pattern in F and due to the distortions of the mono-domain state in F' near the trench that cuts the S and F' layers. The model distribution of θ along the Josephson junction used in the calculations is shown in Fig. <ref>c and is described by θ(x) = (π/2) θ̃(x/L_J), where

θ̃(x̃) = tanh(x̃/w_c) tanh((x̃ - 0.5)/w_e) tanh((x̃ + 0.5)/w_e).

The basic features of this profile are that θ = 0 at the FM vortex center x = 0 and at the edges x = ±L_J/2, while it reaches ±π/2 near the middle points. The dimensionless widths w_e and w_c determine the shape of the Fraunhofer pattern.
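For concreteness, the model profile can be evaluated directly; the sketch below uses the widths w_c = w_e = 0.3 quoted later in the text, with L_J as the unit of length.

import numpy as np

def theta(x, L_J=1.0, w_c=0.3, w_e=0.3):
    """Model polar-angle profile along the junction: suppressed at the
    vortex core (x = 0) and at the edges (x = +/- L_J/2)."""
    xt = x / L_J
    t = np.tanh(xt / w_c) * np.tanh((xt - 0.5) / w_e) * np.tanh((xt + 0.5) / w_e)
    return 0.5 * np.pi * t

x = np.linspace(-0.5, 0.5, 7)
print(np.round(theta(x), 3))   # vanishes at x = 0 and x = +/- 0.5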
Second, near the FM vortex core the magnetization pattern is non-coplanar, which generates both the gauge field Z and the spin-dependent phase distribution α = α(x) along the junction. Using the model magnetization distribution m = (m_⊥cosφ, m_⊥sinφ, m_z), where φ is the real-space polar angle, one obtains Z = m_z∇φ/2 and α = arctan[m_z tan(φ - φ_0)], where φ_0 is the azimuthal angle of h_s = h_s(cosφ_0, sinφ_0, 0). Note that for |m_z| = 1 the superfluid velocity (<ref>) is zero, V_s = 0; this is not the case for |m_z| < 1. The effective phase difference can be found by integrating the superfluid velocity across the junction, Δα = ∫_-d^d dy V_Sy, where 2d is the junction width. For φ_0 = 0, and provided that m_z ≈ const on the scale |y| < d, one obtains in this way

Δα(x) = 2[arctan(m_z tanφ) - m_zφ],

where φ = arctan(d/x).

The Josephson current can be calculated analytically assuming that the proximity effect is weak, so that one can use the theory linearized in the component f_σ = Tr[(τ_x + iτ_y)ĝ_σ]/2. The linearized Usadel equation and the expression for the current across the junction read

D_σ∇̂^2 f_σ - 2ωf_σ - Δ_σ = 0,
j = πTe ∑_σ=± ∑_ω ν_σ D_σ Im(f_σ∇̂f_σ^*).

Since the distance between the superconducting electrodes is small, proximity-induced vortices <cit.> cannot form in the junction. The order parameter distribution can then be approximated by the step-wise function Δ_σ(y) = h_⊥Δ_Fσ[cos(χ_σ/2) + i Step(y) sin(χ_σ/2)]. It is convenient to choose the gauge with A_y = 0 and to neglect the A_x component due to the small junction width. The total phase difference is then given by χ_σ(x) = Δχ + σΔα(x) + ϕx/L_J, where Δχ = χ_R - χ_L and Δα(x) = α_R - α_L are the usual and the spin-dependent phase differences, respectively, and ϕ = Φ/Φ_0, with Φ the total magnetic flux through the junction area (including the leads) and Φ_0 the flux quantum. The field Z is treated as locally homogeneous and is gauged out by using the effective phase difference (<ref>). Within the above assumptions the linearized Usadel Eq. (<ref>) can be solved analytically, and Eq. (<ref>) yields the following expression for the Josephson current density:

j(x) = ∑_σ=± j_σ (sinθ)^2 sinχ_σ,
j_σ = (πTeν_σD_σ/8) ∑_ω Δ_Fσ^2/(ξ_Nσω^2),

where ξ_Nσ = √(2|ω|/D_σ). The total critical current is then given by I_c = √(I_1^2 + I_2^2), where

I_1 = ∑_σ j_σ ∫_-L_J/2^L_J/2 (sinθ)^2 cos(ϕx + σΔα) dx,
I_2 = ∑_σ j_σ ∫_-L_J/2^L_J/2 (sinθ)^2 sin(ϕx + σΔα) dx.
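Combining the model θ(x) and Δα(x) profiles with the I_1, I_2 integrals above gives the interference pattern numerically. In the sketch below the parameters (m_z, d, the ratio j_-/j_+) are illustrative; a branch correction is added to keep arctan(m_z tanφ) continuous across φ = π/2, which the compact formula above leaves implicit.

import numpy as np

def delta_alpha(x, m_z=0.7, d=0.2):
    """Spin-dependent phase difference across the junction (formula above)."""
    phi = np.arctan2(d, x)                 # phi = arctan(d/x) on (0, pi)
    return 2 * (np.arctan(m_z * np.tan(phi))
                + np.pi * (phi > np.pi / 2)   # stay on the continuous branch
                - m_z * phi)

def I_c(flux, j=(1.0, 0.5), m_z=0.7, d=0.2, n=2001):
    """Critical current vs normalised flux; j = (j_+, j_-) sets spin filtering."""
    x = np.linspace(-0.5, 0.5, n)          # coordinate in units of L_J
    th = 0.5 * np.pi * (np.tanh(x / 0.3) * np.tanh((x - 0.5) / 0.3)
                        * np.tanh((x + 0.5) / 0.3))  # model theta, w_c = w_e = 0.3
    w = np.sin(th) ** 2
    I1 = I2 = 0.0
    for sigma, j_s in zip((+1, -1), j):
        arg = flux * x + sigma * delta_alpha(x, m_z, d)
        I1 += j_s * np.trapz(w * np.cos(arg), x)
        I2 += j_s * np.trapz(w * np.sin(arg), x)
    return np.hypot(I1, I2)

flux = np.linspace(-60, 60, 601)
pattern = [I_c(f) for f in flux]           # asymmetric double-slit pattern

Because Δα enters with opposite signs for the two spin channels, the computed pattern is symmetric in flux when j_+ = j_- and becomes asymmetric once spin filtering makes the two weights differ.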
This is possible only if the spin-up and spin-down diffusion coefficients and/or DOS are different. The model distribution (<ref>) produced by the FM vortex leads to the non-trivial modification ofthe Franhofer pattern, shown by the solid line in Fig.(<ref>), lower panel. The parameters here are d= 0.2 L_J, j_- = 0.5 j_+. Asymmetric curve I_c(H)≠ I_c(-H) is qualitatively similar to the one produced by the localized magnetic field or the internalphase shiftsin junctions between chiral p-wave superconductors<cit.>.Here the asymmetry is finite due to the spin filtering j_+ ≠ j_- and thus it signal the presence of spontaneous currents.For the the S/F'/F setup with magnetic configurations similar to Fig.(<ref>)a the effect of spontaneous current is rather tinysince the induced superfluid velocity is zero V_s = 0 both at the center wherem_z = 0 and outside the FM vortex core where m_z=1. The situation becomes completely different for z · h_s ≠ 0. For example if h_s = h_sz one can see that the spin-dependent phase is constant ∇α =0 so that V_s = σ m_z ∇φ /2 and has singularity at r=0.The current is however finite j ∼ h_⊥^2/r since h_⊥ = √(1-m_z^2)∼ r. This behaviour is similar to the orbital supercurrents around Abrikosov vortices in bulk superconductors.Note that this texture-induced spontaneous current is not related to the anomalous Josephson effect <cit.> since it exists without the weak link in the superconducting layer. The singularity of Z can be removed by the gauge transform introducing the vorticity tothe spin-dependent phase α = φ. Therefore in such system FM vortex generates singly-quantized superconducting vortices in the proximity-induced Berezinskii superconductor.The physics of such vortices and superconducting kinks that are generated by magnetic domain walls in the multilayer setups similar to the Fig.(<ref>)ais potentially quite richbut is beyond the scope of the present paper.To summarize, in this letter I explain the recent observation of the unusual magnetic field dependence ofcritical current of the Josephson junction through ferromagnetic vortex.The key theoretical finding is that the experimental results are consistent with the existence ofproximity-induced Berezinskii superconductors in the S/F'/F systems with non-collinear strong ferromagnets.The effective order parameter amplitude is defined by the angle between magnetic moments in F and F' layers. The inhomogeneous distribution of this angle results in the double-slit interference pattern of the critical current.Besides that, the non-coplanar magnetic texture near the FM vortex core generatesspin-dependent phase gradients and the emergent gauge field which can be combined into the invariantcombination yielding the spin-dependent part of the superfluid velocity V_s. The lifted degeneracy of DOS and diffusion coefficients in spin subbands in strong ferromagnetsconverts V_s to the spontaneous supercurrentwhich is shown to result in the critical current asymmetry I(H) ≠ I(-H).Finally, I show that FM vortex can induce spontaneous vorticity in the Berezinskii state, provided that the magnetization in F' layer has an out-of-plane direction. 
That finding demonstrates that, besides affecting the transport properties of textured magnets <cit.>, the emergent gauge field can show up in the orbital motion of spin-triplet Cooper pairs and even produce topological defects in the superconducting order parameter. The results obtained here for the FM vortex are, in general, also valid for magnetic skyrmions, owing to the similarity of their magnetization distributions.

I thank T. T. Heikkilä, A. Mel'nikov, I. Bobkova and A. Bobkov for stimulating discussions. The work was supported by the Academy of Finland.

[1] A. I. Buzdin, Rev. Mod. Phys. 77, 935 (2005).
[2] F. S. Bergeret, A. F. Volkov, and K. B. Efetov, Rev. Mod. Phys. 77, 1321 (2005).
[3] M. Eschrig, Rep. Prog. Phys. 78, 104501 (2015).
[4] J. Linder and J. W. A. Robinson, Nat. Phys. 11, 307 (2015).
[5] V. V. Ryazanov, V. A. Oboznov, A. Y. Rusanov, A. V. Veretennikov, A. A. Golubov, and J. Aarts, Phys. Rev. Lett. 86, 2427 (2001).
[6] S. M. Frolov, M. J. A. Stoutimore, T. A. Crane, D. J. Van Harlingen, V. A. Oboznov, V. V. Ryazanov, A. Ruosi, C. Granata, and M. Russo, Nat. Phys. 4, 32 (2008).
[7] A. K. Feofanov et al., Nat. Phys. 6, 593 (2010).
[8] A. Iovan, T. Golod, and V. M. Krasnov, Phys. Rev. B 90, 134514 (2014).
[9] A. A. Golubov, M. Y. Kupriyanov, and Y. V. Fominov, JETP Lett. 75, 190 (2002).
[10] J. W. A. Robinson, G. B. Halász, A. I. Buzdin, and M. G. Blamire, Phys. Rev. Lett. 104, 207001 (2010).
[11] J. W. A. Robinson, J. D. S. Witt, and M. G. Blamire, Science 329, 59 (2010).
[12] M. S. Anwar, F. Czeschka, M. Hesselberth, M. Porcu, and J. Aarts, Phys. Rev. B 82, 100501 (2010).
[13] R. S. Keizer, S. T. B. Goennenwein, T. M. Klapwijk, G. Miao, G. Xiao, and A. Gupta, Nature 439, 825 (2006).
[14] L. R. Tagirov, Phys. Rev. Lett. 83, 2058 (1999).
[15] Y. V. Fominov, A. A. Golubov, and M. Y. Kupriyanov, JETP Lett. 77, 510 (2003).
[16] V. I. Zdravkov et al., Phys. Rev. B 87, 144507 (2013).
[17] Y. V. Fominov, A. A. Golubov, T. Y. Karminskaya, M. Y. Kupriyanov, R. G. Deminov, and L. R. Tagirov, JETP Lett. 91, 308 (2010).
[18] A. Singh, S. Voltan, K. Lahabi, and J. Aarts, Phys. Rev. X 5, 021019 (2015).
[19] M. Flokstra, J. M. van der Knaap, and J. Aarts, Phys. Rev. B 82, 184523 (2010).
[20] M. A. Silaev, Phys. Rev. B 79, 184505 (2009).
[21] K. Lahabi, M. Amundsen, J. Ouassou, E. Beukers, M. Pleijster, J. Linder, P. Alkemade, and J. Aarts, arXiv:1705.07020 (2017).
[22] V. L. Berezinskii, JETP Lett. 20, 287 (1974).
[23] T. R. Kirkpatrick and D. Belitz, Phys. Rev. Lett. 66, 1533 (1991).
[24] D. Belitz and T. R. Kirkpatrick, Phys. Rev. B 46, 8393 (1992).
[25] D. Belitz and T. R. Kirkpatrick, Phys. Rev. B 60, 3485 (1999).
[26] A. Balatsky and E. Abrahams, Phys. Rev. B 45, 13125 (1992).
[27] E. Abrahams, A. Balatsky, D. J. Scalapino, and J. R. Schrieffer, Phys. Rev. B 52, 1271 (1995).
[28] P. Coleman, E. Miranda, and A. Tsvelik, Phys. Rev. B 49, 8955 (1994).
[29] Y. V. Fominov, Y. Tanaka, Y. Asano, and M. Eschrig, Phys. Rev. B 91, 144514 (2015).
[30] I. V. Bobkova and Y. S. Barash, JETP Lett. 80, 494 (2004).
[31] V. Braude and Y. V. Nazarov, Phys. Rev. Lett. 98, 077003 (2007).
[32] R. Grein, M. Eschrig, G. Metalidis, and G. Schön, Phys. Rev. Lett. 102, 227005 (2009).
[33] S. Mironov and A. Buzdin, Phys. Rev. B 92, 184506 (2015).
[34] M. A. Silaev, I. V. Tokatly, and F. S. Bergeret, Phys. Rev. B 95, 184508 (2017).
[35] I. V. Bobkova, A. M. Bobkov, and M. A. Silaev, arXiv:1706.04239 (2017).
[36] F. S. Bergeret, A. F. Volkov, and K. B. Efetov, Phys. Rev. Lett. 86, 3140 (2001).
[37] F. S. Bergeret, A. Verso, and A. F. Volkov, Phys. Rev. B 86, 214516 (2012).
[38] F. S. Bergeret, A. Verso, and A. F. Volkov, Phys. Rev. B 86, 060506 (2012).
[39] N. B. Kopnin and A. S. Melnikov, Phys. Rev. B 84, 064524 (2011).
[40] N. B. Kopnin, I. M. Khaymovich, and A. S. Mel'nikov, Phys. Rev. Lett. 110, 027003 (2013).
[41] G. E. Volovik, J. Phys. C: Solid State Phys. 20, L83 (1987).
[42] F. S. Bergeret, A. F. Volkov, and K. B. Efetov, Phys. Rev. Lett. 86, 4096 (2001).
[43] Y. Tanaka and A. A. Golubov, Phys. Rev. Lett. 98, 037003 (2007).
[44] Y. Tanaka, A. A. Golubov, S. Kashiwaya, and M. Ueda, Phys. Rev. Lett. 99, 037005 (2007).
[45] Y. Tanaka, Y. Tanuma, and A. A. Golubov, Phys. Rev. B 76, 054522 (2007).
[46] T. Yokoyama, Phys. Rev. B 86, 075410 (2012).
[47] A. M. Black-Schaffer and A. V. Balatsky, Phys. Rev. B 86, 144506 (2012).
[48] C. Triola and A. V. Balatsky, Phys. Rev. B 94, 094518 (2016).
[49] T. Yokoyama, Y. Tanaka, and A. A. Golubov, Phys. Rev. B 75, 134510 (2007).
[50] M. Alidoust, A. Zyuzin, and K. Halterman, Phys. Rev. B 95, 045115 (2017).
[51] J. C. Cuevas and F. S. Bergeret, Phys. Rev. Lett. 99, 217002 (2007).
[52] M. Alidoust and J. Linder, Phys. Rev. B 87, 060503 (2013).
[53] M. Alidoust and K. Halterman, J. Appl. Phys. 117, 123906 (2015).
[54] F. Kidwingira, J. D. Strand, D. J. Van Harlingen, and Y. Maeno, Science 314, 1267 (2006).
[55] A. Buzdin, Phys. Rev. Lett. 101, 107005 (2008).
[56] A. A. Reynoso, G. Usaj, C. A. Balseiro, D. Feinberg, and M. Avignon, Phys. Rev. Lett. 101, 107001 (2008).
[57] A. Zazunov, R. Egger, T. Jonckheere, and T. Martin, Phys. Rev. Lett. 103, 147004 (2009).
[58] J.-F. Liu and K. S. Chan, Phys. Rev. B 82, 184533 (2010).
[59] A. Brunetti, A. Zazunov, A. Kundu, and R. Egger, Phys. Rev. B 88, 144515 (2013).
[60] T. Yokoyama, M. Eto, and Y. V. Nazarov, Phys. Rev. B 89, 195407 (2014).
[61] I. Kulagina and J. Linder, Phys. Rev. B 90, 054504 (2014).
[62] K. N. Nesterov, M. Houzet, and J. S. Meyer, Phys. Rev. B 93, 174502 (2016).
[63] F. Konschelle, I. V. Tokatly, and F. S. Bergeret, Phys. Rev. B 92, 125443 (2015).
[64] A. Moor, A. F. Volkov, and K. B. Efetov, Phys. Rev. B 92, 214510 (2015).
[65] A. Moor, A. F. Volkov, and K. B. Efetov, Phys. Rev. B 92, 180506 (2015).
[66] I. V. Bobkova, A. M. Bobkov, A. A. Zyuzin, and M. Alidoust, Phys. Rev. B 94, 134506 (2016).
[67] D. B. Szombati, S. Nadj-Perge, D. Car, S. R. Plissard, E. P. A. M. Bakkers, and L. P. Kouwenhoven, Nat. Phys. 12, 568 (2016).
[68] N. Nagaosa and Y. Tokura, Nat. Nanotechnol. 8, 899 (2013).
http://arxiv.org/abs/1708.07467v2
{ "authors": [ "M. A. Silaev" ], "categories": [ "cond-mat.supr-con" ], "primary_category": "cond-mat.supr-con", "published": "20170824154810", "title": "Double-slit Fraunhofer pattern as the signature of the Josephson effect between Berezinskii superconductors through the ferromagnetic vortex" }
Inclined Surface Locomotion Strategies for Spherical Tensegrity Robots

This paper presents a new teleoperated spherical tensegrity robot capable of performing locomotion on steep inclined surfaces. With a novel control scheme centered around the simultaneous actuation of multiple cables, the robot demonstrates robust climbing on inclined surfaces in hardware experiments and speeds significantly faster than previous spherical tensegrity models. This robot is an improvement over other iterations in the TT-series and the first tensegrity to achieve reliable locomotion on inclined surfaces of up to 24°. We analyze locomotion in simulation and hardware under single and multi-cable actuation, and introduce two novel multi-cable actuation policies, suited for steep incline climbing and speed, respectively. We propose compelling justifications for the increased dynamic ability of the robot and motivate development of optimization algorithms able to take advantage of the robot's increased control authority.

§ INTRODUCTION

UC Berkeley and the NASA Ames Research Center are developing a new concept for space exploration robots based on tensegrity structures. A tensegrity structure consists of rods suspended in a network of cables, where the rods and cables experience only compression and tension, respectively, while in equilibrium. Because there are no bending moments, tensegrity systems are inherently resistant to failure <cit.>. Additionally, the structures are naturally compliant, exhibiting the ability to distribute external forces throughout the tension network. This mechanical property provides shock protection from impact and makes the structure a robust robotic platform for mobility in an unpredictable environment. Thus, tensegrity robots are a promising candidate for exploration tasks, especially in the realm of space exploration, because the properties of tensegrity systems allow these robots to fulfill both lander and rover functionality during a mission. Analysis of tensegrity robotic locomotion on inclined terrain is critical in informing path-planning and trajectory-tracking decisions in mission settings. Despite the crucial role of uphill climbing in planetary exploration, the robot presented here is the first untethered spherical tensegrity robot to achieve reliable inclined-surface climbing. The robot was rapidly constructed using a novel modular elastic lattice tensegrity prototyping platform <cit.>, which allows for rapid hardware iterations and experiments. This work presents the simulation results of inclined uphill locomotion for a six-bar spherical tensegrity robot as well as the prototyping and hardware experiments performed to validate these results. We show that the robot can achieve robust locomotion on surface inclines up to 24° using a two-cable actuation scheme in hardware, as shown in Fig. <ref>. In this paper, we first describe the topology and design of the robot, which uses a novel rapid tensegrity prototyping method. Next, we analyze incline locomotion performance in simulation under a single-cable actuation policy. This policy is tested on hardware to establish a performance benchmark against which two-cable actuation policies can be evaluated. Two variants of multi-cable policies are found in simulation, one suited for steep inclines and the other suited for speed.
We demonstrate significant performance improvements in both tasks over the single-cable benchmark and discuss the primary factors that lead to improved performance. Finally, we motivate further research in extending this work to develop more efficient multi-cable locomotion policies by leveraging learning algorithms.

§ PRIOR RESEARCH

Tensegrity robots have become a recent subject of interest due to their applications in space exploration <cit.>. The natural compliance and reduced failure modes of tensegrity structures have motivated the development of multiple tensegrity robot forms <cit.>. Some examples include spherical robots designed for locomotion on rugged terrain <cit.>, snake-like robots that crawl along the ground <cit.>, and assistive elements in walking quadrupedal robots <cit.>. Tensegrity locomotion schemes have been studied in both the context of single-cable actuation <cit.> and (rarely) in the context of multi-cable actuation <cit.>. However, much of this exploration into tensegrity multi-cable actuation policies has been in the context of vibrational, rather than rolling, motion. While there exists extensive prior work in incline robotic locomotion, this literature does not directly address tensegrities. For example, Stanford's spacecraft/rover hybrid robot has demonstrated through simulation and hardware tests the potential for uphill locomotion. Rather than a tensegrity mechanism, however, Stanford's hybrid robot uses a flywheel-based hopping mobility mechanism designed for traversing small micro-gravity bodies <cit.>. Movement on rough or uphill terrain is a frequent occurrence in space exploration, and has proven to be a necessary challenge for traditional wheeled rovers. For instance, Opportunity has ascended, with much difficulty, a number of surfaces up to 32° above horizontal <cit.>. On the other hand, NASA's SUPERball, which is also a 6-bar tensegrity robot, has demonstrated successful navigation of an 11.3° (20% grade) incline in simulation <cit.>. However, as will be discussed later, the robot presented here is the first tensegrity robot to successfully demonstrate significant inclined-surface locomotion, not only in simulation, but also in hardware testing.

§ SIX-BAR TENSEGRITY ROBOT USING MODULAR ELASTIC LATTICE PLATFORM

In order to greatly simplify and expedite the process of assembly, we developed a modular elastic prototyping platform for tensegrity robots <cit.>. This robot, a six-bar spherical tensegrity, was the first tensegrity robot assembled using this new prototyping platform and can be rapidly assembled in less than an hour by a single person. To construct the robot, a regular icosahedron structure is first rapidly assembled using the modular elastic lattice platform and six aluminum rods of 25 cm each, creating the passive structure of the tensegrity robot. A total of six actuators and a central controller are then attached to the structure, resulting in a dynamic, underactuated tensegrity robot, as shown in Fig. <ref>.

§ SINGLE-CABLE ACTUATED CLIMBING ON INCLINED SURFACES

A spherical tensegrity robot can perform rudimentary punctuated rolling locomotion by contracting and releasing each of its cables in sequence, deforming its base and shifting its center of mass (CoM) forward of the front edge of its supporting base polygon. This contraction places the robot in a transient, unstable state, from which it naturally rolls onto the following stable base polygon; a minimal geometric sketch of this tipping condition is given below.
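To make the tipping condition concrete, the following sketch (our illustration, not code from the robot's control stack; the coordinates and function names are hypothetical) tests whether the gravity-projected CoM lies inside the convex support polygon traced by the ground contact points:

```python
# Quasi-static tipping test for punctuated rolling: the robot tips over a
# support-polygon edge once the gravity-projected CoM leaves the polygon.

def cross2(o, a, b):
    """z-component of (a - o) x (b - o); its sign says which side of edge
    o->a the point b lies on."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def com_inside_support(com_xy, polygon_xy):
    """True if the projected CoM lies inside the convex support polygon
    (vertices ordered counter-clockwise in the incline plane)."""
    n = len(polygon_xy)
    return all(cross2(polygon_xy[i], polygon_xy[(i + 1) % n], com_xy) >= 0.0
               for i in range(n))

# Example: a triangular base (three contact points) and a CoM shifted uphill
# by a cable contraction; coordinates are arbitrary plane units.
base = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.9)]
print(com_inside_support((0.5, 0.3), base))   # True  -> stable stance
print(com_inside_support((0.5, 1.1), base))   # False -> roll initiated
```

In this quasi-static picture, a contraction that pushes the projected CoM past the uphill edge initiates the desired roll, while a CoM drifting behind the downhill edge corresponds to the backward-tipping failure mode discussed in the following sections.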
After the roll, the robot releases the contracted cable and returns to its neutral stance before initiating the next step in the sequence. In this paper, the neutral stance of the robot refers to the stance in which no cables are contracted and the only tension in the system is due to gravity. While other robots have successfully achieved punctuated rolling on flat ground using this technique <cit.>, we show that the robot is not only capable of the same, but can also do so on an inclined surface. This section summarizes the results for a single-cable actuation policy and sets the standards against which we evaluate the improved climbing capabilities achieved through two-cable actuation (Section V).

§.§ Simulation and Analysis of Single-Cable Actuation Policies

As there had been very little previous work on uphill climbing with spherical tensegrity robots, we first validated the actuation policy in simulation. Using the NASA Tensegrity Robotics Toolkit (NTRT) simulation framework, we simulated the single-cable actuation scheme (Fig. <ref>) for uphill climbing on surfaces of varying inclines. Results showed that the robot could successfully climb an incline of 16° in simulation using a single-cable actuation policy. Simulation results at this incline are shown in Fig. <ref>. Beyond 16°, we found that the robot could no longer reliably perform locomotion, for the following two reasons: (1) the robot was unable to move the projected CoM sufficiently forward to initiate an uphill roll, and (2) deformation of the base polygon shifted the CoM behind the back (downhill) edge of the polygon, initiating a downhill roll. To analyze the limitations of single-cable actuation policies, we studied the relationship between actuation efficiency and incline angle using simulated sensor data. At each angle of inclination, we recorded the cable actuation required to initiate rolling, as a fraction of initial cable length. As expected, we found that initiating tipping of the robot at greater angles of inclination requires greater cable contraction (Fig. <ref>). Interestingly, the extent to which the angle of inclination affects the required cable contraction depends on which particular cable is being actuated. Due to the inherent symmetry of the 6-bar spherical structure, the robot's repeating six-step gait can be separated into two repeated three-step sub-sequences, which arise from the uneven, yet symmetric, distribution of tensions in the springs suspending the central payload (in this case, the central controller). Our results imply that climbing steeper hills requires greater power consumption and more careful motion, motivating the development of more efficient actuation policies for uphill locomotion. This analysis highlights the mechanical limits of single-cable actuation policies, thus encouraging exploration of alternative actuation policies.

§.§ Hardware Experiments of Single-Cable Actuation Policies

In order to validate the results from software simulations, we constructed an adjustable testing platform which allowed for incremental adjustments of the surface incline angle. Using this setup, we considered as successful those trials in which the robot was able to reliably travel 91.4 cm (3 ft) along the inclined plane. We considered as failures those trials in which the robot failed to reach the 91.4 cm mark. We found that the robot was able to successfully perform uphill climbing up to 13° in hardware with a single-cable actuation policy; a minimal open-loop scheduler for this gait is sketched below.
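The sketch below illustrates the open-loop single-cable gait described above. It is our illustration only: the actuator interface, cable ordering and timing constants are assumptions standing in for the robot's actual firmware.

```python
import time

class Cable:
    """Stand-in for a motorized cable actuator (hypothetical interface)."""
    def __init__(self, idx):
        self.idx = idx
    def contract(self, fraction):
        print(f"cable {self.idx}: contract by {fraction:.2f} of rest length")
    def release(self):
        print(f"cable {self.idx}: release to rest length")

GAIT_SEQUENCE = [0, 3, 1, 4, 2, 5]      # hypothetical six-step cable ordering

def run_gait(cables, n_steps, contraction=0.25, settle_s=2.0):
    """Contract one cable per step to push the projected CoM past the front
    edge of the support polygon, wait for the punctuated roll to complete,
    then return to the neutral stance before the next step."""
    for k in range(n_steps):
        cable = cables[GAIT_SEQUENCE[k % len(GAIT_SEQUENCE)]]
        cable.contract(contraction)
        time.sleep(settle_s)            # transient roll onto the next face
        cable.release()                 # back to neutral stance

run_gait([Cable(i) for i in range(6)], n_steps=6)
```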
Beyond 13°, relaxing a member after its successful contraction consistently shifted the CoM beyond the robot's backward tipping point, causing the structure to roll down the incline. The coefficient of static friction between the robot and the surface, measured for all 8 stable robot poses, ranged from 0.42 to 0.57 with a mean of 0.49. This corresponds to maximum inclines before slipping ranging from 23° to 29°, with a mean of 26°. We believe the reason for this range is the lack of material homogeneity at the contact points between the robot and the ground, which consist of some combination of the rubber lattice and the metal end-caps. In addition, as the distribution of weight on the ends of the rods changes with the robot's orientation, it is likely that the frictional forces for each face are not uniform. Based on these results, we did not expect, nor did we observe, any failure due to sliding in the single-cable actuation tests. However, as will be discussed in later sections, this does become a limiting factor in the robot's performance at much steeper inclines. These results are consistent with failure modes observed in simulation. As a baseline for comparison in later sections, the robot's average velocity was recorded when travelling 91.4 cm along a 10° incline. Across 10 trials under these conditions, the robot achieved an overall average velocity of 1.96 cm/s. For reference, the robot has a rod length of 25 cm. These results serve as the first demonstration of a tensegrity robot reliably climbing an inclined surface.

§ ALTERNATING AND SIMULTANEOUS TWO-CABLE ACTUATED CLIMBING ON INCLINED SURFACES

Having reached the limits of inclined locomotion for the single-cable actuation policy, the following actuation policies were explored:

* Simultaneous actuation policy: similar to single-cable actuation, except that the next cable contracts as the current one releases, allowing for more steps to be made in less time. See Fig. <ref>.

* Alternating actuation policy: to preserve a low center of mass during uphill rolling, the next cable is fully contracted before the current one is released. See Fig. <ref>.

We found that multi-cable actuation policies allow the robot to climb steeper inclines and travel at significantly faster speeds than the single-cable actuation policy. The following sections present the performance results of two-cable actuation policies in simulation, and their validation through hardware experiments, summarized in Table <ref>.

§.§ Simulation and Analysis of Alternating and Simultaneous Two-Cable Actuation Policies

The two-cable actuation policies, as described above, were implemented and tested in NTRT as open-loop controllers using the same robot model and inclined surfaces as the aforementioned single-cable simulations. These simulations demonstrated vast improvements in incline locomotion stability as well as average speed, with the robot able to navigate inclines up to 26° using alternating two-cable actuation and 24° using simultaneous two-cable actuation. The significant performance improvements achieved with the two-cable policies are primarily due to the increased stability of the robot and its subsequent ability to avoid rolling downhill during actuation. We believe that this is due to a combination of two primary factors, namely CoM height and the number of contact points between the robot and the ground. From the simulation results in Fig. <ref>, it was observed that the average CoM was consistently lower throughout the actuation sequence of the robot, especially at the critical moments approaching the tipping point. On a flat surface, it was found that the maximum CoM heights were 93.1% and 79.8% of the neutral-stance CoM height for simultaneous and alternating actuation, respectively. In addition to the lower CoM, both two-cable policies maintain at least one cable in contraction at all times. In contrast to the three contact points in single-cable actuation, the contracted cable keeps the robot in a perpetually forward-leaning stance with four points of contact with the ground, resulting in a larger supporting base polygon (the convex hull of the four contact points), as illustrated by Fig. <ref>. Moreover, the stance of the robot places the projected CoM uphill of the centroid of the base polygon and farther away from the downhill edge, as opposed to behind it as in the single-cable case. This leads to a drastic improvement in incline stability, as the robot is less likely to roll backwards due to external disturbances. Conversely, this also means that it is easier for the robot to roll forwards, as the distance to move the projected CoM outside the supporting polygon in the desired direction is smaller and therefore easier to achieve. This is especially apparent in Fig. <ref>, where the CoM is 51.4% closer to the uphill edge when compared to the single-cable case. The stances of single-cable and two-cable actuation are shown in Fig. <ref>. As the robot no longer returns to a neutral state before initiating the next roll sequence, the simultaneous policy saw a notable increase in average speed. However, it did not appear that the increased speed had much effect on the robot's ability to navigate an incline, as the punctuated manner in which actuation is performed means that little if any momentum is preserved from one roll to the next. As the software incline limits of 26° and 24° were reached for alternating and simultaneous two-cable actuation respectively, it was found that the robot could no longer reliably navigate the inclined surface, primarily due to insufficient friction. This result was corroborated by our physical hardware experiments.

§.§ Hardware Experiments of Alternating and Simultaneous Two-Cable Actuation Policies

In accordance with simulation results, the ability of the robot to actuate multiple cables simultaneously and in alternating order resulted in significant improvements in its ability to navigate steep inclines and achieve high speeds. The robot was able to leverage alternating two-cable actuation to reliably climb a 24° (44.5% grade) incline, far outperforming the robot's previous best of 13° (23.1% grade) set via single-cable actuation. Such a significant improvement establishes this performance as the steepest incline successfully navigated by a spherical tensegrity robot to date. Indeed, the primary cause of failure of two-cable alternating actuation at and beyond 24° was not falling backwards, but rather slipping down the slope due to insufficient friction, in accordance with our measurements mapping the robot's mean coefficient of friction to a theoretical maximum incline of 26° (see the short check below). This suggests that further improvements may be made to the robot's incline rolling ability given careful consideration of material choices in the next design iteration.
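The friction-limited ceiling quoted above follows directly from the static sliding condition, theta_max = arctan(mu). A short check against the measured friction coefficients (values taken from the text) is given below:

```python
import math

# Maximum static incline before sliding: theta_max = arctan(mu).
for mu in (0.42, 0.49, 0.57):
    theta = math.degrees(math.atan(mu))
    print(f"mu = {mu:.2f} -> theta_max = {theta:.1f} deg")
# mu = 0.42 -> theta_max = 22.8 deg   (quoted as 23 deg)
# mu = 0.49 -> theta_max = 26.1 deg   (mean, quoted as 26 deg)
# mu = 0.57 -> theta_max = 29.7 deg   (quoted as 29 deg)
```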
Not only did the robot's incline climbing performance improve, but its locomotion speed did as well. As mentioned previously, on an incline of 10°, the traditional single-cable actuation policy traveled a distance of 91.4 cm with an average velocity of 1.96 cm/s. However, when performing simultaneous two-cable actuation, the robot was able to travel the same distance with a 10-trial average velocity of 4.22 cm/s, an increase of over 115% beyond the single-cable baseline. We anticipate that this improvement can be increased by further overlapping the contractions and relaxations of more cables in the simultaneous actuation policy. As the number of cables being simultaneously actuated increases, the rolling pattern increasingly resembles a fluid, spherical roll. However, more complex actuation patterns also require an increasingly skilled robot tele-operator. We recognized that an increase in operator skill leads to an increase in performance, but this also indicates the great potential for intelligent policy optimization and automation, which could far outperform human operators, achieving ever faster locomotion and the conquering of steeper inclines.

§ CONCLUSION

In this work, we have demonstrated, through both simulation and hardware results, the ability of a spherical tensegrity robot to perform consistent uphill locomotion on steep inclines. This was made possible through the development of a novel multi-cable actuation scheme, which allows the robot to reliably perform forward locomotion on much steeper inclines and at greater speeds than what was previously possible using only single-cable actuation. Due to the inherently coupled, nonlinear dynamics of the robot, multi-cable actuation policies render robotic control a challenging intellectual task, providing a launch point for future work. We look forward to exploring the integration of artificial intelligence (particularly evolutionary algorithms and deep reinforcement learning architectures) in this robotic platform to optimize locomotive gaits on varied inclines, and even to generate optimal tensegrity topologies, areas which have proven promising in prior work <cit.>. We hope to leverage learning algorithms to achieve more fluid and efficient locomotion using a robust and fully autonomous control policy.

§ ACKNOWLEDGEMENT

The authors are grateful for funding support from NASA's Early Stage Innovation grant NNX15AD74G. We also wish to acknowledge the work of the other students on this project: Faraz Ghahani, Carielle U. Spangenberg, Abhishyant Khare, Grant Emmendorfer, Cameron A. Bauer, Kit Y. Mak, Kyle G. Archer, Lucy Kang, Amir J. Safavi, Yuen W. Chau, Reneir Viray, Eirren Viray, and Sebastian Anwar.
http://arxiv.org/abs/1708.08150v1
{ "authors": [ "Lee-Huang Chen", "Brian Cera", "Edward L. Zhu", "Riley Edmunds", "Franklin Rice", "Antonia Bronars", "Ellande Tang", "Saunon R. Malekshahi", "Osvaldo Romero", "Adrian K. Agogino", "Alice M. Agogino" ], "categories": [ "cs.RO" ], "primary_category": "cs.RO", "published": "20170827224755", "title": "Inclined Surface Locomotion Strategies for Spherical Tensegrity Robots" }
Emission line galaxies (ELGs) are used in several ongoing and upcoming surveys (SDSS-IV/eBOSS, DESI) as tracers of the dark matter distribution. Using a new galaxy formation model, we explore the characteristics of [OII] emitters, which dominate optical ELG selections at z≃ 1. Model [OII] emitters at 0.5<z<1.5 are selected to mimic the DEEP2, VVDS, eBOSS and DESI surveys. The luminosity functions of model [OII] emitters are in reasonable agreement with observations. The selected [OII] emitters are hosted by haloes with M_halo≥ 10^10.3 h^-1 M_⊙, with ∼90% of them being central star-forming galaxies. The predicted mean halo occupation distribution of [OII] emitters has a shape typical of that inferred for star-forming galaxies, with the contribution from central galaxies, ⟨ N ⟩_cen, being far from the canonical step function. The ⟨ N ⟩_cen can be described as the sum of an asymmetric Gaussian for disks and a step function for spheroids, which plateaus below unity. The model [OII] emitters have a clustering bias close to unity, which is below the expectations for eBOSS and DESI ELGs. At z∼ 1, a comparison with observed g-band selected galaxies, which are expected to be dominated by [OII] emitters, indicates that our model produces too few [OII] emitters that are satellite galaxies. This suggests the need to revise our modelling of the hot gas stripping in satellite galaxies.

methods: numerical – methods: analytical – galaxies: evolution – galaxies: formation – cosmology: theory

§ INTRODUCTION

The quest to understand the nature of both dark matter and dark energy has led us to adopt new tracers of the large-scale structure of the Universe, such as emission line galaxies <cit.>. Current ELG samples are small and their characteristics are not well understood <cit.>. Initial tests on relatively small area surveys indicate that there are enough ELGs to chart space-time and understand the transition between the dark matter and the dark energy dominated eras <cit.>. Moreover, by measuring the properties of ELGs as tracers of star formation over a substantial amount of cosmic time, we can shed light on the mechanisms that quench the star formation in typical galaxies since the peak epoch of star formation around z ≃ 2 <cit.>. The SDSS-IV/eBOSS [extended Baryon Oscillation Spectroscopic Survey, <http://www.sdss.org/surveys/eboss/> <cit.>] survey is currently targeting what will become the largest sample to date of ELGs at z≃ 0.85 <cit.>. This large sample will allow us to go beyond the current state-of-the-art cosmological constraints by measuring cosmological probes such as baryon acoustic oscillations and redshift space distortions at z∼ 1 <cit.>. This pioneering use of ELGs as cosmological probes is planned to be enhanced by future surveys, such as DESI [Dark Energy Spectroscopic Instrument, <http://desi.lbl.gov/> <cit.>], PFS [Prime Focus Spectrograph, <http://sumire.ipmu.jp/en/2652> <cit.>], WEAVE [Wide-field multi-object spectrograph for the William Herschel Telescope, <http://www.ing.iac.es/weave/> <cit.>], and 4MOST [4-metre Multi-Object Spectroscopic Telescope, <https://www.4most.eu/> <cit.>]. An ELG is the generic name given to any galaxy presenting strong emission lines associated with star-formation events. Galaxies with nuclear activity also present emission lines.
However, the line ratios of such objects tend to be different from those driven by star formation activity, because of the different ionisation states present <cit.>. The presence of these features allows for a robust determination of galaxy redshifts. Most of the sampled ELGs at z∼ 1 are expected to present a strong [OII] line at a rest-frame wavelength of 3727 Å. For detectors sampling optical to near infra-red wavelengths, [OII] can be detected up to z=2 <cit.>. The fate of galaxies is determined by the growth of dark matter structures which, in turn, is affected by the nature of the dark energy. However, gravity is not the only force shaping the formation and evolution of galaxies. Baryons are affected by a multitude of other processes, mostly related to the fate of gas. Computational modelling is the only way we can attempt to understand all the processes involved in the formation and evolution of galaxies <cit.>. The [OII] emission is particularly difficult to predict since it depends critically on local properties, such as dust attenuation and the structure of the HII regions and their ionization fields. This is why [OII] traces star formation and metallicity in a non-trivial way. Previous work on modelling [OII] emitters has shown that semi-analytic galaxy formation models can reproduce their observed luminosity function (LF) at z ∼ 1 <cit.>, making them ideal for studying the clustering properties of [OII] emitters and hence their bias. These predictions are used in the design and interpretation of current and future surveys, such as eBOSS <cit.> and DESI <cit.>. <cit.> inferred the clustering and fraction of satellites for a g-band selected sample of galaxies that is expected to be dominated by ELGs at 0.6<z<1.7. Their results are based on a modified sub-halo abundance matching (SHAM) technique that takes into account the incompleteness in the selection of ELGs, because not all haloes will contain an ELG. They found that their sample of g-selected galaxies at z∼ 0.8 is best matched by a model with 22.5± 2.5% of satellite galaxies and a mean host halo mass of (1± 0.5) × 10^12 h^-1 M_⊙. With the modifications of the SHAM technique needed to provide a good description of the clustering of the observed ELGs, which are an incomplete sample of galaxies, the mean halo occupation of central ELGs is expected to differ from the canonical step function that reaches one central galaxy per halo, typical of mass-limited samples. Here we aim to characterise the nature of model [OII] emitters, as tracers of the star formation across cosmic time, and to study their expected mean halo occupation distribution and clustering, to better understand [OII] emitters as tracers of the underlying cosmology. We adopt a physical approach rather than the empirical one used in <cit.>. The use of a semi-analytical model of galaxy formation and evolution <cit.> gives us the tools to understand the physical processes that are the most relevant for the evolution of ELGs in general and [OII] emitters in particular. Here we present a new flavour of the model, developed based upon <cit.>, a model that produced [OII] LFs in reasonable agreement with observations <cit.>. The plan of this paper is as follows [The programs used to generate the plots presented in this paper can be found in <https://github.com/viogp/plots4papers/>]. In Section <ref> we introduce a new galaxy model (GP17), which is an evolution of previous versions. In Section <ref> the luminosity functions from different observational surveys are compared to model [OII] emitters selected to mimic these surveys. These selections are explored in both Section <ref> and Appendix <ref>.
Given the reasonable agreement found between this GP17 model and current observations, we infer the mean halo occupation distribution (Section <ref>) and the clustering (Section <ref>) of [OII] emitters. In Section <ref> we summarise and discuss our results.

§ THE SEMI-ANALYTIC MODEL

Semi-analytical (SA) models use simple, physically motivated rules to follow the fate of baryons in a universe in which structure grows hierarchically through gravitational instability <cit.>. The model used here was introduced by <cit.> and since then it has been enhanced and improved <cit.>. The model follows the physical processes that shape the formation and evolution of galaxies, including: (i) the collapse and merging of dark matter haloes; (ii) the shock-heating and radiative cooling of gas inside dark matter haloes, leading to the formation of galaxy discs; (iii) quiescent star formation in galaxy discs, which takes into account both the atomic and molecular components of the gas <cit.>; (iv) feedback from supernovae, from active galactic nuclei <cit.> and from photo-ionization of the intergalactic medium; (v) chemical enrichment of the stars and gas (assuming instantaneous recycling); (vi) galaxy mergers driven by dynamical friction within common dark matter haloes. The model provides a prediction for the number and properties of galaxies that reside within dark matter haloes of different masses. Currently there are two main branches of the model: one with a single initial mass function <cit.> and one that assumes different IMFs for quiescent and bursty episodes of star formation <cit.>. Here we introduce a new version of the model of the formation and evolution of galaxies (hereafter GP17), which will be available in the Millennium Archive Database [<http://www.virgo.dur.ac.uk/>, <http://gavo.mpa-garching.mpg.de/Millennium/>]. The details specific to the GP17 model are introduced in Section <ref>. Below we also give further details of how the model treats emission lines (Section <ref>) and dust (Section <ref>), as these are key aspects for understanding the results of this study.

§.§ The GP17 model

The GP17 model uses dark matter halo merger trees extracted from the MS-W7 N-body simulation <cit.>, a box of 500 h^-1 Mpc on a side with a cosmology consistent with the 7th year release from WMAP <cit.>: matter density Ω_m,0=0.272, cosmological constant Ω_Λ,0=0.728, baryon density Ω_b,0=0.0455, a normalization of density fluctuations given by σ_8,0=0.810 and a Hubble constant today of H(z=0) = 100 h km s^-1 Mpc^-1 with h=0.704. The model in this study, GP17, assumes a single IMF, building upon the versions presented in both GP14 and <cit.>. The two main aspects in which the GP17 model differs from GP14 are: i) the assumption of a gradual stripping of the hot gas when a galaxy becomes a satellite by merging into a larger halo <cit.>, and ii) the use of a new merging scheme to follow the orbits of these satellite galaxies <cit.>. Table <ref> summarizes all the differences between the new GP17 model and the GP14 <cit.> implementation. We review below the changes made, in the same order as they appear in Table <ref>. The GP17 model assumes the IMF from <cit.>. This IMF is widely used in observational derivations, and thus this choice facilitates a more direct comparison between the model results and observational ones. GP17 uses the flexible <cit.> stellar population synthesis (SPS) model (CW09 hereafter). Coupling this SPS model to the galaxy formation model gives very similar global properties for galaxies over a wide range of redshifts and wavelengths to using the <cit.> SPS model <cit.>.
The CW09 SPS model was chosen here over that of <cit.> because it provides greater flexibility to explore variations in the stellar evolution assumptions.

§.§.§ The treatment of gas in satellite galaxies

In the GP17 model the hot gas in satellites is removed gradually, using the model introduced by <cit.>, based on a comparison to hydrodynamical simulations of cluster environments <cit.>. This change has a direct impact on the distribution of specific star formation rates. Compared to the GP14 model, some galaxies from the GP17 model have higher sSFR values, in better agreement with observational inferences <cit.>. This is clearly seen in the sSFR function around log_10(sSFR/Gyr^-1) ∼ -1.5 presented in Fig. <ref>. This choice reduces the fraction of passive model galaxies with M_*< 10^11 h^-1 M_⊙. As shown in Fig. <ref>, the resulting passive fraction is closer to the observational results at z=0, compared with models such as GP14, which assume instantaneous stripping of the hot gas from satellites <cit.>. Note that we have not made a direct attempt to reproduce the observed passive fraction by adjusting the time scale for the hot gas stripping in satellite galaxies, but rather we have simply used the parameters introduced in <cit.>. We leave a detailed exploration of the effect of environmental processes on galaxy properties for another study. The passive fraction at z=0 is obtained using the limit on the specific star formation rate, sSFR = SFR/M_*, proposed in <cit.>, i.e. sSFR < 0.3/t_Hubble(z), where t_Hubble(z) is the Hubble time, t_Hubble=1/H, at redshift z (a numerical sketch of this threshold is given at the end of this subsection). Fig. <ref> shows the z=0 distribution of the sSFR and stellar mass for GP17 model galaxies, together with those from the GP14 model, compared to the limits sSFR = 0.3/t_Hubble(z) and sSFR < 1/t_Hubble(z) (horizontal dotted and dashed lines, respectively). The contours show that the main sequence of star-forming galaxies, i.e. the most densely populated region in the sSFR-M_* plane, is above both these limits, while passively evolving galaxies, i.e. those with low star formation rates, are below them. Fig. <ref> also shows the model galaxy stellar mass function at z=0 compared with observations.
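As an illustration of the threshold above, the sketch below evaluates sSFR = 0.3/t_Hubble(z) for the flat ΛCDM parameters quoted in this section; it is our simplified calculation (matter plus a cosmological constant only), not code from the model itself.

```python
import math

KMSMPC_TO_PER_GYR = 1.0 / 977.8          # 1 km/s/Mpc expressed in Gyr^-1

def hubble(z, h=0.704, om=0.272, ol=0.728):
    """H(z) in Gyr^-1 for the flat matter + Lambda cosmology used here."""
    return 100.0 * h * KMSMPC_TO_PER_GYR * math.sqrt(om * (1 + z) ** 3 + ol)

def passive_ssfr_cut(z):
    """sSFR threshold (Gyr^-1) below which a model galaxy is called passive,
    since sSFR < 0.3 / t_Hubble(z) with t_Hubble = 1/H(z)."""
    return 0.3 * hubble(z)

for z in (0.0, 1.0):
    print(f"z = {z:.1f}: sSFR cut = {passive_ssfr_cut(z):.3f} Gyr^-1")
# z = 0.0: ~0.022 Gyr^-1;  z = 1.0: ~0.037 Gyr^-1
```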
§.§.§ The merging scheme for satellite galaxies

The GP17 model is the first publicly available model to use the new merging scheme introduced by <cit.>. In this merging scheme, satellite galaxies associated with resolved sub-haloes cannot merge with the central galaxy until their host sub-halo does. Satellite galaxies with no associated resolved sub-halo merge with their central galaxy after a time calculated analytically, taking into account dynamical friction and tidal stripping. As described in <cit.>, compared to observations up to z=0.7, the radial distribution of galaxies is too highly concentrated <cit.>. As a result of using the new merging scheme, satellite galaxies merge more quickly with their central galaxy than was previously assumed by the analytical function used <cit.>. This, along with the modification to the radial distribution of satellite galaxies, results in an improved match to the observed two point correlation function at small scales <cit.>.

§.§.§ Calibration of the free parameters

The free parameters in the GP17 model have been calibrated to reproduce the observed luminosity functions [Note that throughout this work all quoted magnitudes are in the AB system, unless specified otherwise.] at z=0 in both the b_J and K-bands <cit.>, as shown in Fig. <ref>, to give reasonable evolution of the UV and V-band luminosity functions, and to reproduce the observed black hole-bulge mass relation (not shown here, but which matches observations as well as in GP14). When calibrating the GP17 model, our aim was to make the smallest number of changes to the GP14 model parameters. A side effect of incorporating the new merging scheme into the model is an increase in the number of massive central galaxies at z=0, which has to be compensated for by modifying the galactic feedback, in order to recover the same level of agreement with the observational datasets used during the calibration of the model parameters. To achieve this, both the efficiency of the supernova feedback and the mass of haloes within which gas cooling stops due to AGN feedback have been reduced. The changes to these two feedback-related parameters allow the GP17 model to match the observed z=0 luminosity functions shown in Fig. <ref> with a χ^2 that is just a factor of 3 larger than that for the luminosity functions from the GP14 model.

§.§ The emission line model

The GP14 model predicts the evolution of the Hα LF reasonably well <cit.>. Hα is a recombination line and thus its unattenuated luminosity is directly proportional to the number of Lyman continuum photons, which is a direct prediction of the model <cit.>. The main uncertainty in the case of the Hα line is the dust attenuation. In the model, the ratio between the [OII] luminosity and the number of Lyman continuum photons is calculated using the HII region models of <cit.>. The model uses by default eight HII region models spanning a range of metallicities, but with the same uniform density of 10 hydrogen particles per cm^3 and one ionising star in the center of the region with an effective temperature of 45000 K. The ionising parameter [Following <cit.>, the ionising parameter at a given radius, r, of the HII region, U(r), is defined here as a dimensionless quantity equal to the ionizing photon flux, Q, per unit area per atomic hydrogen density, n_H, normalised by the speed of light, c: U(r)=Q/(4π r^2 n_H c).] of these HII region models is around 10^-3, with the exact values depending on their metallicity in a non-trivial way. These ionising parameters are typical within the grid of HII regions provided by <cit.>. In this way, the model assumes a nearly invariant ionization parameter. This assumption, although reasonable for recombination lines, is possibly too simplistic in practice for other emission lines such as [OII] <cit.>. Nevertheless, with this caveat in mind, we shall study the predictions of the model for [OII] emitters with this simple model here, and defer the use of a more sophisticated emission line model to a future paper. We have also run the galaxy formation model together with the empirical emission line ratios described in <cit.>. These authors provide line ratios for 5 metallicities, combining the observational database of <cit.> and <cit.> for Z=0.0004 and Z=0.004, and using <cit.> models for higher metallicities, Z=0.008, 0.02, 0.05. <cit.> provide line ratios with respect to the flux of the Hβ line, which they assume to be 4.757 × 10^-13 times the number of hydrogen ionising photons, N_Lyc. For the other hydrogen lines we assume the low-density limit recombination Case B (the typical case for nebulae with observable amounts of gas) and a temperature of 10000 K: F(Lyα)/F(Hβ)=32.7, F(Hα)/F(Hβ)=2.87, F(Hγ)/F(Hβ)=0.466 <cit.>. A minimal numerical sketch of this conversion from ionising photons to line luminosities is given below.
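The sketch uses the quoted Hβ coefficient and Case B ratios; the [OII]/Hβ ratio is deliberately left as an input, since in the model it comes from the tabulated HII-region models and depends on metallicity (the example value below is purely illustrative).

```python
# From the number of hydrogen ionising photons, N_Lyc, to line luminosities.
CASE_B_TO_HBETA = {"Lya": 32.7, "Halpha": 2.87, "Hgamma": 0.466}

def line_luminosities(n_lyc, oii_to_hbeta):
    """n_lyc: ionising photons per second; returns luminosities in erg/s."""
    l_hbeta = 4.757e-13 * n_lyc                 # quoted Hbeta conversion
    lines = {"Hbeta": l_hbeta,
             "OII": oii_to_hbeta * l_hbeta}     # ratio from HII-region grids
    lines.update({k: r * l_hbeta for k, r in CASE_B_TO_HBETA.items()})
    return lines

print(f"{line_luminosities(1e53, oii_to_hbeta=1.0)['OII']:.2e} erg/s")
# -> 4.76e+40 erg/s for N_Lyc = 1e53 s^-1 and an assumed [OII]/Hbeta = 1
```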
These line ratios have been reduced by a factor of 0.7 for gas metallicities Z ≥ 0.008, to account for the absorption of ionising photons within the HII region <cit.>. We have also performed all the analysis presented in this paper using the <cit.> model for HII regions, obtaining very similar results to those presented below, which use the default models from <cit.>. Thus, all the conclusions from this work also hold when the <cit.> models are assumed.

§.§ The dust model

Emission lines can only be detected in galaxies that are not heavily obscured, and thus survey selections targeting ELGs are likely to miss dusty galaxies. In the model, dust is assumed to be present in galaxies in two components: diffuse dust (75%) and molecular clouds (25%). This split is consistent, within a factor of two, with estimates based on observations of nearby galaxies <cit.>. The diffuse component is assumed to follow the distribution of stars. Model stars escape from their birth molecular clouds after 1 Myr (the metallicity is assumed to be the same for the stars and their birth molecular clouds). Given the inclination of the galaxy and the cold gas mass and metallicity, the attenuation by dust at a given wavelength is computed using the results of a radiative transfer model <cit.>.
The VVDS survey was conducted using the VIMOS multi-slit spectrograph on the ESO-VLT, observing galaxies up to z=6.7 over 0.6 deg^2 for the Deep survey and 8.6 deg^2 in the Wide one <cit.>.The eBOSS survey is on-going while the DESI survey has not yet started. Appendix <ref> details the eBOSS and DESI selections, summarised in Table <ref>. Model are selected using the same observational magnitude cuts (second column in Table <ref>), omitting colour cuts designed to remove low redshift galaxies as in this study galaxies are selected from the relevant redshift range already. For eBOSS and DESI selections we include the colour cuts designed to remove stellar contaminants (fourth column in Table <ref>; see Appendix <ref> for further details). A limit on flux has been added (third column in Table <ref>) to select model galaxies with a completeness that mimics the constraints from observational surveys. In the redshift range considered, over 85% of all model galaxies are found to be star-forming. From these, a very small percentage, less than 1% in most cases, is classified as by the restrictive VVDS-Wide, eBOSS and DESI selections. Given the mass resolution of our model, for the VVDS-Deep and DEEP2 cuts, account for at most 11% of the total star forming population at z=0.62, and this percentage decreases with increasing redshift. §.§ Luminosity functionsThe luminosity function for at z=0.62 from the GP17 model is compared in Fig. <ref> to the observational compilation done by <cit.>[<http://projects.ift.uam-csic.es/skies-universes/SUwebsite/index.html>], that includes data from the VVDS-Deep, VVDS-Wide and DEEP2 surveys among others <cit.>. The model LFs are in reasonable agreement with the observations, with differences within a factor of 5 for densities above 10^-5 Mpc^-3h^3dex^-1. Given the similarities between the GP17 and GP14 models, this was expected, as the GP14 model was already shown to be in reasonable agreement with observations at z∼1 <cit.>. Galaxies with an ongoing star-burst dominate the bright end of the model LF, L[OII]>10^42 h^-2 ergs^-1, producing the change in the slope of the LF seen at low number densities in Fig. <ref>. The bright end of the LFs of selected with the DEEP2, VVDS-Wide and VVDS-Deep cuts are also dominated (at the ∼80% level) by spheroid galaxies (i.e. those with a bulge to total mass above 0.5). More than half of these with L[OII]>10^42 h^-2 ergs^-1 have half mass radii smaller than 3 h^-1 kpc. The DEEP2 cut uses the DEIMOS R-band filter response, while the VVDS cuts use the CFHT MegaCam[<http://svo2.cab.inta-csic.es/svo/theory/fps3/index.php?mode=browse&gname=CFHT&gname2=MegaCam>] i-band filter response. The value of the luminosity at which there is a turnover in the number of faint galaxies is sensitive to the particular filter response used. Above this luminosity, the model LF changes by less than 0.1 dex in number density if the R or i bands from DEIMOS, CFHT, PAN-STARRS or DES camera are used (note that not all this bands are used for Fig. <ref>.We note that BC99, an updated version of the <cit.> SPS model, is used by default in most models. As the spectral energy distribution below 912Å can widely vary among different SPS models <cit.>, we verified that using the CW09 SPS model (as done in GP17) has a negligible impact on the luminosity function of . Finally, Fig. <ref> shows that at z=0.62 the model reproduces reasonably well the observed LF for , including the decline in numbers due to the corresponding flux limits (summarised in Table <ref>). 
In Fig. <ref> the predicted LFs of are shown at z=0.76, 0.91, 1.17 and 1.5, sampling the relevant redshift ranges of the current eBOSS and future DESI surveys. The LFs include effects from the five selection criteria summarised in Table <ref> and are compared to the observational data compiled by <cit.>. Fig. <ref> shows only the LFs with dust attenuation included. As noted before, since the model assumes the same attenuation for the emission lines as for the stellar continuum, the predicted luminosity functions inFig. <ref> should be considered as overestimates <cit.>. Nevertheless, the predicted continuum extinction is significant and arguably larger than suggested by observations at z∼ 1, which might compensate for the lack of any additional attenuation being applied to the emission lineluminosities in the model.§.§ Exploring the ELG selection The selected with the cuts presented in Table <ref> are star-forming galaxies <cit.>. Fig. <ref> shows this by presenting the GP17 model SFR-stellar mass plane for all galaxies at z=0.76 and selected by four of the cuts summarized in Table <ref>. Similar trends are found over the redshift range 0.6≤ z ≤1.5, whenever a sufficiently high galaxy number density is used. Most selections are dominated by the cut in line flux. For the mass resolution of our model, in the case of the VVDS-Deep and DEEP2 cuts, the fraction of star forming galaxies classified as , varies by less than 2% when only the cut in flux is applied.The eBOSS and the DESI selections remove the brightest, L [OII]>10^42h^-2 erg s^-1, and the most strongly star-forming galaxies with their selection criteria, as seen in Figs. <ref> and <ref>. The effect of simply imposing a cut in flux in the SFR-stellar mass plane can be seen in Fig. <ref>. We find a clear correlation between the luminosity and the average SFR such that a cut in luminosity is approximately equivalent to selecting galaxies with a minimum SFR. This is with the exception of the most massive galaxies, which are removed when imposing a cut in luminosity (as shown in the top panel of Figs. <ref> and <ref>). Indeed, the most massive galaxies in the model are also those most affected by dust attenuation, in agreement with observational expectations <cit.>. The ELG samples summarised in Table <ref> are limited by their optical apparent magnitudes. Figs. <ref> and <ref> show how this cut in apparent magnitude further reduces the number of low mass galaxies, with respect to a cut only in the luminosity. Brighter galaxies in the optical tend to be brighter and thus galaxies with either low masses or low SFR tend to be removed with a cut in apparent optical magnitude. The distribution of model optical colours remains rather flat with luminosity, and imposing the eBOSS colour cuts at a given redshift reduces the number of selected galaxies in a non-trivial way within the sSFR-stellar mass parameter space. We remind the reader that some of these colours cuts are actually imposed to remove low redshift galaxies, including ELGs, but also to avoid stellar contamination, as described in Appendix <ref>.Further ELG selection characteristics include: (i) the DEEP2 and VVDS-Deep cuts select over 95% of that form stars quiescently; (ii) galaxies with disks with radii greater than 3 h^-1 kpc account for ∼ 60% of the ; (iii) the VVDS-Wide cut selects the brightest model at the highest redshifts, while the eBOSS and DESI cuts remove the brightest model with L [OII]>10^42 h^-2 erg s^-1. 
We note also that the brightest emitters are dominated by spheroids that experience a burst of star formation. Given the above, a rough approximation to select samples at z∼ 1 is to impose a cut in stellar mass, typically M_S<10^11 h^-1 M_⊙ (since massive galaxies in the model tend to be too attenuated and thus too faint to be selected as ) and SFR (which is tightly correlated to the cut in luminosity), at least SFR >1 h^-1 M_⊙ yr^-1.§ THE HOST HALOES OF Model at 0.5< z<1.5 are hosted by haloes with masses above 10^10.3h^-1M_⊙ and mean masses in the range 10^11.41≤ M_ halo(h^-1M_⊙)≤ 10^11.78, as summarised in Table <ref>. From this tableit is clear that the host halo mean masses slightly increase with redshift in the studied range. These model masses are consistent with the estimation from <cit.> for at z=1.47.In this section, the masses of haloes hosting selected as indicated in Table <ref> are furthered explored through the stellar mass-halo mass relation and mean halo occupation distribution.§.§ The stellar mass-halo mass relation The stellar-to-halo mass relation for model is presented in Fig. <ref> at z=0.76, together with the global relation for central galaxies. We only show central galaxies in this plot as the sub-haloes hosting satellite galaxies are being disrupted due to tidal stripping and dynamical friction.At low halo masses, the stellar-to-halo mass relation for the model flattens out as the cut in the emission line flux effectively imposes a lower limit on the stellar mass of the selected galaxies (see Fig. <ref>). Above this flattening the stellar mass of model galaxies increases with their host halo mass, with a change of slope around M_halo∼ 10^12 h^-1 M_⊙, where star formation is most efficient at this redshift <cit.>. At this halo mass the dispersion in the stellar-to-halo mass relation increases, being about 1.1 dex for all centrals in the model and between 0.5 and 0.8 dex for central . This is a behaviour particular to and it is related to the modelling of the growth of bulges <cit.>.For haloes with M_halo∼ 10^12.5 h^-1 M_⊙, the median stellar mass of model is ∼1.5 greater than that of the global population. This is driven by the cut in flux removing low mass galaxies. The selection of removes the most massive star forming galaxies because they are dusty on average and thus, the difference with respect to the global population is smeared out.§.§ The mean halo occupation distributionThe mean halo occupation distribution, , encapsulates the average number of a given type of galaxy hosted by haloes within a certain mass range. is usually parametrised separately for central, ⟨ N ⟩_ cen, and satellite galaxies, ⟨ N ⟩_ sat. When galaxies are selected by their luminosity or stellar mass, ⟨ N ⟩_ cen can be approximately described as a smooth step function that reaches unity for massive enough host haloes, while ⟨ N ⟩_ sat is close to a power law <cit.>. However, when galaxies are selected by their star formation rates, ⟨ N ⟩_ cen does not necessarily reach unity <cit.>. This implies that haloes above a certain mass will not necessarily harbour a star forming galaxy or, in our case, an ELG. For star forming galaxies, the shape of the ⟨ N ⟩_ cen as a function of halo mass can also be very different from a step function and in some cases it can be closer to a Gaussian <cit.>. Fig. <ref> shows the for model , , selected following the specific survey cuts detailed in Table <ref>. does not reach unity for all the survey selections in the explored redshift range (see also Fig. 
<ref>). This result is fundamental for interpreting the observed clustering of ELGs, as the standard expectation for ⟨ N ⟩_ cen is to tend to unity for large halo masses. This point is further emphasized by splitting for galaxies selected with DEEP2-like cuts into satellites andcentrals. The of model central , , is very different from the canonical smooth step function, which is usually adequate to describestellar mass threshold samples and is the basis of (sub) halo abundance matching <cit.>.We further discuss the in  <ref>.On the other hand the predicted ⟨ N ⟩_ sat of closely follows the canonical power law above a minimum halo mass that is typically an order of magnitude larger than the minimum halo mass required to host a central galaxy with the same selection. In the cases studied, less than 10% of the modelled are satellite galaxies, and thus there are very few haloes hosting even one satellite emitter. The redshift evolution of the is presented in Fig. <ref> for both galaxies selected with the VVDS-Deep cuts and for a simple cut in the flux line, F_[OII]>1.9× 10^-17 ergs^-1cm^-2. There is a clear drop with redshift for all halo masses in the average halo occupancy of model . This is mostly driven by the survey magnitude cut, as a simple flux cut reduces the mean occupation much more gradually with redshift. Over the redshift range probed, the minimum mass of haloes hosting increases with redshift, as they are selected for a fixed cut in either flux line alone or also in apparent magnitude. Finally we note that similar shapes are seen for the other cuts considered in this redshift range, with the main change being in the average number of galaxies occupying a given mass halo. §.§ central galaxiesAs seen in Fig. <ref> already, the is clearly different from a step function. Note that this shape cannot be recovered if a cut in SFR and stellar mass is applied, similar to the rough approximation to select suggested at the end of  <ref>. The shape of the seen in Fig. <ref> for model central is closer to a Gaussian plus a step function or even a power law. This might point to the contribution of at least two different types of model central . We have explored splitting central in different ways, including separating those experiencing a burst of star formation. When splitting central into disks and spheroid galaxies, using a bulge over total mass ratio of 0.5 to set the disk-spheroid boundary, we recover an that can be roughly described as an asymmetric Gaussian, for disk centrals, plus a step function that rises slowly to a plateau, for bulges or spheroid centrals. This is shown in Fig. <ref> for selected with DEEP2 cuts at z=0.62, but similar results are found for other selections and redshifts, as long as the number density of galaxies is sufficiently large for the split to remain meaningful. Surveys such as eBOSS and DESI will obtain low resolution spectra for which are unlikely to be sufficient to gather the information needed to split the population into disks and spheroids. Within the studied redshift range, model that are central disks, tend to be less massive, have lower stellar metallicities and larger sizes than central spheroids, for all the selections presented in Table <ref>. In particular, for a given halo mass central galaxies that are spheroids have stellar masses up to a factor of 1.6 larger than central discs. 
However, since the bulge to total mass ratio varies smoothly with stellar mass, the distributions of these model properties have a large overlap for central disks and spheroids and thus, it is unclear if they could be used observationally to split the central population. A split into three components might describe better the presented in Fig. <ref>. However, on top of the becoming noisy for large halo masses it will already be difficult to split observed central into disks and spheroids to test our model, as in most cases only spectroscopic information is available. Thus, to encapsulate into an illustrative function the shape of the for model central , we have opted to propose a function that adds together a softly rising step function for central spheroids (or bulges), b, with an asymmetric Gaussian for central disks, d: ⟨ N ⟩_ cen = f_b/2( 1 +erf(log_10 M_*-log_10 M_b/σ) ) + f_d/2e^α_d/2( 2 log_10 M_d + α_dσ^2 - 2 log_10 M_* )× erfc(log_10 M_d + α_dσ ^2 -log_10 M_*/σ√(2))In the above equation, erf is the error function (erfc=1- erf)[erf(x)=2/√(π)∫^x_0e^-t^2dt], which behaves like a softly rising step function. M_b gives the characteristic halo mass of the error function for the central bulges, and M_d gives the average halo mass of the Gaussian component for central disks. f_b and f_d control the normalisation of the error function and the Gaussian component, respectively. σ controls the rise of the error function and the width of the asymmetric Gaussian. The level of asymmetry of the Gaussian component is controlled both by σ and α_d.As an illustration, Fig. <ref> shows in grey the function described in Eq. <ref> with parameters: log_10M_b=11.5, log_10M_d=11.0, f_b=0.05, f_d=1, σ=0.09, α _d=1.7. Adequately fitting the shape of the with Eq. <ref> is out of the scope of this paper. Moreover, an individual fit to disk and spheroid central galaxies will be more adequate. We defer such an exploration because, as it will be discussed in the next section  <ref>, it is unclear that our model is producing a large enough number of that are also satellite galaxies compared to the expectations from observations. Moreover, the proposed split might not actually be achieved observationally. Nevertheless, given that uncommon features in the mean HOD can affect the inferred galaxy clustering <cit.>, our proposed Eq. <ref> is a useful tool to explore the impact that such a mean HOD has when interpreting mock catalogues generated for cosmological purposes.§ THE CLUSTERING OFIn this section we explore how trace the dark matter distribution. In Fig. <ref> we present a 50× 50× 10 h^-3Mpc^3 slice of the whole simulation box at redshift z=1, highlighting in grey the cosmic web of the dark matter, together with the location of (filled circles) and of dark matter haloes above 10^11.8h^-1M_⊙ (open circles). The environment where model are found is not the densest as expected for other cosmological tracers such as luminous red galaxies, but instead the are also found in filamentary structures.Below we explore the two point correlation function monopole in both real and redshift space for model . The two point correlation function has been obtained using two algorithms that give similar results; the plots show the calculation from the publicly available CUTE code <cit.>. The linear bias is also calculated in real space and we compare it with the expectations for eBOSS and DESI ( <ref>). 
<cit.> measured the redshift space monopole for a sample at z∼ 0.8 of g-band selected galaxies that they claim is comparable to a selection of [OII] emitters. We also make cuts similar to those in <cit.> in order to compare the results for g-band selected galaxies and [OII] emitters with their observed clustering ( <ref>). §.§ The correlation function and galaxy bias in real-space Fig. <ref> shows the real-space two point correlation function, ξ(r), for model galaxies at z=0.76 selected following the cuts in Table <ref>. The different galaxy selections result in a very similar ξ(r), in particular on scales above 0.1 h^-1 Mpc. The same is true for the other redshifts explored. Compared to the dark matter real-space two point correlation function, ξ_ DM, model galaxies closely follow the dark matter clustering for comoving separations greater than ∼1 h^-1 Mpc. The real space bias, √(ξ_gg/ξ_DM), is practically unity and constant for comoving separations greater than 2 h^-1 Mpc. In comparison, SDSS luminous red galaxies (LRGs) have a bias of ∼ 1.7 σ_8(0)/σ_8(z)[σ_8(z) gives the normalization of the density fluctuations in linear theory and has a value of 0.81 at z=0 for the cosmology assumed in this work.]. From a pilot study, eBOSS ELGs are expected to be linearly biased, with a bias of ∼ 1.0 σ_8(0)/σ_8(z) <cit.>. For the cosmology assumed in this study, eBOSS LRGs are then expected to have a bias of 2.7 at z=1, and ELGs have b=1.62 at the same redshift[The ratio σ_8(0)/σ_8(z) has been obtained using the ICRAR cosmological calculator <http://cosmocalc.icrar.org/>.]. Fig. <ref> shows the evolution of the bias over the redshift range of interest for this study for DEEP2 model galaxies. For both the DEEP2 and VVDS-Deep selections, the bias on large scales increases by a factor of 1.2 from z=0.6 to z=1.2. For all the considered selections, when the propagated Poisson errors are below σ_b=0.2, the linear bias remains between 1 and 1.4 in all cases. Given the predicted small fraction of [OII] emitters that are satellite galaxies, these galaxies have the potential to be extraordinary cosmological probes for redshift space distortion analysis, as they are possibly almost linearly biased for the 2-halo term. §.§ The correlation function in redshift-space The redshift-space two point correlation function is shown in Fig. <ref> for model galaxies selected with two different flux and g-band cuts. Model galaxies with a brighter [OII] flux are less clustered in both real and redshift space. This contradicts what is found observationally at z=0 <cit.> and is related to the number of star-forming satellite galaxies in the model (see Fig. <ref>). Samples of model [OII] emitters are hosted by haloes with minimum masses that increase with the flux cut. However, the fraction of satellite [OII] emitters decreases for brighter cuts in flux. At z=0.91, the percentage of satellites is reduced from 20% to 4% when the selection in flux is changed from 10^-18 erg s^-1 cm^-2 to 10^-16 erg s^-1 cm^-2. This reduction in the numbers of model satellite galaxies in bright samples of [OII] emitters lowers the average mass of their host haloes, reducing the bias and clustering predicted by the model. <cit.> measured the clustering of a g-band selected galaxy sample, with an average density of 500 galaxies per deg^2 in the redshift range 0.6<z<1.7. The selection of galaxies based on their apparent g-band magnitude around z=1 is very close to selecting ELGs. <cit.> showed that the g-band magnitude is correlated with the [OII] luminosity in the studied redshift range. We also find such a correlation for [OII] emitters in the model.
This correlation is due to the fact that emission lines are directly related to the rest-frame UV luminosity, as this gives a measure of the number of ionizing photons. The two point correlation functions for galaxies with 20<g<22.8 and a colour cut to remove low redshift galaxies, as measured by <cit.>, are shown with grey symbols in Fig. <ref>. In this figure we compare model galaxies selected with the same g-band cut to the results from <cit.>. Note that the clustering of model galaxies with F_[OII]>10^-18 erg s^-1 cm^-2 and 20<g<22.8 overlaps for separations above 10 h^-1 Mpc, and below this separation they are comparable. The reduced χ^2 is 3.1 when comparing the clustering of model galaxies with 20<g<22.8 to that of <cit.>. The reduced χ^2 decreases to ∼2.6 if the faint g-band cut is changed by 0.6 magnitudes, to 20<g<22.2. Model galaxies appear to be less clustered than the current observations of g-band selected samples. <cit.> used weak lensing to estimate the typical mass of haloes hosting g-band selected galaxies, finding (1.25±0.45)× 10^12 h^-1 M_⊙. Within the same redshift range, the model g-band sample is hosted by haloes with an average mass of ∼ 10^11.8 h^-1 M_⊙, consistent with the values reported observationally, although somewhat on the low side. <cit.> also estimated the fraction of satellite g-band selected galaxies using a modified sub-halo abundance matching method that accounts for the incompleteness of small samples of galaxies that do not populate every halo. The model that best fits their measured clustering had ∼20% satellite galaxies, while here we find that satellites account for only 2% of our sample. Both aspects, the lower satellite fraction and the slightly lower host halo masses, contribute to explaining the lower two point correlation function obtained for model g-band selected galaxies in comparison to the observational results from <cit.>. This result suggests that too large a fraction of model satellite galaxies are not forming stars at z∼ 1. In fact, even at z=0 we find too large a fraction of low mass galaxies with a very small star formation rate, compared to the observations (see Fig. <ref>; note that the problem is even larger for the GP14 model). The obvious place to start improving the model would be to allow satellite galaxies to retain their gas for longer, so they can have higher star formation rates on average. However, a thorough exploration of how expelled gas is reincorporated at different cosmic times might be needed <cit.>. § CONCLUSIONS The GP17 semi-analytical model is a new hierarchical model of galaxy formation and evolution that incorporates the merger scheme described in <cit.> and the gradual stripping of hot gas in merging satellite galaxies <cit.>. The GP17 model also includes a simple model for emission lines in star-forming galaxies that uses the number of ionizing photons and the metallicity of a galaxy to predict emission line luminosities based on the properties of a typical HII region <cit.>. The free parameters in the GP17 model have been chosen to reproduce at z=0 the rest-frame luminosity functions (LF) in the b_J and K bands and also to improve the match to the local passive fraction of galaxies. Using the GP17 model, we study the properties of [OII] emitters. These are the dominant emission line galaxies (ELGs) selected by optical-based surveys at 0.5<z<1.5. In particular, we have applied emission line flux, magnitude and colour cuts to the model galaxies, to mimic five observational surveys: DEEP2, VVDS-Deep, VVDS-Wide, eBOSS and DESI, as summarised in Table <ref>.
Over 99% of the selected model [OII] emitters are actively forming stars, and over 90% are central galaxies. The GP17 LFs of model [OII] emitters are in reasonable agreement with observations (see  <ref>). For this work, we have assumed that the dust attenuation experienced by the emission lines is the same as that for the stellar continuum. However, the assumed dust attenuation in the emission lines is expected to be a lower limit, which may alter the LF comparison. The bright end of the LF of [OII] emitters is dominated by galaxies undergoing a starburst. The luminosity at which this population dominates depends on the interplay between the stellar and the AGN feedback. For model galaxies, we find that the cut in luminosity removes galaxies below a certain SFR value, but that it also removes the most massive galaxies in the sample due to dust attenuation of the line (see  <ref>). Model [OII] emitters are typically hosted by haloes with masses above 10^10.3 h^-1 M_⊙ and mean masses in the range 10^11.41≤ M_ halo(h^-1M_⊙)≤ 10^11.78 (see Table <ref>). For haloes with M_halo∼ 10^12.5 M_⊙ h^-1, model [OII] emitters have median stellar masses a factor of 1.5 above the global population. This is driven by the cut on luminosity being directly translated into a cut in SFR, which in turn is correlated with stellar mass; thus, low mass galaxies are also being removed from the selection. As expected for star forming galaxies, the mean halo occupation of central [OII] emitters, ⟨ N ⟩_ cen, cannot be described by a step function that reaches unity above a certain host halo mass (the typical shape for mass selected galaxies). The ⟨ N ⟩_ cen can be approximately decomposed into an asymmetric Gaussian for central disk galaxies, i.e. those with a bulge-to-total mass ratio below 0.5, and a smoothly rising step function for central spheroids, which, in general, would not reach unity (see  <ref>). This last point implies that not every dark matter halo is expected to host an ELG, which is particularly relevant for HOD models used to populate very large dark matter simulations for cosmological purposes. Model [OII] emitters at z ∼ 1 have a real-space two point correlation function that closely follows that of the underlying dark matter above separations of 1 h^-1 Mpc, resulting in a linear bias close to unity. This is lower than the preliminary results for eBOSS ELGs, by a factor of ∼1.6 (see  <ref>). We have compared the clustering of g-band selected model galaxies with the observational results from <cit.>, who argue that the cut 20<g<22.8 selects ELGs at 0.6<z<1, once an additional colour cut is applied to remove lower redshift galaxies. The typical mass of haloes hosting such g-band selected galaxies, as inferred from weak lensing in <cit.>, is consistent with the values we find for our corresponding model galaxies (see  <ref>). However, our model g-band selected galaxies are slightly less clustered in redshift space compared to the findings of <cit.>. This is mostly due to the smaller fraction of g-band selected satellites in GP17, ∼ 2%, compared to their ∼ 20%. <cit.> inferred the satellite fraction from a modified sub-halo abundance matching model that accounts for incompleteness, as not all haloes above a certain mass contain a g-band selected galaxy. This is an indication that too large a fraction of model satellite galaxies are not forming stars at z∼ 1. This suggests that our model of galaxy formation and evolution could be improved by allowing satellite galaxies to retain their hot halo gas for longer, so that their average star formation rate is increased.
However, other possibilities should also be explored, such as the reincorporation of expelled gas through cosmic time, which will most likely also have an impact on the selection of star forming satellite galaxies. Future theoretical studies of emission line galaxies will benefit from the use of a more realistic model for the mechanisms that produce emission lines. Given the small fraction of [OII] emitters that are satellite galaxies, ELGs have the potential to become ideal candidates for redshift-space distortion studies at different cosmic times, due to the ease of modelling their clustering. However, the non-canonical shape of their mean halo occupation distribution should be studied and possibly accounted for in cosmological studies. § ACKNOWLEDGEMENTS The authors would like to thank Will Percival, Will Cowley, Coleman Krawczyk, Carlos Alvarez, Marc Kassis, Zheng Zheng, Charles Jose and Analía Smith-Castelli for their helpful comments. VGP acknowledges support from the University of Portsmouth through the Dennis Sciama Fellowship award and past support from a European Research Council Starting Grant (DEGAS-259586). JRH acknowledges support from the Royal Astronomical Society Grant for a summer studentship. VGP and JRH thank Alastair Edge for his help getting this studentship. PN, CMB, CL, NM and JH were supported by the Science and Technology Facilities Council (ST/L00075X/1). PN acknowledges the support of the Royal Society through the award of a University Research Fellowship, and the European Research Council, through receipt of a Starting Grant (DEGAS-259586). This work used the DiRAC Data Centric system at Durham University, operated by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). This equipment was funded by BIS National E-infrastructure capital grant ST/K00042X/1, STFC capital grants ST/H008519/1 and ST/K00087X/1, STFC DiRAC Operations grant ST/K003267/1 and Durham University. DiRAC is part of the National E-Infrastructure. This work has benefited from the publicly available programming language python. § COLOUR CUTS Fig. <ref> presents the location of model galaxies at redshifts z=0.62, 0.76, 0.91, 1.17, 1.5 in the (g-r)_ DECam vs. (r-z)_ DECam colour-colour space, compared to the regions delimited by the colour cuts decam180 described in <cit.> and summarized in Table <ref> as eBOSS (top panel), and the DESI selection <cit.> (bottom panel). The top panel shows model galaxies with Flux_ [OII]>10^-16 erg s^-1 cm^-2 and 22.1< g_AB^ DECam < 22.8. This magnitude cut mostly removes red galaxies from the colour-colour plot in Fig. <ref>. The bottom panel shows model galaxies with Flux_ [OII]>8× 10^-17 erg s^-1 cm^-2 and r_AB^ DECam < 23.4. The colours of model galaxies are roughly consistent with the regions defined tentatively for eBOSS <cit.> and DESI <cit.> to select ELGs at z∼ 1. A more detailed comparison shows the model (g-r)_ DECam to be about 0.2 magnitudes redder than the observations presented in <cit.>. In the selection of model galaxies we use total magnitudes. The difference with respect to SDSS model magnitudes is expected to be less than 10% for most model galaxies <cit.>. Fig. <ref> also shows the location of stars in the (g-r)_ DECam vs. (r-z)_ DECam space <cit.>. These overlap with the region occupied by galaxies at z=0.62 and with the bluest galaxies at 0.6<z<1.6. Both the eBOSS and DESI selections reported here trade off selecting high redshift galaxies against minimising the stellar contamination.
Let G be a compact Lie group. (Compact) topological G-manifolds have the G-homotopy type of (finite-dimensional) countable G-CW complexes. This partly generalizes Elfving's theorem for locally linear G-manifolds <cit.>. § TOPOLOGICAL MANIFOLDS By topological manifold, we shall mean a separable metrizable space M such that each point x ∈ M has a neighborhood homeomorphic to some ℝ^n. Notice we allow n to depend on x, so that different components of M may have different dimensions. Note that separable and metrizable imply paracompact <cit.> and second-countable. Let M be a topological manifold. There exists an increasing sequence M_0 ⊆ M_1 ⊆…⊆ M_i ⊆… of open sets whose union is M such that each closure M̅_i in M both is compact and is a neighborhood deformation retract in M_i+1. It is a lament of the author to not have discovered a proof from basic topology. If M is compact, then simply use the constant sequence M_i=M. Otherwise, assume M is noncompact. Since M has only countably many components, say {M^c}_c=0^∞, and by forming the finite unions M_i := ⋃_c+j=i M^c_j when such sequences {M^c_j}_j=0^∞ exist, we may further assume that M is connected, say with n := dim M. First suppose n ⩽ 4. If n=1, then M is homeomorphic to either S^1 or ℝ <cit.>, hence admits a (C^∞) smooth structure. If n=2, then M admits a triangulation by Radó <cit.>, hence a smooth structure by Richards <cit.>[One can avoid this later classification of Richards and instead use elementary (1920s) methods: the regular neighborhoods of simplices in the second barycentric subdivision are PL handles (see <cit.> for all n); each attaching map of a 1-handle or a 2-handle is isotopic to a smooth one.]. If n=3, then M admits a triangulation by Moïse <cit.>, hence a smooth structure by Cairns <cit.>. If n=4, then M admits a smooth structure by Freedman–Quinn <cit.>, since M is noncompact and connected. In any case, we may select a smooth structure on M. There exists a Morse function f: M → [0,∞) with each i ∈ ℕ a regular value <cit.> and each M_i := f^-1[0,i) precompact <cit.>. Furthermore, each M_i is a neighborhood deformation retract in M_i+1 <cit.>. Now suppose n>4. The topological manifold M admits a topological handlebody structure M_0⊂…⊂M_i⊂…, by Kirby–Siebenmann <cit.> if n>5 and by Quinn <cit.> if n=5. That is, each M_i is open in M, the closure M̅_i is compact, the frontier ∂M_i := M̅_i - M_i is a bicollared topological submanifold of M (that is, M_i is clean in M), and M_i+1 is the union of M_i and a handle. So, since ∂M_i is bicollared, each M_i is a neighborhood deformation retract in M_i+1. An absolute neighborhood retract (with respect to the class of metric spaces, denoted ANR) is a metrizable space such that any closed embedding into a metric space admits a retraction of a neighborhood to the embedded image <cit.>. A 1951 theorem of O Hanner has this nonseparable generalization <cit.>. Topological manifolds are absolute neighborhood retracts. § EQUIVARIANT TOPOLOGICAL MANIFOLDS A G-cofibration is a G-map with the G-homotopy extension property to any G-space. Some G-homotopy commutative squares can be made strictly commutative. Let G be a topological group. Let A, B, C, D be (topological) G-spaces. Let i: A → B be a G-cofibration. Let f: A → C and g: B → D and h: C → D be (continuous) G-maps. Suppose that g∘ i is G-homotopic to h ∘ f.
Then g is equivariantly homotopic to a G-map g': B → D such that g' ∘ i = h ∘ f. Write H: A × [0,1] → D for the G-homotopy from g ∘ i to h ∘ f. Since i: A → B is a G-cofibration, there exists a G-homotopy E: B × [0,1] → D from g to some g' such that E ∘ (i ×𝕀_[0,1]) = H. Then g' ∘ i = H|(A ×{1}) = h ∘ f. Recall that G-cofibrant is equivalent to G-NDR for closed G-subsets <cit.>. Let G be a topological group. A G-space X is proper if, for the continuous map G × X → X × X ; (g,x) ⟼ (gx,x), the image of a closed set is closed and the preimage of a compact set is compact <cit.>. By a topological G-manifold, we mean a topological G-space M such that, for each closed subgroup H of G, the H-fixed subspace is a topological manifold: M^H := {x ∈ M | ∀ g ∈ H : gx=x }. Indeed, the condition that M^H be locally euclidean is not automatic <cit.>. [Bing] A C_2-action on ℝ^4 exists with fixed set not a C^0-manifold. Let M be a topological manifold with the action of a Lie group G. For any prime p, the fixed set M^H is a ℤ_p-cohomology manifold if H ⩽ G is a finite p-group (Smith <cit.>) or a toral group (Conner–Floyd <cit.>). T Matumoto introduced the notion of a G-CW complex for any G <cit.>. Let G be a compact Lie group. Any topological G-manifold is G-homotopy equivalent to a G-equivariant countable CW complex. Furthermore, if the manifold is compact, then the CW complex can be selected to be finite-dimensional. Let M be a topological G-manifold. By Lemma <ref>, each M^H is an ANR. Also, by the Bredon–Floyd theorem, each compact set in M has only finitely many[An example with infinitely many orbit types is M=ℤ× S^1 with U_1-action u(n,z) := (n,u^n z).] conjugacy classes of isotropy groups <cit.>. Consider the increasing sequence {M_i}_i=0^∞ of Lemma <ref>. We may assume that each open set M_i is G-invariant by replacement with its G-saturation G M_i; indeed, the compactness of G implies the compactness of M̅_i <cit.>. Then each M_i, hence M̅_i, has finitely many orbit types. Also, the open set M_i^H = M_i ∩ M^H in M^H is an ANR by Hanner <cit.>. So, since M̅_i is a separable metrizable finite-dimensional locally compact space, by a criterion of Jaworowski <cit.>[Beware the G-Wojdysławski theorem therein is incorrect and must be amended by <cit.>.], we obtain that M̅_i is a G-ENR. That is, there exist a closed G-embedding of M̅_i in a euclidean G-space (equipped with a smooth orthogonal representation of G), an open G-neighborhood U of M̅_i, and a G-retraction r: U → M̅_i. By a theorem of Illman <cit.>, the smooth G-manifold U admits a G-CW structure, finite-dimensional and countable. Write e: M̅_i → U for the G-inclusion. By the G-version of Mather's trick <cit.>, the mapping torus T(r ∘ e) is G-homotopy equivalent to T(e ∘ r). Note that T(r ∘ e) is G-homeomorphic to M̅_i × S^1 with trivial G-action on S^1. By G-cellular approximation <cit.>, the G-map e ∘ r: U → U is G-homotopic to a cellular G-map c: U → U. Then T(e ∘ r) is G-homotopy equivalent to the finite-dimensional countable G-CW complex T(c). Thus M̅_i × S^1, hence the infinite cyclic cover M̅_i × ℝ ≃ M̅_i, is G-homotopy equivalent to a finite-dimensional countable G-CW complex K_i, namely the bi-infinite mapping telescope of c. Thus, for each i, we obtain a G-homotopy equivalence f_i: M̅_i → K_i. Choose a G-homotopy inverse f̅_i: K_i → M̅_i. Write c_i: M̅_i → M̅_i+1 for the G-cofibrant inclusion. By G-cellular approximation <cit.>, the composite f_i+1∘ c_i ∘ f̅_i is G-homotopic to a cellular G-map g_i : K_i → K_i+1. Recursively by Lemma <ref> for each i, we may reselect f_i+1 up to G-homotopy such that f_i+1∘ c_i = g_i ∘ f_i holds.
These assemble to a G-homotopy equivalence of the mapping telescopes f: colim_i ∈ ℕ (M_i,c_i) → K := colim_i ∈ ℕ (K_i, g_i). Thus, as the c_i are cofibrations, the topological G-manifold M = colim_i (M_i, c_i) is G-homotopy equivalent to the countable G-CW complex K (<cit.>). Notice that, since the dimension of the euclidean space equivariantly containing M_i may be unbounded over all i, the CW complex K may not be finite-dimensional. § EXAMPLES AND RELATED RESULTS [Bing] Consider the double D := E ∪_A E of the closed exterior E in S^3 of the Alexander horned sphere A ≈ S^2. This double has an obvious involution r_B that interchanges the two pieces and leaves the horned sphere fixed pointwise. R H Bing showed that D is homeomorphic to the 3-sphere <cit.>. By <ref>, (S^3,r_B) has the C_2-homotopy type of a finite-dimensional countable C_2-CW complex. [Montgomery–Zippin] Adaptation of Bing's ideas produces an involution r_MZ of S^3 with fixed set a wildly embedded circle <cit.>. So (S^3,r_MZ) also has the C_2-homotopy type of a finite-dimensional countable C_2-CW complex. [Lininger] For each k ⩾ 3, there exist uncountably many inequivalent free U_1-actions on S^2k-1 with quotient not a C^0-manifold <cit.>. Each has the U_1-homotopy type of a finite-dimensional countable U_1-CW complex. Indeed, using <cit.> for mutation[His first step is C^0-existence of a product tube around a principal orbit <cit.>.] of S^2k-1 → P_k keeps isotropy groups trivial. The case of G finite in Theorem <ref> was proven by S Kwasik <cit.>. However, in his first step, he implicitly assumed the continuity of the G-action, defined by g · f := (x ↦ f(g^-1 x)), on the Banach space B(X) of bounded continuous functions X → ℝ equipped with the sup-norm. This popular assumption was implicit in the infamous assertion of an equivariant version of Wojdysławski's theorem. Non-equivariantly it states that the Kuratowski embedding, defined by x ⟼ (y ↦ d(x,y)), of a metrizable space X into B(X) <cit.>, has image closed in its convex hull, where d is any bounded metric on X <cit.>. Nonetheless, Kwasik's assumption is indeed true and was proven later by S Antonyan; it follows from <cit.> that it holds when G is a discrete group. Continuity of the G-action on B(X) fails if G = U_1 <cit.>, a flaw in <cit.>. Let Γ be a virtually torsionfree[Virtually torsionfree holds necessarily for all finitely generated Γ < GL_n(ℂ) by Selberg <cit.>. Deligne gives an example of this property failing for a lattice Γ in a nonlinear Lie group <cit.>. I conjecture that this corollary is true if "virtually torsionfree" is replaced with "residually finite."] discrete group. Any topological Γ-manifold with proper action has the Γ-homotopy type of a countable Γ-CW complex. Moreover, if the action is cocompact, then the CW complex can be finite-dimensional. Using intersection with finitely many conjugates, there is a normal, torsionfree subgroup Γ^+ of Γ. Write G := Γ/Γ^+ for the finite quotient group. Since the Γ-action on the given topological Γ-manifold M^+ is properly discontinuous and Γ^+ is torsionfree, M^+ is a regular Γ^+-covering space of the orbit space M := M^+/Γ^+. Thus, by the evenly covered property, M is a topological G-manifold.
Therefore, by Theorem <ref> (or just <cit.> amended by Remark <ref>), there is a G-homotopy equivalence f: X → M from a G-equivariant countable CW complex X. Write X^+ := f^* M^+ and consider the pullback diagram of Γ-spaces and Γ-maps, with horizontal maps f^+: X^+ → M^+ and f: X → M, and vertical covering maps f^*q: X^+ → X and q: M^+ → M. Since q is a regular Γ^+-covering map, the induced map f^*q given by inclusion-projection is also a regular Γ^+-covering map. Then the covering space X^+ is a CW complex <cit.>. This Γ-invariant CW structure on X^+ is a countable Γ-CW complex <cit.>. The induced map f^+ is a Γ-homotopy equivalence. An application of Corollary <ref>: any such topological Γ-manifold, whose fixed set of any finite subgroup is contractible, is a model for the classifying space E̲Γ for proper actions; the Γ-CW condition is assumed for the existence of classifying maps. This corollary thus simplifies the fundamentally complicated proofs of <cit.> <cit.>. Let G be a topological group. A locally linear G-manifold <cit.> (`locally smooth' therein) is a proper G-space M such that, for each x ∈ M, there are a G_x-invariant subset x ∈ S ⊂ M and a homeomorphism S ≈ ℝ^k equivariant with respect to an O(k)-representation of the isotropy group G_x := { g ∈ G | gx=x } such that the multiplication map G ×_G_x S → M is an open embedding. Any smooth G-manifold is a locally linear G-manifold <cit.>. M Freedman constructed a nonsmoothable free involution r_F on the 4-sphere (more in <cit.>). Hence (S^4,r_F) is a locally linear C_2-manifold that is not a smooth C_2-manifold. F Quinn constructed a locally linear G-disc Δ, where G=C_15⋊_2 C_4, such that Δ does not contain a finite G-CW complex in its G-homotopy type <cit.>. So for non-smooth actions there is no G-analog of the Borsuk Conjecture <cit.>. Let G be any Lie group, and further assume that all actions are proper in the more specialized sense of Palais <cit.>. E Elfving concluded that any locally linear G-manifold has the G-homotopy type of a G-CW complex <cit.>. To contrast with Theorem <ref>, the input type of group is broader, but the input type of manifold is restricted, and the output type of CW complex is less sharp. Philosophically, for a homotopy-type result, one does not need the homeomorphism structure of locally linear tubes, but simply equivariant neighborhood retractions. § COUNTABILITY FROM SEPARABILITY By a G-ANR we mean a G-space such that every closed G-embedding into a G-space with G-invariant metric has its image a G-retract of a G-neighborhood. A G-simplicial complex is countable if it has countably many G-simplices G/H ×Δ^k. Antonyan–Elfving showed, for any locally compact group G, that any metric G-ANR has the G-homotopy type of a G-CW complex <cit.>. We rework their proof to provide a parallel result, one which has a restricted input to get a restricted output. Let G be a locally compact Hausdorff group. If a metric G-ANR is separable, then it is G-homotopy equivalent to a countable G-simplicial complex. A special case is when G is a compact Lie group, and it was proven incorrectly by Murayama <cit.>. We now sharpen Theorem <ref> to include countability. Let G be a compact Lie group. Any topological G-manifold with only finitely many G-orbit types has the G-homotopy type of a countable G-complex. S Kwasik proved the case of G a finite group <cit.>, modulo Remark <ref>. Let M be a topological G-manifold; since G is compact, the action is automatically Palais-proper. Recall that any topological manifold is an absolute neighborhood retract (ANR). Then each fixed subspace M^H is an ANR.
So, since M is paracompact metrizable, by Jaworowski's criterion, M is a G-ANR <cit.>[The main proof therein implicitly uses a G-version of the ANR criterion <cit.>.]. Therefore, since M is separable, we are done by Theorem <ref>. To begin, <cit.> is an equivariant generalization of Wojdysławski's properties of Kuratowski's embedding i: X → B(X), which was stated in Remark <ref>. Therein the action of G on the set B(X) makes the function i equivariant. Let G be a countably compact Hausdorff group. Let X be a metric G-space. There is a G-invariant closed linear subspace L ⊃ i(X) of B(X) on which the G-action is continuous, such that i(X) is closed in the L-convex hull. Recall that a G-Banach space is a G-linear space with a complete G-invariant norm. Let G be a countably compact Hausdorff group. Any separable metric G-space equivariantly embeds into a separable G-Banach space so that the embedded image is closed in its convex hull. Let X be a separable metric G-space. By Lemma <ref>, there is a G-Banach space L and a G-embedding i: X → L so that i(X) is closed in its convex hull. There exists a countable dense subset S of X. Write B for the closure of the linear span of i(S) in L. Observe that B is separable, contains i(X), and is G-invariant. § APPLICATION OF THE HILBERT–SMITH CONJECTURE An early attempt on Hilbert's fifth problem (1900) is this approximation <cit.>. Any compact Hausdorff group is isomorphic to the filtered limit of an infinite tower of compact Lie groups and smooth epimorphisms. We outline a later proof, as it is a beautiful interaction of algebra, analysis, and topology <cit.>. Haar (1933) showed that any locally compact Hausdorff group G admits a translation-invariant measure μ on its Borel σ-algebra. Since G is compact, we take μ to be a probability measure. Urysohn's lemma (1925) implies G is completely Hausdorff. So on the Hilbert space ℋ := L^2(G,μ;ℂ) the G-action via pre-multiplication is faithful. Peter–Weyl (1927) showed the sum of all finite-dimensional G-invariant ℂ-linear subspaces is dense in ℋ. Thus each nonidentity element of G acts nontrivially on such a subspace of ℋ. So, since G is compact, it follows that each neighborhood B of the identity e in G contains the kernel N of a finite product of (finite-dimensional) unitary representations. That is, B ⊃ N ◃ G and G/N is monomorphic to a finite product of unitary groups. So G/N is a Lie group (von Neumann, 1929). If G is first-countable, then e has a countable decreasing basis {B_i}_i=0^∞, so we may also take the corresponding sequence {N_i}_i=0^∞ of closed normal subgroups of G, with each G/N_i a (compact) Lie group, to be decreasing. So, since ⋂_i=0^∞ N_i is trivial, the homomorphism G → lim_i (G/N_i) is injective. It is surjective, since for any {a_i N_i }_i=0^∞ in the limit, the intersection of the descending chain of the compact sets a_i N_i ⊂ G is nonempty. Weil removed first-countability by limiting over a larger directed set <cit.>. Consider the set ℱ of finite subsets of G-{e}, partially ordered by inclusion; note it is directed, as I,J ∈ ℱ implies I ∪ J ∈ ℱ. For each e ≠ x ∈ G, our unitary representation M_x: G → U_d(x) satisfies M_x(x)≠𝕀; call the kernel N_x ◃ G. For each I ∈ ℱ, define M_I := ∏_x ∈ I M_x with kernel N_I = ⋂_x ∈ I N_x, so that again G/N_I ≅ M_I(G) ⩽ ∏_x ∈ I U_d(x) is a compact Lie group. Thus I ⟼ G/N_I is a functor from ℱ^op to the category of compact Lie groups and smooth epimorphisms. Since each e ≠ x ∉ N_x, the homomorphism q: G → lim_I ∈ ℱ (G/N_I) is injective.
Since {a_I N_I}_I ∈ ℱ has the finite-intersection property, q is surjective <cit.>. I was unable to apply this tower to generalize Theorem <ref> to compact Hausdorff G. However, under certain circumstances, G becomes Lie, so that Theorem <ref> applies. If G is compact, by Lemma <ref>, G has no small subgroups if and only if it is Lie. Notice that, for any prime p, the additive group ℤ_p of p-adic integers is compact Hausdorff with arbitrarily small subgroups. So G being Lie implies it contains no ℤ_p. Let G be a locally compact group. If G admits a faithful action on a connected topological manifold M, then G must be a Lie group. In short, G is Lie if G ⩽ Homeo(M). Hilbert's fifth problem is the case M=G. Partial affirmations exist if one assumes some sort of regularity of the action. Conjecture <ref> is true for G if there exists a C^1-manifold structure on M and the action is by C^1-diffeomorphisms. If G is compact, it is unnecessary to use Theorem <ref>, as the C^1-action becomes C^ω <cit.>. Any C^1-map (continuous first partial derivatives) is locally Lipschitz. Conjecture <ref> is true for G if the action is by Lipschitz homeomorphisms with respect to a riemannian metric on smooth M. Their argument shows the impossibility of G containing ℤ_p for any prime p, by otherwise establishing inequalities with Hausdorff dimension for various metrics. The riemannian hypothesis is only used to obtain equality with covering dimension. Later it was noticed that the Baire-category part of the argument could be done locally in a euclidean chart, averaging a metric on it over a small subgroup of ℤ_p. In this situation, the new lemma is: M and M/ℤ_p have equal covering dimensions. Both short proofs rely on C T Yang's 1960 result: dim_ℤ(M/ℤ_p) = dim_ℤ(M)+2. Conjecture <ref> is true for G if the action is by locally Lipschitz homeomorphisms on the connected topological manifold M. Notice that any self-homeomorphism is Lipschitz if the metric space M is compact. The following consequence is pleasant because it assumes no smoothness on G or the action. Let G be a compact group. Let M be a topological G-manifold. Suppose the action is faithful and that M is compact and connected. Then M has the G-homotopy type of a G-equivariant finite-dimensional countable CW complex. To such (M,G), including Examples <ref>–<ref> or others with C^0 dynamics, one can now apply equivariant homology, with variable coefficients in abelian groups (Bredon–Matumoto <cit.>) or more generally in spectra (Davis–Lück <cit.>). Therefore, inductively calculable invariants of these wild objects are now available. §.§ Acknowledgements Frank Connolly (U Notre Dame) provided early motivation (<ref>). Craig Guilbault (U Wisconsin–Milwaukee) reminded me topological manifolds have handles (<ref>). The following brief but fruitful discussions happened in June 2017, at Wolfgang Lück's 60th birthday conference (U Münster). Wolfgang Steimle (U Augsburg) assured me that strictification of homotopy commutative diagrams is easy for squares (<ref>). Elmar Vogt (Freie U Berlin) excited me further into classical topology (ANRs and Lemma <ref>). Finally, individually Jim Davis (Indiana U), Nigel Higson (Penn State U), Wolfgang Lück (U Bonn), and Shmuel Weinberger (U Chicago) encouraged me toward the Hilbert–Smith Conjecture (<ref>). The referee corrected some exposition and recommended Examples <ref> & <ref>. Chris Connell (Indiana U) was helpful in discussing Hausdorff dimension (<ref>, <ref>).
W. Chen · Department of Mechanical Engineering, University of Maryland, College Park, MD 20742 · [email protected]
M. Fuge · Department of Mechanical Engineering, University of Maryland, College Park, MD 20742
Active Expansion Sampling for Learning Feasible Domains in an Unbounded Input Space[This work was funded through a University of Maryland Minta Martin Grant.]
Wei Chen · Mark Fuge
Received: date / Accepted: date
Many engineering problems require identifying feasible domains under implicit constraints. One example is finding acceptable car body styling designs based on constraints like aesthetics and functionality. Current active-learning based methods learn feasible domains for bounded input spaces. However, we usually lack prior knowledge about how to set those input variable bounds. Bounds that are too small will fail to cover all feasible domains, while bounds that are too large will waste query budget. To avoid this problem, we introduce Active Expansion Sampling (AES), a method that identifies (possibly disconnected) feasible domains over an unbounded input space. AES progressively expands our knowledge of the input space, and uses successive exploitation and exploration stages to switch between learning the decision boundary and searching for new feasible domains. We show that AES has a misclassification loss guarantee within the explored region, independent of the number of iterations or labeled samples. Thus it can be used for real-time prediction of samples' feasibility within the explored region. We evaluate AES on three test examples and compare AES with two adaptive sampling methods, the Neighborhood-Voronoi algorithm and the straddle heuristic, that operate over fixed input variable bounds.
l : Length scale of the Gaussian kernel
x : Input sample
x^* : Optimal query
y : Label of an input sample (feasible/infeasible)
ŷ : Label estimated by the Gaussian Process
X_L : Set of labeled samples
X_U : Set of unlabeled samples (candidates)
d : Input space dimension
f̅ : Predictive mean of the Gaussian Process
V : Predictive variance of the Gaussian Process
h : Actual evaluation function
f : Estimated feasibility function
p_ϵ : ϵ-margin probability
Φ : Cumulative distribution function of the standard Gaussian distribution
L : Misclassification loss
§ INTRODUCTION In applications like design space exploration <cit.> and reliability analysis <cit.>, people need to find feasible domains within which solutions are valid. Sometimes the constraints that define those feasible domains are implicit, i.e., they cannot be represented analytically. Examples of these constraints are aesthetics, functionality, or performance requirements, which are usually evaluated by human assessment, experiments, or time-consuming computer simulations. Thus it is usually expensive to detect the feasibility of a given input. In such cases, one would like to use as few samples as possible while still approximating the feasible domain well. To solve such problems, researchers have used active learning (or adaptive sampling)[Note that in this paper the terms "active learning" and "adaptive sampling" are interchangeable.] to sequentially select the most informative instances and query their feasibility, so that the number of queries can be minimized <cit.>. These methods require fixed bounds over the input space, and only pick queries inside those bounds. But what if we do not know how wide to set those bounds?
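As a concrete, purely hypothetical stand-in for such an expensive feasibility evaluation, the toy oracle below labels points under an implicit constraint whose feasible set consists of two disconnected disks; in a real application this function would wrap an experiment, a simulation, or a human judgment. The disk centers and radii are arbitrary choices for illustration.

```python
import numpy as np

def h(x):
    """Toy feasibility oracle h: R^2 -> {-1, 1}. Returns 1 (feasible) inside
    either of two disconnected disks, and -1 (infeasible) elsewhere."""
    centers = np.array([[0.0, 0.0], [6.0, 6.0]])  # two feasible domains
    dists = np.linalg.norm(centers - np.asarray(x, dtype=float), axis=1)
    return 1 if np.any(dists < 2.0) else -1

print(h([0.5, -0.3]), h([6.2, 5.9]), h([3.0, 3.0]))  # -> 1 1 -1
```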
If we set the bounds too large, an active learner will require an excessively large budget to explore the input space; whereas if we set the bounds too small, we cannot guarantee that an algorithm will recover all the feasible domains <cit.>. In this case, we need an active learning method that can gradually expand our knowledge about the input space until we have either discovered all feasible domains or used up our remaining query budget. This paper proposes a method, which we call Active Expansion Sampling (AES), to solve that problem by casting the detection of feasible domains as an unbounded domain estimation problem. In an unbounded domain estimation problem, given an expensive function h: 𝒳⊆ℝ^d→{-1,1} that evaluates any point x in an unbounded input data space 𝒳, we want to find (possibly disconnected) feasible domains in which h(x)=1. Specifically, h could be costly computation, time-consuming experiments, or human evaluation, so that the problem cannot be solved analytically. By unbounded, we mean that we don't manually bound the input space. Thus the input space can be considered as infinite, and theoretically, if the query budget allows, our method can keep expanding the explored area of the input space. To use as few function evaluations as necessary to identify feasible domains, AES first fully exploits (up to an accuracy threshold) any feasible domains it knows about and then, budget permitting, searches outward to discover other feasible domains. The main contributions of this paper are:
* We introduce the AES method for identifying (possibly disconnected) feasible domains over an unbounded input space.
* We provide a framework that transfers bounded active learning methods into methods that can operate over unbounded input spaces.
* We introduce a dynamic local pool method that efficiently finds near optimal solutions to the global optimization problem (Eq. <ref>) for selecting queries.
* We prove a constant theoretical bound for AES's misclassification error at any iteration inside the explored region.
§ BACKGROUND AND RELATED WORK Essentially, the unbounded domain estimation problem breaks down into two tasks explored by past researchers: 1) the active learning task, where we efficiently query the feasibility of inputs; and 2) the classification task, where we estimate decision boundaries (i.e., boundaries of feasible domains) that separate the feasible class and the infeasible class (i.e., the feasible and infeasible regions). For the first task, we will review relevant past work on active learning. For the second task, we use the Gaussian Process as the classifier in this paper and will introduce basic concepts of Gaussian Processes. §.§ Feasible Domain Identification Past work in design and optimization has proposed ways to identify feasible domains or decision boundaries of expensive functions. Generally those methods were proposed to reduce the number of simulation runs and improve the accuracy of surrogate models in simulation-based design and reliability assessment <cit.>. Also, the problem of feasible domain identification is equivalent to estimating the level set or the threshold boundaries of a function, where the feasible/infeasible region becomes the superlevel/sublevel set <cit.>. Such methods select samples that are expected to best improve the surrogate model's accuracy. A common rule is to sample on the estimated decision boundary, but not close to existing sample points.
Existing methods achieve this by (1) explicitly optimizing or constraining the decision function or the distance between the new sample and the existing samples <cit.>, or (2) selecting points based on the estimated function values and their confidence at candidate points <cit.>. §.§ Active Learning Methods for feasible domain identification usually require strategies that sequentially sample points in an input space, such that the sample size is minimized. These strategies fall under the larger category of active learning. There are three main scenarios of active learning problems: (1) membership query synthesis, (2) stream-based selective sampling, and (3) pool-based sampling <cit.>. In the membership query model, the learner generates samples de novo for labeling. For classification tasks, researchers have typically applied membership query models to learning finite concept classes <cit.> and halfspaces <cit.>. In the stream-based selective sampling model, an algorithm draws each unlabeled sample from an incoming data distribution, and then decides whether or not to query that label. This decision can be based on some informativeness measure of the drawn sample <cit.>, or on whether the drawn sample is inside a region of uncertainty <cit.>. In the pool-based sampling model, there is a small pool of labeled samples and a large (but finite) pool of unlabeled samples, and the learner selects new queries from the unlabeled pool. The unbounded domain estimation problem assumes that synthesizing an unlabeled sample from the input space is not expensive (as in the membership query scenario), since otherwise we would have to use existing samples and the input space would be bounded. An example that satisfies this assumption is experimental design, where we can form an experiment by selecting a set of parameters. With this assumption, our proposed method approximates the pool-based sampling setting by synthesizing a pool of unlabeled samples in each iteration. A pool-based sampling method first trains a classifier using the labeled samples. Then it ranks the unlabeled samples based on their informativeness, as indicated by an acquisition function. A query is then selected from the pool of unlabeled samples according to their rankings. After that, we add the selected query to the set of labeled data and repeat the previous process until our query budget is reached. Many of these methods use informativeness criteria that select queries with the maximum label ambiguity <cit.>, queries contributing the highest estimated expected classification error <cit.>, queries best reducing the version space <cit.>, or queries where different classifiers disagree the most <cit.>. Such methods are usually good at exploitation, since they keep querying points close to the decision boundary, refining our estimate of it. However, when the input space may have multiple regions of interest (i.e., feasible regions), these methods may not work well if the active learner is not aware of all the regions of interest initially. Note that while some of the methods mentioned above also consider representativeness <cit.> or the diversity of queries <cit.>, they don't explicitly explore unknown regions and discover other regions of interest. To address this issue, an active learner also has to allow for exploration (i.e., to query in unexplored regions where no labeled sample has been seen yet). A learner must trade off exploitation and exploration.
To query in an unexplored region, there are methods that (1) take into account the predictive variance at unlabeled samples when selecting new queries <cit.>, (2) naturally balance exploitation/exploration by looking at the expected error <cit.>, or (3) make exploitative and exploratory queries separately using different strategies <cit.>. In previous methods, the exploitation-exploration trade-off was performed in a bounded input space or a fixed sampling pool. However, in the unbounded domain estimation problem, there is no fixed sampling pool and we are usually uncertain about how to set the bounds of the input space for performing active learning. If the bounds are too small, we might miss feasible domains; while if the bounds are too large, the active learner has to query more samples than necessary to achieve the required accuracy. In this paper, we introduce a method of using active learning to expand our knowledge about an unbounded input data space, and discover feasible domains in that space. A naïve solution would be to progressively expand a bounded input space, and apply the existing active learning techniques. However, there are two problems with this naïve solution: (1) it is difficult to explicitly specify when and how fast we expand the input space; and (2) the area we need to evaluate increases over time, increasing the computational cost. Thus existing active learning techniques cannot apply directly to the unbounded domain estimation problem. To the best of our knowledge, <cit.> is the first to deal with the active learning problem over an unbounded input space (i.e., the unbounded domain estimation problem). The AES method proposed in this paper improves upon that previous work (as illustrated in Sect. <ref>). §.§ Gaussian Process Classification (GPC) Gaussian Processes (GP, also called Kriging) are often used as classifiers in active learning <cit.>. Compared to other commonly used classifiers such as Support Vector Machines or Logistic Regression, GPs naturally model probabilistic predictions. This offers us a way to evaluate a sample's informativeness based on its predictive probability distribution. The Gaussian process uses a kernel (covariance) function k(x,x') to measure the similarity between two points x and x'. It encodes the assumption that "similar inputs should have similar outputs". Some commonly used kernels are the Gaussian kernel and the exponential kernel <cit.>. In this paper we use the Gaussian kernel:
k(x,x') = exp(-||x-x'||^2/(2l^2))
where l is the length scale. For binary GP classification, we place a GP prior over the latent function f(x), and then "squash" f(x) through the logistic function to obtain a prior on π(x) = σ(f(x)) = P(y=1|x). In the feasible domain identification setting, we can consider f : 𝒳⊆ℝ^d →ℝ as an estimate of feasibility; thus we call it the estimated feasibility function. Under the Laplace approximation, given the labeled data (X_L, y), the posterior of the latent function f(x) at any x∈ X_U is a Gaussian distribution: f(x)|X_L,y,x∼𝒩(f̅(x), V(x)), with the mean and the variance expressed as
f̅(x) = k(x)^T K^-1 f̂ = k(x)^T ∇log P(y|f̂)
V(x) = k(x,x) - k(x)^T (K+W^-1)^-1 k(x)
where W=-∇∇log P(y|f̂) is a diagonal matrix with non-negative diagonal elements; f is the vector of latent function values at X_L, i.e., f_i=f(x^(i)) where x^(i)∈ X_L; K is the covariance matrix of the training samples, i.e., K_ij=k(x^(i),x^(j)); k(x) is the vector of covariances between x and the training samples, i.e., k_i(x)=k(x,x^(i)); and f̂ = arg max_f P(f|X_L,y).
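As a minimal sketch of how these predictive equations can be evaluated in practice, assuming the Laplace mode f̂, its gradient ∇log P(y|f̂), and the diagonal matrix W have already been computed (e.g., by the Newton iteration described by Rasmussen <cit.>); the names and the small jitter term are our own:

```python
import numpy as np

def gaussian_kernel(A, B, l=1.0):
    """k(x, x') = exp(-||x - x'||^2 / (2 l^2)) for all row pairs of A and B."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * l**2))

def laplace_predict(X_L, grad_log_lik, W, X_U, l=1.0):
    """Predictive mean f_bar and variance V of the latent function at X_U.

    grad_log_lik : gradient of log P(y | f) at the Laplace mode f_hat
    W            : diagonal matrix -grad grad log P(y | f) at f_hat
    """
    K = gaussian_kernel(X_L, X_L, l)
    k_star = gaussian_kernel(X_L, X_U, l)              # n x m
    f_bar = k_star.T @ grad_log_lik                    # k(x)^T grad log P(y|f_hat)
    W_inv = np.linalg.inv(W + 1e-10 * np.eye(len(W)))  # jitter guards near-zero entries
    V = 1.0 - np.sum(k_star * np.linalg.solve(K + W_inv, k_star), axis=0)
    return f_bar, V                                    # k(x,x)=1 for the Gaussian kernel
```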
When using the Gaussian kernel shown in Eq. <ref>, k(x,x)=1. We refer interested readers to the detailed description by Rasmussen <cit.> of the Laplace approximation for the binary GP classifier. The decision boundary corresponds to f̅(x)=0 or π̅(x)=0.5. We predict y=-1 when f̅(x)<0, and y=1 otherwise. § ACTIVE EXPANSION SAMPLING (AES) Algorithm <ref> (the Active Expansion Sampling algorithm) summarizes our proposed method. Overall, the method consists of the following steps:
* Select an initial sample x^(0) to label.
* In each subsequent iteration,
* check the exploitation/exploration status (Sect. <ref>),
* generate a pool of candidate samples X_U based on the exploitation/exploration status and previous queries (Sect. <ref> and <ref>),
* train a GP classifier using the labeled set X_L to evaluate the informativeness of candidate samples in X_U,
* select a sample from X_U based on its informativeness and its distance from c (Sect. <ref>),
* label the new sample and put it into X_L.
* Exit when the query budget is reached.
This AES method improves upon our previous domain expansion method <cit.> in several ways. For example, the previous method generates a pool X_U that expands with the explored region each iteration, so its pool size, and hence the computational cost, increases significantly over time if using a constant sample density. To avoid this problem, this paper proposes a dynamic local pool method (Sect. <ref>). Another major difference is that AES provides a verifiable way to distinguish between exploitation and exploration (Sect. <ref>), while the previous method uses a heuristic based on the labels of the last few queries, which is more likely to make mistakes. In this section and Sect. <ref>, we show comprehensive theoretical analysis and experiments to prove favorable properties of our new method. §.§ ϵ-Margin Probability We train a GP classification model to evaluate the informativeness of candidate samples based on the ϵ-margin probability (Fig. <ref>):
p_ϵ(x) = P(f(x)<-ϵ|x)  if ŷ=1;  P(f(x)>ϵ|x)  if ŷ=-1
       = P(ŷf(x) < -ϵ|x) = Φ( -(|f̅(x)|+ϵ)/√(V(x)) )
where ŷ is the estimated label of x, the margin ϵ>0, and Φ(·) is the cumulative distribution function of the standard Gaussian distribution 𝒩(0,1). The ϵ-margin probability represents the probability of x being misclassified with some degree of certainty (controlled by the margin ϵ). Let the misclassification loss be
L(x) = max{0,-f(x)}  if y=1;  max{0,f(x)}  if y=-1
where y is the true label of x. L(x) measures the deviation of the estimated feasibility function value f(x) from 0 when the class prediction is wrong. Then, based on Eq. <ref> and <ref>, p_ϵ(x)=P(L(x)>ϵ), which is the probability that the expected misclassification loss exceeds ϵ. A high p_ϵ(x) indicates that x is very likely to be misclassified, and requires further evaluation. Thus we use this probability to measure informativeness. §.§ Exploitation and Exploration Since our input space is unbounded, naïvely maximizing the ϵ-margin probability (informativeness) will always query points infinitely far away from previous queries.[A point infinitely far away from previous queries has f̅(x) close to 0 and the maximum V(x), thus the highest p_ϵ(x).] To avoid this issue, one solution is to query informative samples that are close to previously labeled samples. This allows the active learner to progressively expand its knowledge as the queries cover an increasingly large area of the input space.
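Computationally, the ϵ-margin probability is a one-line transformation of the GP predictions. The sketch below is our illustration: the values of ϵ and η are arbitrary, and τ=Φ(-ηϵ) anticipates the threshold introduced later in this section.

```python
import numpy as np
from scipy.stats import norm

def epsilon_margin_prob(f_bar, V, eps):
    """p_eps(x) = Phi( -(|f_bar(x)| + eps) / sqrt(V(x)) )."""
    return norm.cdf(-(np.abs(f_bar) + eps) / np.sqrt(V))

eps, eta = 0.3, 1.5
tau = norm.cdf(-eta * eps)   # informativeness threshold tau = Phi(-eta * eps)
p = epsilon_margin_prob(np.array([0.0, 0.4]), np.array([1.0, 0.6]), eps)
print(p, p >= tau)           # which candidates are still informative
```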
When a new decision boundary is discovered during expansion, we want a query strategy that continues querying points on that decision boundary, such that the new feasible region can be identified as quickly as possible. Therefore, to enable continuous exploitation of the decision boundary, we propose the following query strategy:
min_x∈ X_U V(x)  s.t.  p_ϵ(x) ≥τ
where V(x) is the predictive variance at x, and τ is a threshold on the informativeness measure p_ϵ(x). The solution to Eq. <ref> will lie at the intersection of the estimated decision boundary (f̅(x)=0) and the isocontour of p_ϵ(x)=τ (Point A in Fig. <ref>), if that intersection A exists. In the following proof, we denote f̅_P=f̅(x_P) and V_P=V(x_P). For a sample x_A at the intersection of f̅(x)=0 and p_ϵ(x)=τ, we have f̅_A=0 and p_ϵ(x_A)=Φ(-ϵ/√(V_A))=τ (Point A in Fig. <ref>); and for a sample x_B that is any feasible solution to Eq. <ref>, we have p_ϵ(x_B)=Φ(-(|f̅_B|+ϵ)/√(V_B))≥τ (Point B in Fig. <ref>). Thus we get ϵ/√(V_B)≤ (|f̅_B|+ϵ)/√(V_B)≤ϵ/√(V_A). Therefore, V_A ≤ V_B. The equality holds when |f̅_B|=0 and p_ϵ(x_B)=τ, i.e., x_B is also at the intersection of f̅(x)=0 and p_ϵ(x)=τ. Thus we have proved that the intersection has the minimal predictive variance among feasible solutions to Eq. <ref>, and hence it is the optimal solution. Theorem <ref> indicates that when applying the query strategy shown in Eq. <ref>, the active learner will only query points on the estimated decision boundary[In Sect. <ref>, we assume that the queried point is the exact solution to the query strategy. However, since we approximate the exact solution by using a pool-based sampling setting, the query may deviate from the exact solution slightly.] as long as the estimated decision boundary and the isocontour of p_ϵ(x)=τ intersect. The fact that this intersection exists indicates that there are points on the decision boundary that are informative to some extent (i.e., with p_ϵ(x) ≥τ). We call this stage the exploitation stage: at this stage the active learner exploits the decision boundary. Equation <ref> ensures that the queries are always on the estimated decision boundary until the exploitation stage ends (i.e., there are no longer informative points on the decision boundary). If the estimated decision boundary and the isocontour of p_ϵ(x)=τ do not intersect, then the algorithm has fully exploited any informative points on the estimated decision boundary (i.e., for all the points on the estimated decision boundary, we have p_ϵ(x)<τ). We call this stage the exploration stage, since the active learner starts to search for another decision boundary (Fig. <ref>). In this stage, we want the new query to be both informative and close to where we started, so that it does not deviate too far from the region we already know. Therefore, the query strategy at the exploration stage is
min_x∈ X_U ||x-c||  s.t.  p_ϵ(x) ≥τ
where the objective function is the Euclidean distance between x and a center c. This objective keeps the new query selected by Eq. <ref> close to c. In practice, initially when there are only samples from one class, we set c to the initial point x^(0) to keep new queries close to where we start; once there are both positive and negative samples, we set c to the centroid of the initial positive samples, since we want to keep new queries close to the initial feasible region. Given x^* as the solution to Eq. <ref>, we have p_ϵ(x^*)=τ if p_ϵ(c)<τ. Since p_ϵ(c)<τ, c itself is not a solution of Eq. <ref>. Thus ||x^*-c||>0.
Then we have p_ϵ(x)<τ at any point within the (d-1)-sphere centered at c with radius ||x^*-c||, because otherwise the query would be inside the sphere. Thus on that sphere we have p_ϵ(x) ≤τ. So p_ϵ(x^*) ≤τ, since x^* is on that sphere. Because x^* is a feasible solution to Eq. <ref>, we also have p_ϵ(x^*) ≥τ at x^*. Therefore p_ϵ(x^*)=τ. Theorem <ref> shows that in each iteration, the optimal query x^* selected by Eq. <ref> is on the isocontour of p_ϵ(x)=τ. For both Eq. <ref> and <ref>, the feasible solutions are in the region of p_ϵ(x)≥τ. Intuitively this means that we only query samples with at least some level of informativeness. We call the region where p_ϵ(x)≥τ the unexplored region, since it contains informative samples (feasible solutions) that our query strategy cares about; we call the rest of the input space (p_ϵ(x)≤τ) the explored region (Fig. <ref>). The upper bound of p_ϵ(x) is Φ(-ϵ/√(sup_x V(x))), and it lies infinitely far away from the labeled samples. In Eq. <ref>, K+W^-1 is positive semidefinite, thus k(x)^T(K+W^-1)^-1k(x) ≥ 0 and V(x)≤ k(x,x). For a kernel k(·) with k(x,x)=1 (e.g., the Gaussian or the exponential kernel), we have V(x)≤ 1. Thus p_ϵ(x) ≤Φ(-ϵ). To ensure that Eq. <ref> has a feasible solution, we have to set τ≤Φ(-ϵ). Therefore, we can set τ=Φ(-ηϵ), where η≥ 1.0. Then the constraint in Eq. <ref> and <ref> can be expressed as
Φ( -(|f̅(x)|+ϵ)/√(V(x)) ) ≥Φ(-ηϵ)
which can be written as
ηϵ√(V(x)) - |f̅(x)| ≥ϵ
The left-hand side of Eq. <ref> is identical to the acquisition function of the straddle heuristic when ηϵ=1.96 <cit.>. The straddle heuristic queries the sample with the largest value of the acquisition function. This acquisition function accounts for the ambiguity of samples in terms of their confidence intervals <cit.>:
a(x) = min{-min Q(x), max Q(x)} = 1.96√(V(x)) - |f̅(x)|
where Q(x) is the 95% confidence interval of x. Substituting Eq. <ref> for the constraint in Eq. <ref> and <ref>, and combining the exploitation and exploration stages, our overall query strategy becomes
min_x∈ X_U V(x)^α ||x-c||^1-α  s.t.  ηϵ√(V(x)) - |f̅(x)| ≥ϵ
where the indicator α is 1 at the exploitation stage, and 0 otherwise. Section <ref> introduces how to set α (i.e., when to exploit vs. explore). In general, the unbounded domain estimation problem can be solved using a family of query strategies with the following form:
min_x∈ X_U D(x)  s.t.  I(x) ≥τ
where D(x) is a function that increases as x moves away from the labeled samples, and I(x) is the informativeness measure that is used in any bounded active learning method. Our query strategies of Eq. <ref> and <ref> all have this form. Comparatively, for bounded active learning methods, the query strategies are usually of the form max_x∈ X_U I(x). § DYNAMIC LOCAL POOL GENERATION We cast our problem as pool-based sampling by generating a pool of unlabeled instances de novo in each iteration. A naïve way to generate this pool is to try to sample points anywhere near the p_ϵ(x)=τ isocontour. However, intuitively, as the algorithm searches progressively larger volumes of the input space, the pool volume will likewise expand. This expansion means that the size of the pool will increase dramatically over time (assuming we want a constant sample density). This increase, however, makes the computation of Eq.
<ref> and <ref> expensive during later expansion stages. To bypass this problem, we propose a dynamic local pool method that generates the pool of candidate samples only at a certain location in each iteration, rather than sampling the entire domain.[Sampling methods like random sampling or Poisson-disc sampling <cit.> can be used to generate the pool. We use random sampling here for simplicity. The specific choice of the sampling method within the local pool is not central to the overall method.] The key insight behind our local pooling method is that while the optimal solution to Eq. <ref> can, in principle, occur anywhere on the p_ϵ(x)=τ isocontour, in practice multiple points on the isocontour are equally optimal. All we need to do is sample points around any one of those optima. Below, we derive guarantees for how to sample volumes near one of those optima, thus only needing to sample a small fraction of the total domain volume.

§.§ Scope of an Optimal Query

Let δ be the distance between an optimal query[The optimal query means the exact solution to the AES query strategy shown in Eq. <ref>, <ref>, or <ref>.] and its nearest labeled sample. We have

δ < β l

where β is a coefficient that depends on ϵ, η, and the GP model. We include the proof of Theorem <ref> and the method for computing β in the appendix (Sect. <ref>). Theorem <ref> indicates that if we set the pool boundary by extending the current labeled sample range by β l, then that pool is guaranteed to contain all solutions to Eq. <ref>; that is, extending the overall pool boundary further will not increase the chances of sampling near p_ϵ(x)=τ, and will only decrease the sample density (given a fixed pool size) or increase the number of evaluated samples (given a fixed sample density). However, if we generate the pool based solely on this principle (i.e., extending the current labeled sample range by β l), the pool size will still increase over time as the domain size grows. The next two sections show how, for the exploration and exploitation stages respectively, we can further reduce the sample boundary to a local hyper-sphere.

§.§ Pool for the Exploration Stage

During the exploration stage of Active Expansion Sampling, the distance between an optimal query and its nearest labeled sample is

δ < δ_explore = β l

Theorem <ref> is derived from Eq. <ref>. The nearest labeled sample of the optimal query could be any border point (a sample lying on the periphery of the labeled set). There are multiple local optima that are equally useful for expanding the explored region (Fig. <ref>), so we just sample near one of those optima. Specifically, we approximate the nearest labeled sample by the previous query. With this approximation, incorporating Theorem <ref>, the optimal query will be inside 𝒞(x^(t-1), δ_explore), the (d-1)-sphere with a radius of δ_explore centered at the previous query x^(t-1). Thus during the exploration stage, we set the pool boundary to be that sphere (Fig. <ref>). Sometimes when AES switches from exploitation to exploration, the previous query may not lie on the periphery of the labeled samples. This causes samples around the previous query to have low values of p_ϵ(x). In this case, there might not be a feasible solution to Eq. <ref>. Thus, every time AES switches from exploitation to exploration, we center the pool around the labeled sample farthest from the centroid of the initial positive samples (i.e., argmax_x∈ X_L ‖x-c‖). This ensures that AES generates pool samples near the periphery of the labeled samples.
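To make the query-selection mechanics concrete, the following is a minimal Python sketch of one AES iteration over a local pool. This is our own illustration, not the reference implementation from the repository linked in the experiments section: it assumes the GP posterior mean f̅ and variance V have already been evaluated at the pool points, and the helper names (sample_ball, aes_query) are hypothetical.

```python
import numpy as np

def sample_ball(center, radius, n, rng):
    """Draw n points uniformly from the d-ball around `center`:
    random directions, radii scaled by U^(1/d)."""
    d = center.size
    dirs = rng.normal(size=(n, d))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    radii = radius * rng.uniform(size=(n, 1)) ** (1.0 / d)
    return center + dirs * radii

def aes_query(pool, f_mean, f_var, c, eps, eta, exploit):
    """Select one query: minimize V(x) (exploitation) or ||x - c|| (exploration),
    subject to the informativeness constraint eta*eps*sqrt(V) - |f_mean| >= eps."""
    feasible = eta * eps * np.sqrt(f_var) - np.abs(f_mean) >= eps
    if not feasible.any():
        return None  # no informative candidate; caller switches stage / re-centers pool
    idx = np.where(feasible)[0]
    score = f_var[idx] if exploit else np.linalg.norm(pool[idx] - c, axis=1)
    return pool[idx[np.argmin(score)]]
```

In an exploration step, for instance, one would generate the pool as pool = sample_ball(x_prev, delta_explore, 500, rng), mirroring the 500-sample pools used in the experiments below.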
§.§ Pool for the Exploitation Stage

During the exploitation stage of Active Expansion Sampling, the distance between an optimal query and its nearest labeled sample is

δ < δ_exploit = γ l

where γ is a coefficient that depends on ϵ, η, and the GP model. We include the proof of Theorem <ref> and the method for computing γ in the appendix (Sect. <ref>). Similarly to the exploration stage, based on Theorem <ref>, we define the pool boundary during the exploitation stage as 𝒞(x^(t-1), δ_exploit), a (d-1)-sphere with a radius of δ_exploit centered at the previous query x^(t-1) (Fig. <ref>).

§.§ Choosing when to Exploit versus Explore

Since we use different rules to generate the pool in the exploitation and exploration stages, we need to distinguish between the two stages at the beginning of each iteration. In the exploitation stage, according to Theorem <ref>, the optimal query lies within the (d-1)-sphere 𝒞(x^(t-1), δ_exploit) centered at the previous query, and according to Theorem <ref>, that same query must lie where the estimated decision boundary and the isocontour of p_ϵ(x)=τ intersect. Thus, the decision boundary and the isocontour divide the sphere 𝒞 into four regions (Fig. <ref>):
unexplored negative ℛ_1 = {x|p_ϵ(x)>τ, f̅(x)<0};
unexplored positive ℛ_2 = {x|p_ϵ(x)>τ, f̅(x)>0};
explored negative ℛ_3 = {x|p_ϵ(x)<τ, f̅(x)<0};
and explored positive ℛ_4 = {x|p_ϵ(x)<τ, f̅(x)>0}.
In contrast, during exploration the estimated decision boundary and the p_ϵ(x)=τ isocontour do not intersect, meaning that, unlike in exploitation, only two of the four regions exist (either ℛ_1 & ℛ_3 or ℛ_2 & ℛ_4). In particular, within the unexplored region, f̅(x) will be either all positive or all negative, i.e., ℛ_1 and ℛ_2 cannot exist simultaneously (Fig. <ref>). We use this property to detect exploitation or exploration by generating a pool (a set of uniformly distributed samples) within the boundary 𝒞(x^(t-1), δ_exploit) and checking whether, among the samples with p_ϵ(x)>τ, there are samples with both f̅(x)>0 and f̅(x)<0. If so, AES is in the exploitation stage; otherwise it is in the exploration stage.

§ THEORETICAL ANALYSIS

In this section, we derive a theoretical accuracy bound for AES with respect to its hyperparameters. We further discuss the influence of those hyperparameters on the classification accuracy, the query density, and the exploration speed. The results of this section guide the selection of proper hyperparameters given an accuracy or budget requirement.

§.§ Accuracy Analysis

It is impossible to discuss the estimation accuracy across the entire input space, since the input space is unbounded. However, we can bound the accuracy within the (bounded) explored region at any time step. As mentioned in Sect. <ref>, p_ϵ(x)=P(L(x)>ϵ), where L(x) is the misclassification loss at x defined in Eq. <ref>. Thus within the explored region, we have

P(L(x)≥ϵ) ≤ τ   ∀x∈{x|p_ϵ(x)≤τ}

or

P(L(x)≤ϵ) ≥ 1-τ   ∀x∈{x|p_ϵ(x)≤τ}

This shows that at any location within the explored region of the input space, the proposed method guarantees an upper bound ϵ on the misclassification loss with a probability of at least 1-τ. Since, in the exploration stage, the estimated decision boundary lies inside the p_ϵ(x)≤τ region (as discussed in Sect. <ref>), we have

P(L(x)≤ϵ) ≥ 1-τ   ∀x∈{x|f̅(x)=0}

This means that in the exploration stage, the estimated decision boundary f̅(x)=0 lies in between the isocontours of f(x)=±ϵ with a probability of at least 1-τ, where f is the true latent function. Note that Eq.
<ref> shows that AES's accuracy bound within the explored region is independent of the number of iterations or labeled samples. One advantage of keeping a constant accuracy bound is that the accuracy in the explored region meets our requirements[We can set ϵ and τ such that the accuracy bound is as required. Details about how to set the hyperparameters are in Sect. <ref>.] whenever AES stops. This also means that the estimation within the explored region is reliable at any iteration (although this is not true if one includes the unexplored region). In contrast, bounded active learning methods usually only achieve the required accuracy after a certain number of iterations, before which the estimation may not be reliable. Therefore, AES can be used for real-time prediction of samples' feasibility in the explored region.

§.§ Query Density

In Gaussian processes, given a fixed homoscedastic Gaussian or exponential kernel, we can measure the query density by looking at the predictive variance at queried points. According to Eq. <ref>, V(x) only depends on k(x), which is affected by the distances between x and the other queries. A smaller variance at a query indicates that it is closer to other queries, and hence a higher query density, and vice versa. The predictive variances of optimal queries in the exploitation and exploration stages are

V(x_exploit) = 1/η^2

and

V(x_explore) = (1/η^2)(1+|f̅(x_explore)|/ϵ)^2

where x_exploit and x_explore are optimal queries at the exploitation stage and exploration stage, respectively. The proof of Theorem <ref> is in the appendix (Sect. <ref>). This theorem indicates that the predictive variances of queries at the exploitation stage are always smaller than those at the exploration stage (as |f̅(x_explore)|>0). Thus the query density at the exploitation stage is always higher than that at the exploration stage. The property of having a denser set of points along the decision boundary (queried during the exploitation stage) and a sparser set of points in other regions (queried during the exploration stage) is desirable, because we want to save our query budget for refining the decision boundary rather than other regions of the input space. Equations <ref> and <ref> also reflect the trade-off between accuracy and running time. When the query density near the decision boundary is high (small V(x_exploit) in Eq. <ref>), η is large, thus τ in Eq. <ref> is small, which means our model will have a higher probability of having a misclassification loss less than ϵ. However, as the query density gets higher, we need more queries to cover a given region, and thus the running time increases.

§.§ Influence of Hyperparameters

There are four hyperparameters that control Active Expansion Sampling: the initial point x^(0), ϵ and η in the exploitation/exploration stages, and the length scale l of the GP kernel. The choice of the kernel function and length scale depends on assumptions regarding the nature and smoothness of the underlying feasibility function. Such kernel choices have been covered extensively in prior research and we refer interested readers to <cit.> for multiple methods of choosing l. Note that it is difficult to optimize the length scale at each iteration, since the length scale will eventually be pushed to extremes. In the exploitation stage, for example, once the length scale is smaller than in the previous iteration, the distance between the new query and its nearest query will also be smaller (due to Eq. 12).
Then the maximum marginal likelihood estimation will result in an even smaller length scale, as the estimated function is steeper. This process will repeat and eventually cause the optimal length scale to converge to 0. The initial point x^(0) can be any point not too far away from the boundary of the feasible regions, since otherwise it will take a large budget just to search for a sample from the opposite class. Here we focus on the analysis of the other two hyperparameters: ϵ and η. According to Eq. <ref>, ϵ and τ affect the classification accuracy in a probabilistic way. When τ=Φ(-ηϵ), we have P(L(x)≤ϵ)≥ 1-Φ(-ηϵ) in the explored region. This offers a guideline for setting ϵ and η with respect to a given accuracy requirement. According to Eqs. <ref> and <ref>, η controls the density of queries in both the exploitation and exploration stages. Specifically, as we increase η, V_exploit and V_explore decrease, increasing the query density and essentially placing labeled points closer together. In contrast, ϵ only controls the distances between queries in the exploration stage.[Technically, due to the sampling error introduced when generating the pool, the exploitation stage will be influenced by ϵ (since f̅(x^*) is only ≈ 0). But this effect is negligible compared to ϵ's influence on the exploration stage.] Increasing ϵ decreases V_explore and hence increases the density of queries in the exploration stage. This density of queries affects (1) how fast we can expand the explored region, and (2) how likely we are to capture small feasible regions. When η or ϵ increases, we expand the explored region more slowly, making it more likely that we will discover smaller feasible regions. Likewise, we also slow down the expansion in exploitation stages, making the classifier more likely to capture a sudden change along domain boundaries. Note that when ϵ=0, the constraint p_ϵ(x)≥τ in Eq. <ref> is equivalent to f̅(x)=0, thus theoretically all queries should lie on the estimated decision boundary. In this case, Active Expansion Sampling acts like Uncertainty Sampling <cit.>. In practice, however, AES will be unable to find a feasible solution when ϵ=0, since no candidate sample will lie exactly on the decision boundary under the pool-based sampling setting.

§ EXPERIMENTAL EVALUATION

We evaluate the performance of AES in capturing feasible domains using both synthesized and real-world examples. The performance is measured by the F1 score, which is expressed as

F_1 = 2·(precision·recall)/(precision+recall)

where

precision = (true positives)/(true positives+false positives)

and

recall = (true positives)/(true positives+false negatives)

We compare AES with two conventional bounded adaptive sampling methods: the Neighborhood-Voronoi (NV) algorithm <cit.> and the straddle heuristic <cit.>. We also investigate the effects of noise and dimensionality on AES. We use the same pool size (500 candidate samples[For the NV algorithm, the pool size refers to the test samples generated for the Monte Carlo simulation.]) in all the experiments. In Figs. <ref>-<ref> and <ref>, the F1 scores are averaged over 100 runs. We run all 2-dimensional experiments on a Dell Precision Tower 5810 with 16 GB RAM, a 3.5 GHz Intel Xeon CPU E5-1620 v3 processor, and an Ubuntu 16.04 operating system. We run all higher-dimensional experiments on a Dell Precision Tower 7810 with 32 GB RAM, a 2.4 GHz Intel Xeon CPU E5-2620 v3 processor, and a Red Hat Enterprise Linux Workstation 7.2 operating system.
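As a small, self-contained illustration of the evaluation metric (our own transcription of the definition above, not taken from the paper's code), the F1 score can be computed as follows, assuming labels in {-1, +1} as used for the test functions below.

```python
import numpy as np

def f1_score(y_true, y_pred):
    """F1 = 2*precision*recall / (precision + recall), labels in {-1, +1}."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == -1))
    fn = np.sum((y_pred == -1) & (y_true == 1))
    if tp == 0:
        return 0.0  # convention when there are no true positives
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```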
The Python code needed to reproduce our AES algorithm, our baseline implementations of NV and straddle, and all of the experiments below is available at <https://github.com/IDEALLab/Active-Expansion-Sampling>.

§.§ Effect of Hyperparameters

We first use two 2-dimensional test functions, the Branin function and the Hosaki function, as indicator functions to evaluate whether an input is inside the feasible domain. Both examples construct an input space with multiple disconnected feasible regions, which makes the feasible domain identification task challenging. The Branin function is

g(x) = (x_2 - 5.1/(4π^2)·x_1^2 + (5/π)·x_1 - 6)^2 + 10(1 - 1/(8π))cos x_1 + 10

We define the label y=1 if x∈{x|g(x)≤ 8, -9<x_1<14, -7<x_2<17}, and y=-1 otherwise. The resulting feasible domain consists of three isolated feasible regions (Fig. <ref>). The initial point is x^(0) = (3,3). For the Gaussian process, we use a Gaussian kernel (Eq. <ref>) with a kernel length scale l=0.9. To compute the F1 scores, we generate samples along a 100 × 100 grid as the test set in the region where x_1 ∈ [-13,18] and x_2 ∈ [-8,23]. This section mainly describes the Branin example, as both the Branin and Hosaki examples show similar results; we direct interested readers to the appendix (Sect. <ref>), where we describe the Hosaki example in detail and show its experimental results. For both examples, we use three levels of ϵ (0.1, 0.3, 0.5) and η (1.2, 1.3, 1.4) to demonstrate their effects on AES's performance. Figures <ref> and <ref> show the sequences of queries selected by AES and the two bounded adaptive sampling methods, respectively, applied to the Branin example. For AES, there are three exploitation stages, as there are three disconnected feasible domains. AES starts by querying samples along the initial estimated decision boundary, and then expands its queries outward to discover other feasible regions. In contrast, the straddle heuristic simultaneously explores the whole bounded input space, and refines all three decision boundaries. Fig. <ref> shows the corresponding F1 scores for the experiment in Fig. <ref>. During exploitation stages, AES's F1 score increases non-monotonically, as part of the estimated decision boundary is outside the explored region (where AES is confident in its accuracy); in the exploration stage, the current decision boundaries are inside the explored region and remain unchanged, so the F1 score stabilizes. Figures <ref> and <ref> demonstrate the effects of the hyperparameters ϵ and η, respectively, on AES's performance. Increasing ϵ or η leads to a slower expansion of the explored region and a higher F1 score. This means that using a higher ϵ or η improves the accuracy but requires a larger query budget. In both examples, the F1 score is more sensitive to η than to ϵ.

§.§ Unbounded versus Bounded

We use the NV algorithm and the straddle heuristic as examples of bounded adaptive sampling methods. Because these two methods do not progressively expand the sampled region (as AES does), but rather assume a fixed region, we create a “bounding box” in the input space, and generate queries inside this box. When comparing AES with the bounded methods, we use ϵ=0.3 and η=1.3 for AES. In each experiment, we change the size of the input space bounds to evaluate the effect of the bound size on these methods. Specifically, we simulate the cases where we set tight, loose, and insufficient bounds, as shown in Tab. <ref> and Fig. <ref>.
“Tight” means the bounds cover the entire feasible domain while being no larger than needed (in practice we use bounds slightly larger than this to ensure the feasible domain boundary is inside the tight bounds); “loose” means the bounds cover the entire feasible domain but are larger than the tight bounds; “insufficient” means the variable bounds do not cover the entire feasible domain. As shown in Fig. <ref>, the NV algorithm improves its accuracy quickly at early stages, and slows down after some iterations. The F1 score of NV is almost monotonically increasing, while AES's score fluctuates because it focuses first on refining the domains it knows about during exploitation (at the expense of accuracy on domains it has not seen yet). This causes AES to have a lower F1 score early on. When the input variable bounds are set properly, both AES and NV achieve similar final F1 scores. However, NV requires more iterations to achieve a final accuracy similar to AES's, especially when the bounds are set too large, in which case NV exhausts its query budget exploring unknown regions. When the bounds are set too small to cover certain feasible regions, NV stops improving the F1 score once it begins to over-sample the space, and is unable to reach an accuracy similar to AES's. Note that in this case, we purposefully set the bounds such that they cover the vast majority of the feasible region, leaving only a small feasible area outside of those bounds. Our explicit purpose here is to demonstrate how sensitive such bounded heuristics can be when their bounds are misspecified (even by small amounts). The performance of bounded methods degrades rapidly as their bound sizes decrease further. Although AES shows slow accuracy improvement over the entire test region, it keeps a constant accuracy bound within the explored region (as discussed in Sect. <ref>). Fig. <ref> shows the F1 scores within the p_ϵ(x)<τ region, which is AES's explored region. Specifically, we set ϵ=0.3, η=1.3, and τ=Φ(-ηϵ). For the NV algorithm, we use the tight input space bounds from the previous experiments. When considering only the explored region, AES's F1 scores are quite stable throughout the sampling sequence, while NV's F1 scores are low at the beginning, and then increase until they stabilize.[This difference arises because NV's explored region covers more area than AES's at the beginning.] Since AES's accuracy inside the explored region is invariant to the number of iterations, it can be used for real-time prediction of samples' feasibility in the explored region. Table <ref> shows the final F1 scores and wall-clock running times of AES, NV, and the straddle heuristic. Note that the confidence intervals for NV's averaged F1 scores are much larger than those for AES. This is because during some runs NV fails to discover all three feasible regions (see Fig. <ref> for an example).

§.§ Effect of Noise

Label noise is usually inevitable in active learning tasks. The noise comes from, for example, simulation/experimental error or human annotators' mistakes. We test the cases where the labels are under (1) uniform noise and (2) Gaussian noise centered at the decision boundary. We simulate the first case by randomly flipping the labels. The noisy label is set as y' = (-1)^λ y, where λ∼Bernoulli(p), p is the parameter of the Bernoulli distribution that indicates the noise level, and y is the true label. The second case is probably more common in practice, since it is usually harder to decide the labels near the decision boundary.
To simulate this case, we add Gaussian noise to the test functions: g'(x)=g(x)+e, where g(x) is the Branin or Hosaki function, and e ∼ s·𝒩(0, 1). In each case we compare the performance of AES (ϵ=0.3, η=1.3) and NV (with tight bounds) under two noise levels. As expected, adding noise to the labels decreases the accuracy of both methods (Figs. <ref> and <ref>). However, in both cases (Bernoulli noise and Gaussian noise), the noise appears to influence NV more than AES. As shown in Fig. <ref>, when noise is added to the labels, NV has high error mostly along the input space boundaries, where it cannot query samples outside the bounds to further investigate those apparent feasible regions. In contrast, AES tries to exploit those rogue points to find new feasible regions, realizing after a few new samples that they are noise.

§.§ Effect of Dimensionality

To test the effects of dimensionality on AES's performance, we apply both AES and NV to higher-dimensional examples where the feasible domains are inside two (d-1)-spheres of radius 1, centered at a and b respectively. Here a=0 and b=(3,0,...,0). Fig. <ref> shows the input space of the 3-dimensional double-sphere example. The initial point is x^(0) = 0. For the Gaussian process, we use a Gaussian kernel with a length scale of 0.5. We set ϵ=0.3 and η=1.3. To compute the F1 scores, we randomly generate 10,000 samples uniformly within the region where x_1 ∈ [-2,5] and x_k ∈ [-2,2], k=2,...,d. The input space bounds for the NV algorithm are x_1 ∈ [-1.5,4.5] and x_k ∈ [-1.5,1.5], k=2,...,d. We record the F1 scores and running times after querying 1,000 points. As shown in Fig. <ref>, both AES and NV show an accuracy drop and a running-time increase as the problem's dimensionality increases. This is expected since, owing to the curse of dimensionality <cit.>, the number of queries needed to achieve the same accuracy increases with the input space dimensionality. The curse of dimensionality is inevitable in machine learning problems. However, since AES explores the input space only when necessary (i.e., only after it has seen the entire decision boundary of the discovered feasible domain), its queries do not need to fill up the large volume of the high-dimensional space. Therefore, AES's accuracy drop with problem dimensionality is not as severe as that of bounded methods like NV. For particularly high-dimensional design problems, another complementary approach is to construct explicit lower-dimensional design manifolds upon which to run AES <cit.>.

§.§ Nowacki Beam Example

To test AES's performance in a real-world scenario, we consider the Nowacki beam problem <cit.>. The original Nowacki beam problem is a design optimization problem in which we minimize the cross-sectional area A of a cantilever beam of length l with a concentrated load F at its end. The design variables are the beam's breadth b and height h. We turn this problem into a feasible domain identification problem by replacing the objective with a constraint A = bh ≤ 0.0025 m^2. The other constraints are (1) the maximum tip deflection δ = Fl^3/(3EI_Y) ≤ 5 mm, (2) the maximum bending stress σ_B = 6Fl/(bh^2) ≤ σ_Y, (3) the maximum shear stress τ = 1.5F/(bh) ≤ σ_Y/2, (4) the aspect ratio h/b ≤ 10, and (5) the failure force of buckling F_crit = (4/l^2)√((G I_T)(E I_Z)/(1-ν^2)) ≥ fF, where I_Y=bh^3/12, I_Z=b^3h/12, I_T=I_Y+I_Z, and f is the safety factor. Here σ_Y, E, ν, and G are the yield stress, Young's modulus, Poisson's ratio, and shear modulus of the beam's material, respectively.
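For concreteness, below is a sketch of the resulting feasibility indicator (our own transcription of the constraints above, not the authors' code), using the parameter settings quoted in the next paragraph; it returns the label y ∈ {+1, -1}.

```python
import numpy as np

# Settings quoted in the text (SI units): length, load, safety factor,
# yield stress, Young's modulus, Poisson's ratio, shear modulus.
L, F, SF = 0.5, 5e3, 2.0
SIG_Y, E, NU, G = 240e6, 216.62e9, 0.27, 86.65e9

def nowacki_label(b, h):
    """+1 if the design (breadth b, height h) satisfies all six constraints."""
    I_y, I_z = b * h**3 / 12, b**3 * h / 12
    I_t = I_y + I_z
    ok = (
        b * h <= 0.0025                                   # cross-section area
        and F * L**3 / (3 * E * I_y) <= 0.005             # tip deflection <= 5 mm
        and 6 * F * L / (b * h**2) <= SIG_Y               # bending stress
        and 1.5 * F / (b * h) <= SIG_Y / 2                # shear stress
        and h / b <= 10                                   # aspect ratio
        and (4 / L**2) * np.sqrt(G * I_t * E * I_z / (1 - NU**2)) >= SF * F  # buckling
    )
    return 1 if ok else -1
```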
We use the settings from <cit.>, where l=0.5 m, F=5 kN, f=2, σ_Y=240 MPa, E=216.62 GPa, ν=0.27, and G=86.65 GPa. As shown in Fig. <ref>, the feasible domain is a crescent-shaped region. Given only these constraints, it is unclear what appropriately tight bounds on the design variables should be. In this experiment, we set the Gaussian kernel's length scale to 0.005, with ϵ=0.3 and η=1.3. The initial point is x^(0)=(b_0,h_0)=(0.05,0.05). The test samples are generated along a 100 × 100 grid in the region where b ∈ [0,0.02] and h ∈ [0.1,0.16]. After 242 iterations, the F1 score of AES reaches 0.933 and remains constant. Note that most of the estimation error comes from the two sharp ends of the crescent-shaped feasible region (Fig. <ref>). This is because the kernel's assumption of function smoothness (i.e., that similar inputs should have similar outputs) causes the GP to perform poorly where the labels shift frequently. A similar problem also exists when using other classifiers like SVMs, where a kernel is also used to enforce similar outputs for similar inputs. This problem can be alleviated by using a smaller kernel length scale.

§ CONCLUSION

We presented a pool-based sampling method, AES, for identifying (possibly disconnected) feasible domains over an unbounded input space. Unlike conventional methods that sample inside a fixed boundary, AES progressively expands our knowledge of the input space under an accuracy guarantee. We showed that AES uses successive exploitation and exploration stages to switch between learning the decision boundary and searching for new feasible domains. To avoid increasing the pool size, and hence the computational cost, as the explored area grows, we proposed a dynamic local pool generation method that samples the pool locally at a certain location in each iteration. We showed that at any point within the explored region, AES guarantees an upper bound ϵ on the misclassification loss with a probability of at least 1-τ, regardless of the number of iterations or labeled samples. This means that AES can be used for real-time prediction of samples' feasibility inside the explored region. We also demonstrated that, compared to existing methods, AES can achieve comparable or higher accuracy without needing to set exact bounds on the input space. Note that AES cannot be applied to input spaces where synthesizing a useful sample is difficult. For example, in an image classification task, we cannot directly synthesize an image by arbitrarily setting its pixels, since most of the synthesized images may be unrealistic and hence useless. Usually in such cases, we use real-world samples as the pool and apply bounded active learning methods (since we know the bounds of real-world samples). Or instead, we first embed the original inputs into a lower-dimensional space, such that given the low-dimensional representation, we can synthesize realistic samples. We can then apply AES on that embedded space. This approach can be used for discovering novel feasible domains (i.e., finding feasible inputs that are nonexistent in the real world). We refer interested readers to a detailed introduction to this approach by <cit.>. One limitation of AES is that its accuracy improves slowly at the early stage compared to bounded active learning methods. This is because AES focuses only on the explored region (which is small at the beginning), while bounded active learning methods usually do space-filling at first.
In situations where we want fast accuracy improvement at the beginning, one possible way of tackling this problem is to set AES's hyperparameters dynamically. Specifically, since the expansion speed increases as ϵ or η decreases, we can accelerate AES's accuracy improvement at earlier stages by setting small values of ϵ and η, so that queries quickly fill up a larger region. Then, to achieve high final accuracy, we can increase ϵ and η to meet the accuracy requirement.

Appendix A: Theorem Proofs

§ PROOF OF THEOREM <REF>

According to Eq. <ref>, given an optimal query x^*, we have f̅(x^*) = k(x^*)^T ∇log p(y|f̂), where the gradient is the vector

∇log p(y|f̂) = (y_1𝒩(f_1)/Φ(y_1f_1), …, y_t-1𝒩(f_t-1)/Φ(y_t-1f_t-1))^T

Hence

|f̅(x^*)| = |∑_i=1^t-1 k(x^*,x^(i)) y_i𝒩(f_i)/Φ(y_if_i)| ≤ ∑_i=1^t-1 |k(x^*,x^(i)) y_i𝒩(f_i)/Φ(y_if_i)| < ∑_i=1^t-1 k_m sign(y_i)y_i𝒩(f_i)/Φ(y_if_i) = k_m sign(y)^T ∇log p(y|f̂) = k_m μ

where

k_m = max_x^(i)∈ X_L k(x^*,x^(i)) = exp(-min_x^(i)∈ X_L ‖x^*-x^(i)‖^2/(2l^2)) = e^-δ^2/(2l^2)

and

μ = sign(y)^T ∇log p(y|f̂)

Similarly,

V(x^*) = 1-k(x^*)^T(K+W^-1)^-1k(x^*) > 1-(k_m1)^T (K+W^-1)^-1 (k_m1) = 1-k_m^2 1^T(K+W^-1)^-11 = 1-k_m^2 ν

where

ν = 1^T(K+W^-1)^-11

Therefore, for the optimal query x^* we have

p_ϵ(x^*) = Φ(-(|f̅(x^*)|+ϵ)/√(V(x^*))) > Φ(-(k_m μ+ϵ)/√(1-k_m^2 ν))

Both Theorems <ref> and <ref> state that p_ϵ(x^*)=τ, thus

Φ(-(k_m μ+ϵ)/√(1-k_m^2 ν)) < τ

When τ=Φ(-ηϵ), we have

(k_m μ+ϵ)/√(1-k_m^2 ν) > ηϵ

Plugging Eq. <ref> into Eq. <ref> and solving for the distance δ, we get

δ < β l

where

β = √( 2 log[ (μ^2+η^2ϵ^2ν) / (ηϵ√(μ^2+(η^2-1)ϵ^2ν) - ϵμ) ] )

§ PROOF OF THEOREM <REF>

Theorem <ref> states that the optimal query in the exploitation stage lies at the intersection of f̅(x)=0 and p_ϵ(x)=τ. By substituting Φ(-ηϵ) for τ, we have

V(x^*) = 1/η^2

According to Eq. <ref>, we have V(x^*) > 1-k_m^2ν. Combining Eqs. <ref>, <ref>, and <ref>, we get

δ < δ_exploit = γ l

where

γ = √( log[ η^2ν/(η^2-1) ] )

§ PROOF OF THEOREM <REF>

According to Eq. <ref>, the predictive variance of an optimal query x_exploit in the exploitation stage is

V(x_exploit) = 1/η^2

In the exploration stage, we have p_ϵ(x_explore)=τ at the optimal query x_explore (Theorem <ref>). Applying Eq. <ref> and setting τ = Φ(-ηϵ), we have

V(x_explore) = (1/η^2)(1+|f̅(x_explore)|/ϵ)^2

Appendix B: Additional Experimental Results

§ HOSAKI EXAMPLE

We use the Hosaki example as an additional 2-dimensional example to demonstrate the performance of our proposed method. In contrast to the Branin example, the Hosaki example has feasible domains of different scales: its feasible domain consists of two isolated feasible regions, a large “island” and a small one (Fig. <ref>). The Hosaki function is

g(x) = (1-8x_1+7x_1^2-(7/3)x_1^3+(1/4)x_1^4)·x_2^2·e^-x_2

We define the label y=1 if x∈{x|g(x)≤ -1, 0<x_1,x_2<5}, and y=-1 otherwise. For AES, we set the initial point x^(0) = (3,3). We use a Gaussian kernel with a length scale l=0.4. The test set used to compute the F1 scores is generated along a 100 × 100 grid in the region where x_1 ∈ [-3,9] and x_2 ∈ [-3.5,8.5]. For NV and straddle, the input space bounds are shown in Tab. <ref>. Table <ref> shows the final F1 scores and running times of AES, NV, and the straddle heuristic. Fig. <ref> shows the F1 scores and queries under different ϵ and η. Fig. <ref> compares the performance of AES and NV with different boundary sizes. Fig. <ref> shows the performance of AES and NV under Bernoulli and Gaussian noise.

§ RESULTS OF STRADDLE HEURISTIC

In this section we list the experimental results related to the straddle heuristic.
Specifically, Fig. <ref> shows straddle's F1 scores and queries using different sizes of input variable bounds, as well as a comparison with AES. Fig. <ref> shows a comparison of AES and straddle under noisy labels.
0000-0002-2759-633X University Grenoble Alpes, CNRS, Grenoble INP, Institut Néel, 38000 Grenoble, France
0000-0002-0395-6791 University Grenoble Alpes, CNRS, Grenoble INP, Institut Néel, 38000 Grenoble, France
0000-0002-6547-6005 Centre for Engineered Quantum Systems, School of Mathematics and Physics, The University of Queensland, St Lucia, QLD 4072, Australia
0000-0001-9460-825X University Grenoble Alpes, CNRS, Grenoble INP, Institut Néel, 38000 Grenoble, France

Genuinely Multipartite Noncausality
Cyril Branciard
December 7, 2017
===================================

The study of correlations with no definite causal order has revealed a rich structure emerging when more than two parties are involved. This motivates the consideration of multipartite “noncausal” correlations that cannot be realised even if noncausal resources are made available to a smaller number of parties. Here we formalise this notion: genuinely N-partite noncausal correlations are those that cannot be produced by grouping N parties into two or more subsets, where a causal order between the subsets exists. We prove that such correlations can be characterised as lying outside a polytope, whose vertices correspond to deterministic strategies and whose facets define what we call “2-causal” inequalities. We show that genuinely multipartite noncausal correlations arise within the process matrix formalism, where quantum mechanics holds locally but no global causal structure is assumed, although for some inequalities no violation was found. We further introduce two refined definitions that allow one to quantify, in different ways, to what extent noncausal correlations correspond to a genuinely multipartite resource.

§ INTRODUCTION

Understanding the correlations between events, or between the parties that observe them, is a central objective in science. In order to provide an explanation for a given correlation, one typically refers to the notion of causality and embeds events (or parties) into a causal structure that defines a causal order between them <cit.>. Correlations that can be explained in such a way, i.e., that can be established according to a definite causal order, are said to be causal <cit.>. The study of causal correlations has gained a lot of interest recently as a result of the realisation that more general frameworks can actually be considered, where the causal assumptions are weakened and in which noncausal correlations can be obtained <cit.>. Investigations of causal versus noncausal correlations first focused on the simplest bipartite case <cit.>, and were soon extended to multipartite scenarios, where a much richer situation is found <cit.>—this opens, for instance, the possibility for causal correlations to be established following a dynamical causal order, where the causal order between events may depend on events occurring beforehand <cit.>.
When analysing noncausal correlations in a multipartite setting, however, a natural question arises: is the noncausality of these correlations a truly multipartite phenomenon, or can it be reduced to a simpler one that involves fewer parties? The goal of this paper is precisely to address this question, and provide criteria to judge whether one really deals with genuinely multipartite noncausality or not. To make things more precise, let us start with the case of two parties, A and B. Each party receives an input x, y, and returns an output a, b, respectively. The correlations shared by A and B are described by the conditional probability distribution P(a,b|x,y). If the two parties' events (returning an output upon receiving an input) are embedded into a fixed causal structure, then one could have that A causally precedes B—a situation that we shall denote by A ≺ B, and where B's output may depend on A's input but not vice versa: P(a|x,y) = P(a|x)—or that B causally precedes A—B ≺ A, where P(b|x,y) = P(b|y). (It can also be that the correlation is not due to a direct causal relation between A and B, but to some latent common cause; such a situation is however still compatible with an explanation in terms of A ≺ B or B ≺ A, and is therefore encompassed in the previous two cases.) A causal correlation is defined as one that is compatible with either A ≺ B or B ≺ A, or with a convex mixture thereof, which would describe a situation where the party that comes first is selected probabilistically in each run of the experiment <cit.>. Adding a third party C with input z and output c, and taking into account the possibility of a dynamical causal order, a tripartite causal correlation is defined as one that is compatible with one party acting first—which one it is may again be chosen probabilistically—and such that whatever happens with that first party, the reduced bipartite correlation shared by the other two parties, conditioned on the input and output of the first party, is causal (see Definition <ref> below for a more formal definition, and its recursive generalisation to N parties) <cit.>. In contrast, a noncausal tripartite correlation P(a,b,c|x,y,z) cannot for instance be decomposed as

P(a,b,c|x,y,z) = P(a|x)P_x,a(b,c|y,z)

with bipartite correlations P_x,a(b,c|y,z) that are causal for each x,a. Nevertheless, such a decomposition may still be possible for a tripartite noncausal correlation if one does not demand that (all) the bipartite correlations P_x,a(b,c|y,z) are causal. Without this constraint, the correlation (<ref>) is thus compatible with the “coarse-grained” causal order A ≺{B,C}, if B and C are grouped together to define a new “effective party” and act “as one”. This illustrates that although a multipartite correlation may be noncausal, there might still exist some definite causal order between certain subsets of parties; the intuition that motivates our work is that such a correlation would therefore not display genuinely multipartite noncausality. This paper is organised as follows. In Sec. <ref>, we introduce the notion of genuinely N-partite noncausal correlations in opposition to what we call 2-causal correlations, which can be established whenever two separate groups of parties can be causally ordered; we furthermore show how such correlations can be characterised via so-called 2-causal inequalities. In Sec.
<ref>, as an illustration we analyse in detail the simplest nontrivial tripartite scenario where these concepts make sense; we present explicit 2-causal inequalities for that scenario, investigate their violations in the process matrix framework of Ref. <cit.>, and generalise some of them to N-partite inequalities. In Sec. <ref>, we propose two possible generalisations of the notion of 2-causal correlations, which we call M-causal and size-S-causal correlations, respectively. This allows one to refine the analysis, and provides two different hierarchies of criteria that quantify the extent to which the noncausality of a correlation is a genuinely multipartite phenomenon.

§ GENUINELY N-PARTITE NONCAUSAL CORRELATIONS

The general multipartite scenario that we consider in this paper, and the notations we use, are the same as in Ref. <cit.>. A finite number N ≥ 1 of parties A_k each receive an input x_k from some finite set (which can in principle be different for each party) and generate an output a_k that also belongs to some finite set (and which may also differ for each input). The vectors of inputs and outputs are denoted by x⃗ = (x_1, …, x_N) and a⃗ = (a_1, …, a_N). The correlations between the N parties are given by the conditional probability distribution P(a⃗|x⃗). For some (nonempty) subset 𝒦 = {k_1, …, k_|𝒦|} of {1, …, N}, we denote by x⃗_𝒦 = (x_k_1, …, x_k_|𝒦|) and a⃗_𝒦 = (a_k_1, …, a_k_|𝒦|) the vectors of inputs and outputs of the parties in 𝒦; with this notation, x⃗_𝒩\𝒦 and a⃗_𝒩\𝒦 (or simply x⃗_𝒩\k and a⃗_𝒩\k for a singleton 𝒦 = {k}) denote the vectors of inputs and outputs of all parties that are not in 𝒦. For simplicity we will identify the parties' names with their labels, so that 𝒩 = {1, …, N} ≡ {A_1, …, A_N}, and similarly for any subset 𝒦.

§.§ Definitions

The assumption that the parties in such a scenario are embedded into a well-defined causal structure restricts the correlations that they can establish. In Refs. <cit.>, the most general correlations that are compatible with a definite causal order between the parties were studied and characterised. Such correlations include those compatible with causal orders that are probabilistic or dynamical—that is, where the operations of parties in the past can determine the causal order of parties in the future. These so-called causal correlations—which, for clarity, we shall often call fully causal here—can be defined iteratively in the following way:

* For N=1, any valid probability distribution P(a_1|x_1) is (fully) causal;
* For N ≥ 2, an N-partite correlation is (fully) causal if and only if it can be decomposed in the form

P(a⃗|x⃗) = ∑_k∈𝒩 q_k P_k(a_k|x_k) P_k,x_k,a_k(a⃗_𝒩\k|x⃗_𝒩\k)

with q_k ≥ 0 for each k, ∑_k q_k = 1, where (for each k) P_k(a_k|x_k) is a single-party probability distribution and (for each k, x_k, a_k) P_k,x_k,a_k(a⃗_𝒩\k|x⃗_𝒩\k) is a (fully) causal (N-1)-partite correlation.

As the tripartite example in the introduction shows, there can be situations in which no overall causal order exists, but where there still is a (“coarse-grained”) causal order between certain subsets of parties, obtained by grouping certain parties together. The correlations that can be established in such situations are more general than causal correlations, but nevertheless restricted due to the existence of this partial causal ordering.
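As an aside, the basic compatibility condition underlying such decompositions is easy to test numerically. The sketch below (our own illustration, not part of the original paper) checks the necessary condition for party k to act first in a term of the decomposition above, namely that the marginal P(a_k|x⃗) is independent of the other parties' inputs; a full membership test for causal correlations would additionally require optimising over decompositions, e.g., by linear programming.

```python
import itertools

def can_act_first(P, k, inputs, outputs):
    """P: dict mapping (x tuple, a tuple) -> probability, normalised for each x.
    inputs/outputs: per-party alphabets, e.g. [(0, 1)] * N.
    Returns True iff P(a_k | x) does not depend on the inputs of parties != k."""
    N = len(inputs)
    others = [j for j in range(N) if j != k]
    for x_k in inputs[k]:
        for a_k in outputs[k]:
            marginals = set()
            for x_rest in itertools.product(*(inputs[j] for j in others)):
                x = [None] * N
                x[k] = x_k
                for j, v in zip(others, x_rest):
                    x[j] = v
                p = sum(P[(tuple(x), a)]
                        for a in itertools.product(*outputs) if a[k] == a_k)
                marginals.add(round(p, 9))  # tolerate floating-point noise
            if len(marginals) > 1:
                return False
    return True
```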
If we want to identify the idea of noncausality as a genuinely N-partite phenomenon, we should, however, exclude such correlations, and characterise correlations for which no subset of parties can have a definite causal relation to any other subset. This idea was already suggested in Ref. <cit.>; here we define the concept precisely. Note that if several different nonempty subsets do have definite causal relations to each other, then clearly there will be two subsets having a definite causal relation between them—one can consider the subset that comes first and group the remaining subsets together into the complementary subset, which then comes second. We shall for now consider partitions of 𝒩 into just two (nonempty) subsets 𝒦 and 𝒩\𝒦, and we thus introduce the following definition: An N-partite correlation (for N ≥ 2) is said to be 2-causal if and only if it can be decomposed in the form

P(a⃗|x⃗) = ∑_∅⊊𝒦⊊𝒩 q_𝒦 P_𝒦(a⃗_𝒦|x⃗_𝒦) P_𝒦,x⃗_𝒦,a⃗_𝒦(a⃗_𝒩\𝒦|x⃗_𝒩\𝒦)

where the sum runs over all nonempty strict subsets 𝒦 of 𝒩, with q_𝒦 ≥ 0 for each 𝒦, ∑_𝒦 q_𝒦 = 1, and where (for each 𝒦) P_𝒦(a⃗_𝒦|x⃗_𝒦) is a valid probability distribution for the parties in 𝒦 and (for each 𝒦, x⃗_𝒦, a⃗_𝒦) P_𝒦,x⃗_𝒦,a⃗_𝒦(a⃗_𝒩\𝒦|x⃗_𝒩\𝒦) is a valid probability distribution for the remaining N-|𝒦| parties. For N=2, the above definition reduces to the standard definition of bipartite causal correlations <cit.>, which is equivalent to Definition <ref> above. In the general multipartite case, it can be understood in the following way: each individual summand P_𝒦(a⃗_𝒦|x⃗_𝒦) P_𝒦,x⃗_𝒦,a⃗_𝒦(a⃗_𝒩\𝒦|x⃗_𝒩\𝒦) for each bipartition {𝒦, 𝒩\𝒦} describes correlations compatible with all the parties in 𝒦 acting before all the parties in 𝒩\𝒦, since the choice of inputs for the parties in 𝒩\𝒦 does not affect the outputs of the parties in 𝒦. The convex combination in Eq. (<ref>) then takes into account the possibility that the subset acting first can be chosen randomly.[One can easily see that it is indeed sufficient to consider just one term per bipartition {𝒦, 𝒩\𝒦} in the sum (<ref>). That is, for some given 𝒦, some correlations P'(a⃗|x⃗) = P'_𝒦(a⃗_𝒦|x⃗_𝒦) P'_𝒦,x⃗_𝒦,a⃗_𝒦(a⃗_𝒩\𝒦|x⃗_𝒩\𝒦) and P''(a⃗|x⃗) = P''_𝒦(a⃗_𝒦|x⃗_𝒦) P''_𝒦,x⃗_𝒦,a⃗_𝒦(a⃗_𝒩\𝒦|x⃗_𝒩\𝒦), and some weights q', q'' ≥ 0 with q' + q'' = 1, the convex mixture P(a⃗|x⃗) = q' P'(a⃗|x⃗) + q'' P''(a⃗|x⃗) is also of the same form P(a⃗|x⃗) = P_𝒦(a⃗_𝒦|x⃗_𝒦) P_𝒦,x⃗_𝒦,a⃗_𝒦(a⃗_𝒩\𝒦|x⃗_𝒩\𝒦) (with P_𝒦(a⃗_𝒦|x⃗_𝒦) = q' P'_𝒦(a⃗_𝒦|x⃗_𝒦) + q'' P''_𝒦(a⃗_𝒦|x⃗_𝒦) and P_𝒦,x⃗_𝒦,a⃗_𝒦 = P(a⃗|x⃗) / P_𝒦(a⃗_𝒦|x⃗_𝒦)). This already implies, in particular, that 2-causal correlations form a convex set.] For correlations that are not 2-causal, we introduce the following terminology: An N-partite correlation that is not 2-causal is said to be genuinely N-partite noncausal. Thus, genuinely N-partite noncausal correlations are those for which it is impossible to find any definite causal relation between any two (complementary) subsets of parties, even when taking into consideration the possibility that the subset acting first may be chosen probabilistically.

§.§ Characterisation of the set of 2-causal correlations as a convex polytope

As shown in Ref.
<cit.> for the general N-partite case, any fully causal correlation can be written as a convex combination of deterministic fully causal correlations.As the number of such deterministic fully causal correlations is finite (for finite alphabets of inputs and outputs), they correspond to the extremal points of a convex polytope—the (fully) causal polytope.The facets of this polytope are given by linear inequalities, which define so-called (fully) causal inequalities.As it turns out, the set of 2-causal correlations can be characterised as a convex polytope in the same way:The set of 2-causal correlations forms a convex polytope, whose (finitely many) extremal points correspond to deterministic 2-causal correlations.For a given nonempty strict subsetof , P_(a⃗_|x⃗_) P_,x⃗_,a⃗_(a⃗_\|x⃗_\) defines an “effectively bipartite” correlation, that is, a bipartite correlation between an effective partywith input x⃗_ and output a⃗_ and an effective party \ with input x⃗_\ and output a⃗_\, which are formed by grouping together all parties in the respective subsets.That effectively bipartite correlation is compatible with the causal order [The notation _1 ≺_2 (or simply A_k_1≺ A_k_2 for singletons _j = {A_k_j}), already used in the introduction, formally means that the correlation under consideration satisfies P(a⃗__1|x⃗) = P(a⃗__1|x⃗_\_2). It will also be extended to more subsets, with _1 ≺_2 ≺⋯≺_m meaning that P(a⃗__1∪⋯∪_j|x⃗) = P(a⃗__1∪⋯∪_j|x⃗_\ (_j+1∪⋯∪_m)) for all j = 1, …, m-1.]≺\.As mentioned above, the set of such correlations forms a convex polytope whose extremal points are deterministic, effectively bipartite causal correlations <cit.>—which, according to Definition <ref>, define deterministic 2-causal N-partite correlations.Eq. (<ref>) then implies that the set of 2-causal correlations is the convex hull of all such polytopes for each nonempty strict subsetof ; it is thus itself a convex polytope, whose extremal points are indeed deterministic 2-causal correlations. As any fully causal correlation is 2-causal, but not vice versa, the fully causal polytope is a strict subset of what we shall call the 2-causal polytope (see Fig. <ref>).Every vertex of the 2-causal polytope corresponds to a deterministic function α⃗ that assigns a list of outputs a⃗ = α⃗(x⃗) to the list of inputs x⃗, such that the corresponding probability distribution P_α⃗^det(a⃗|x⃗) = δ_a⃗, α⃗(x⃗) is 2-causal, and thus satisfies Eq. (<ref>). Since P_α⃗^det(a⃗|x⃗) can only take values 0 or 1, there is only one term in the sum in Eq. (<ref>), and it can be written such that there is a single (nonempty) strict subsetthat acts first.That is, α⃗ is such that the outputs a⃗_ of the parties inare determined exclusively by their inputs x⃗_, while the outputs a⃗_\ of the remaining parties are determined by all inputs x⃗.The facets of the 2-causal polytope are linear inequalities that are satisfied by all 2-causal correlations; we shall call these 2-causal inequalities (see Fig. <ref>). § ANALYSIS OF THE TRIPARTITE “LAZY SCENARIO”In this section we analyse in detail, as an illustration, the polytope of 2-causal correlations for the simplest nontrivial scenario with more than two parties. In Ref. <cit.> it was shown that this scenario is the so-called tripartite “lazy scenario”, in which each party A_k receives a binary input x_k, has a single constant output for one of the inputs, and a binary output for the other. By convention we consider that for each k, on input x_k=0 the output is always a_k=0, while for x_k=1 we take a_k∈{0,1}. 
The set of fully causal correlations was completely characterised for this scenario in Ref. <cit.>, which will furthermore permit us to compare the noncausal and genuinely tripartite noncausal correlations in this concrete example. As is standard (and as we did in the introduction), we will denote the three parties by A, B, C, their inputs by x, y, z, and their outputs by a, b and c. Furthermore, we will denote the complete tripartite probability distribution by P_ABC [i.e., P_ABC(abc|xyz) ≡ P(abc|xyz)] and the marginal distributions for the indicated parties by P_AB, P_A, etc. [e.g., P_AB(ab|xyz)=∑_c P_ABC(abc|xyz)].

§.§ Characterisation of the polytope of 2-causal correlations

§.§.§ Complete characterisation

We characterise the polytope of 2-causal correlations in much the same way as the polytope of fully causal correlations was characterised in Ref. <cit.>, to which we refer the reader for a more in-depth presentation. Specifically, the vertices of the polytope are found by enumerating all deterministic 2-causal probability distributions P_ABC, i.e., those which admit a decomposition of the form (<ref>) with (because they are deterministic) a single term in the sum (corresponding to a single group of parties acting first). One finds that there are 1 520 such distributions, and thus vertices. In order to determine the facets of the polytope, which in turn correspond to tight 2-causal inequalities, a parametrisation of the 19-dimensional polytope must be fixed and the convex hull problem solved. We use the same parametrisation as in Ref. <cit.>, and again use cdd <cit.> to compute the facets of the polytope. We find that the polytope has 21 154 facets, each corresponding to a 2-causal inequality, the violation of which would certify genuinely tripartite noncausality. Many inequalities, however, can be obtained from others by either relabelling the outputs or permuting the parties, and as a result it is natural to group the inequalities into equivalence classes, or “families”, of inequalities. Taking this into account, we find that there are 476 families of facet-inducing 2-causal inequalities, 3 of which are trivial, as they simply correspond to positivity constraints on the probabilities (and are thus satisfied by any valid probability distribution). While the 2-causal inequalities all detect genuinely N-partite noncausality, it is interesting to note that all except 22 of them can be saturated by fully causal correlations (and all but 37 even by correlations compatible with a fixed causal order). We provide the complete list of these inequalities, organised by their symmetries and the types of distribution required to saturate them, in the Supplementary Material <cit.>, and will analyse in more detail a few particularly interesting examples in what follows. First, however, it is interesting to note that only 2 of the 473 nontrivial facets are also facets of the (fully) causal polytope for this scenario (one of which is Eq. (<ref>) analysed below), and hence the vast majority of facet-inducing inequalities of the causal polytope do not single out genuinely tripartite noncausal correlations. Moreover, none of the 2-causal inequalities we obtain here differ from facet-inducing fully causal inequalities only in their bound, and, except for the aforementioned cases, our 2-causal inequalities thus represent novel inequalities.
§.§.§ Three interesting inequalities

Of the nontrivial 2-causal inequalities, those that display certain symmetries between the parties are particularly interesting, since they tend to have comparatively simple forms and often permit natural interpretations (e.g., as causal games <cit.>). For example, three nontrivial families of 2-causal inequalities have forms (i.e., certain versions of the inequality within the corresponding equivalence class) that are completely symmetric under permutations of the parties. One of these is the inequality

I_1 = [ P_A(1|100) + P_B(1|010) + P_C(1|001) ] + [ P_AB(11|110) + P_BC(11|011) + P_AC(11|101) ] - P_ABC(111|111) ≥ 0,

which can be naturally expressed as a causal game. Indeed, it can be rewritten as

P(ãb̃c̃ = xyz) ≤ 3/4,

where ã=1 if x=0 and ã=a if x=1 (i.e., ã=xa⊕x⊕1, where ⊕ denotes addition modulo 2), and similarly for b̃ and c̃, and where it is implicitly assumed that all inputs occur with the same probability. This can be interpreted as a game in which the goal is to collaborate such that the product of the nontrivial outputs (i.e., those corresponding to an input 1) is equal to the product of the inputs, and where the former product is taken to be 1 if all inputs are 0 and there are therefore no nontrivial outputs (in which case the game will always be lost). The probability of success for this game can be no greater than 3/4 if the parties share a 2-causal correlation. This bound can easily be saturated by a deterministic, even fully causal, distribution: if every party always outputs 0 then the parties will win the game in all cases, except when the inputs are all 0 or all 1. Another party-permutation-symmetric 2-causal inequality is the following:

I_2 = 1 + 2 [ P_A(1|100) + P_B(1|010) + P_C(1|001) ] - [ P_AB(11|110) + P_BC(11|011) + P_AC(11|101) ] ≥ 0,

whose interpretation can be made clearer by rewriting it as

[ P_A(1|100) + P_B(1|010) - P_AB(11|110) ] + [ P_B(1|010) + P_C(1|001) - P_BC(11|011) ] + [ P_A(1|100) + P_C(1|001) - P_AC(11|101) ] ≥ -1.

The left-hand side of this inequality is simply the sum of three terms corresponding to conditional “lazy guess your neighbour's input” (LGYNI) inequalities <cit.>, one for each pair of parties (conditioned on the remaining party having input 0), while the negative bound on the right-hand side accounts for the fact that any pair of parties that are grouped together in a bipartition may maximally violate the LGYNI inequality between them (and thus reach the minimum algebraic bound -1). This inequality can be interpreted as a “scored game” (as opposed to a “win-or-lose game”) in which each pair of parties scores one point if they win their respective bipartite LGYNI game and the third party's input is 0, and where the goal of the game is to maximise the total score, given by the sum of all three pairs' individual scores. The best average score (when the inputs are uniformly distributed) for a 2-causal correlation is 5/4, corresponding to the 2-causal bounds of 0 in Eq. (<ref>) and -1 in Eq. (<ref>).[The bound of these inequalities, and the best average score of the corresponding game, can be reached by a 2-causal strategy in which one party, say A, has a fixed causal order with respect to the other two parties grouped together, who share a correlation maximally violating the corresponding LGYNI inequality. For example, the distribution P(abc|xyz)=δ_a,0 δ_b,yz δ_c,yz, where δ is the Kronecker delta function, is compatible with the order A ≺{B,C} (or with {B,C}≺ A) and saturates Eqs. (<ref>) and (<ref>).] It is also clear from the form of Eq.
(<ref>) that for fully causal correlations the left-hand side is lower-bounded by 0. This inequality is thus amongst the 22 facet-inducing 2-causal inequalities that cannot be saturated by fully causal distributions. In addition to the inequalities that are symmetric under any permutation of the parties, there are four further nontrivial families containing 2-causal inequalities which are symmetric under cyclic exchanges of parties. One interesting such example is the following:

I_3 = 2 + [ P_A(1|100) + P_B(1|010) + P_C(1|001) ] - [ P_A(1|101) + P_B(1|110) + P_C(1|011) ] ≥ 0.

This inequality can again be interpreted as a causal game in the form (where we again implicitly assume a uniform distribution of inputs for all parties)

P( x(y ⊕ 1)(a ⊕ z) = y(z ⊕ 1)(b ⊕ x) = z(x ⊕ 1)(c ⊕ y) = 0 ) ≤ 7/8,

where the goal of the game is for each party, whenever they receive the input 1 and their right-hand neighbour has the input 0, to output the input of their left-hand neighbour (with C being considered, in a circular manner, to be to the left of A).[The bound of 7/8 on the probability of success can, for instance, be reached by the fully causal (and hence 2-causal) distribution P(abc|xyz)=δ_a,x δ_b,xy δ_c,yz, compatible with the order A≺ B ≺ C, which wins the game in all cases except when (x,y,z)=(1,0,0).] This inequality is of additional interest as it is one of the two nontrivial inequalities which is also a facet of the standard causal polytope for this scenario. (The second such inequality, which lacks the symmetry of this one, is presented in the Supplementary Material <cit.>.)

§.§ Violations of 2-causal inequalities by process matrix correlations

One of the major sources of interest in causal inequalities has been the potential to violate them in more general frameworks, in which causal restrictions are weakened. There has been a particular interest in one such model, the process matrix formalism, in which quantum mechanics is considered to hold locally for each party, but no global causal order between the parties is assumed <cit.>. In this framework, the (possibly noncausal) interactions between the parties are described by a process matrix W, which, along with a description of the operations performed by the parties, allows the correlations P(a⃗|x⃗) to be calculated. It is well known that process matrix correlations can violate causal inequalities <cit.>, although the physical realisability of such processes remains an open question <cit.>. In Ref. <cit.> it was shown that all the nontrivial fully causal inequalities for the tripartite lazy scenario can be violated by process matrices. However, for most inequalities violation was found to be possible using process matrices W^{A,B}≺ C that are compatible with C acting last, which means the correlations they produced were necessarily 2-causal. It is therefore interesting to see whether process matrices are capable of violating 2-causal inequalities in general, and thus of exhibiting genuinely N-partite noncausality. We will not present the process matrix formalism here, and instead simply summarise our findings; we refer the reader to Refs. <cit.> for further details on the technical formalism. Following the same approach as in Refs. <cit.>, we looked for violations of the 2-causal inequalities.
Specifically, we focused on two-dimensional (qubit) systems and applied the same “see-saw” algorithm to iteratively perform semidefinite convex optimisation over the process matrix and the instruments defining the operations of the parties. As a result, we were able to find process matrices violating all but 2 of the 473 nontrivial families of tight 2-causal inequalities (including Eqs. (<ref>) and (<ref>) above) using qubits, and in all cases where a violation was found, the best violation was given by the same instruments that provided similar results in Ref. <cit.>. We similarly found that 284 families of these 2-causal inequalities (including Eq. (<ref>)) could be violated by completely classical process matrices,[Incidentally, exactly the same number of families of fully causal inequalities were found to be violable with classical process matrices in Ref. <cit.>. It remains unclear whether this is merely a coincidence or the result of a deeper connection.] a phenomenon that is not present in the bipartite scenario where classical processes are necessarily causal <cit.>.

While the violation of 2-causal inequalities is again rather ubiquitous, the existence of two inequalities for which we found no violation is curious. One of these inequalities is precisely Eq. (<ref>), and its decomposition in Eq. (<ref>) into three LGYNI inequalities helps provide an explanation. In particular, the seemingly best possible violation of a (conditional) LGYNI inequality using qubits is approximately 0.2776 <cit.>, whereas it is clear that a process matrix violating Eq. (<ref>) must necessarily violate a conditional LGYNI inequality between one pair of parties by at least 1/3. Moreover, in Ref. <cit.> it was reported that no better violation was found using three- or four-dimensional systems, indicating that Eq. (<ref>) can similarly not be violated by such systems. It nonetheless remains unproven whether such a violation is indeed impossible, and the convex optimisation problem for three parties quickly becomes intractable for higher dimensional systems, making further numerical investigation difficult. The second inequality for which no violation was found can similarly be expressed as a sum of three different forms (i.e., relabellings) of a conditional LGYNI inequality, and a similar argument thus explains why no violation was found. Recall that, as they can be expressed as a sum of three conditional LGYNI inequalities with a negative 2-causal bound, these two 2-causal inequalities cannot be saturated by fully causal distributions; it is interesting that the remaining inequalities that require noncausal but 2-causal distributions to saturate can nonetheless be violated by process matrix correlations.

§.§ Generalised 2-causal inequalities for N parties

Although it quickly becomes intractable to completely characterise the 2-causal polytope for more complicated scenarios with more parties, inputs and/or outputs, as is also the case for fully causal correlations, it is nonetheless possible to generalise some of the 2-causal inequalities into inequalities that are valid for any number of parties N. The inequality (<ref>), for example, can naturally be generalised to give a 2-causal inequality valid for all N ≥ 2. [We continue to focus on the lazy scenario defined earlier for concreteness, but we note that the proofs of the generalised inequalities (<ref>) and (<ref>) in fact hold in any nontrivial scenario, of which the lazy one is the simplest example.
The bounds for the corresponding causal games and whether or not the inequalities define facets will, however, generally depend on the scenario considered.] Specifically, one obtains

J_1(N) = ∑_{∅ ⊊ 𝒜 ⊊ 𝒩} P(a⃗_𝒜 = 1⃗ | x⃗_𝒜 = 1⃗, x⃗_{𝒩∖𝒜} = 0⃗) - P(a⃗ = 1⃗ | x⃗ = 1⃗) ≥ 0,

where 1⃗=(1,…,1) and 0⃗=(0,…,0), which can be written analogously to Eq. (<ref>) as a game (again implicitly defined with uniform inputs) of the form

P( Π_k ã_k = Π_k x_k ) ≤ 1 - 2^{-N+1}.

We leave the proof of this inequality and its 2-causal bound to Appendix <ref>. It is interesting to ask if this inequality is tight (i.e., facet inducing) for all N. For N=2 it reduces to the LGYNI inequality, which is indeed tight, and for N=3 it was also found to be a facet. By explicitly enumerating the vertices of the 2-causal polytope for N=4 (of which there are 136 818 592) we were able to verify that J_1(4) ≥ 0 is indeed also a facet, and we conjecture that this is true for all N. Note that, as for the tripartite case, it is trivial to saturate the inequality for all N by considering the (fully causal) strategy where each party always outputs 0.

It is also possible to generalise inequality (<ref>) to N parties—which will prove more interesting later—by considering a scored game in which every pair of parties gets one point if they win their respective bipartite LGYNI game and all other parties' inputs are 0, and the goal of the game is to maximise the total score of all pairs. If two parties belong to the same subset in a bipartition, then they can win their respective LGYNI game perfectly, whereas they are limited by the causal bound 0 if they belong to two different groups. The 2-causal bound on the inequality is thus given by the maximum number of pairs of parties that belong to a common subset over all bipartitions, times the maximal violation of the bipartite LGYNI inequality. Specifically, we obtain the 2-causal inequality

J_2(N) = ∑_{{i,j}⊂𝒩} L_N(i,j) ≥ -\binom{N-1}{2},

where \binom{n}{2} = n(n-1)/2 is a binomial coefficient and

L_N(i,j) = P(a_i=1|x_i x_j=10, x⃗_{𝒩∖{i,j}}=0⃗) + P(a_j=1|x_i x_j=01, x⃗_{𝒩∖{i,j}}=0⃗) - P(a_i a_j=11|x_i x_j=11, x⃗_{𝒩∖{i,j}}=0⃗).

Each term L_N(i,j) defines a bipartite conditional LGYNI inequality with the causal bound L_N(i,j) ≥ 0, and the minimum algebraic bound (i.e. the maximal violation) -1. The minimum algebraic bound of J_2(N) is thus -\binom{N}{2}. The validity of inequality (<ref>) for 2-causal correlations (which corresponds to a maximal average score of (2N-1)(N-1)/2^N—compared to the maximal algebraic value of 2N(N-1)/2^N—for the corresponding game with uniform inputs) is again formally proved in Appendix <ref>.

We note that in contrast to Eq. (<ref>), J_2(4) ≥ -3 is not a facet of the 4-partite 2-causal polytope, and thus the inequality is not tight in general. Inequality (<ref>) can nonetheless be saturated by 2-causal correlations for any N. For example, consider 𝒜 = {1,…,N-1} and take the distribution

P(a⃗|x⃗) = δ_{a⃗_𝒜, f(x⃗_𝒜)} δ_{a_N,0}

with f(x⃗_𝒜) = x⃗_𝒜 if x⃗_𝒜 contains exactly two inputs 1, and f(x⃗_𝒜) = 0⃗ otherwise. P(a⃗|x⃗) is clearly 2-causal since it is compatible with the causal order 𝒜 ≺ 𝒩∖𝒜 (indeed, also with 𝒩∖𝒜 ≺ 𝒜). One can then easily verify that P(a⃗|x⃗) saturates (<ref>), since all \binom{N-1}{2} pairs of parties in 𝒜 can win their respective conditional LGYNI game perfectly, and therefore contribute with a term of -1 to the sum in Eq. (<ref>).
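Both generalised inequalities are easy to probe numerically. The sketch below is ours and purely illustrative (none of the function names come from the paper): it checks that the all-zeros strategy attains the game bound 1 - 2^{-N+1} of J_1(N) — for N = 3 this is the 3/4 bound of I_1 — and that the explicit 2-causal distribution just described evaluates J_2(N) to exactly -\binom{N-1}{2}.

    from itertools import combinations, product
    from math import comb, prod

    def j1_all_zeros_score(N):
        # success probability, over uniform inputs, of the strategy where every
        # party outputs 0 (so the tilde-output is 1 on input 0 and 0 on input 1)
        wins = 0
        for x in product((0, 1), repeat=N):
            t = [1 if xi == 0 else 0 for xi in x]
            wins += (prod(t) == prod(x))
        return wins / 2 ** N

    def outputs(x):
        # saturating 2-causal strategy: parties 1..N-1 grouped, party N alone;
        # the group copies its inputs iff exactly two of them equal 1
        grp = x[:-1]
        return (list(grp) if sum(grp) == 2 else [0] * len(grp)) + [0]

    def j2_value(N):
        total = 0
        for i, j in combinations(range(N), 2):
            def a(xi, xj):
                x = [0] * N
                x[i], x[j] = xi, xj
                return outputs(x)
            total += ((a(1, 0)[i] == 1) + (a(0, 1)[j] == 1)
                      - (a(1, 1)[i] == 1 and a(1, 1)[j] == 1))
        return total

    for N in (3, 4, 5, 6):
        assert j1_all_zeros_score(N) == 1 - 2 ** (1 - N)
        assert j2_value(N) == -comb(N - 1, 2)
    print("J_1 and J_2 checks passed")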
§ REFINING THE DEFINITION OF GENUINELY MULTIPARTITE NONCAUSAL CORRELATIONS

So far we only discussed correlations that can or cannot arise given a definite causal order between two subsets of parties. It makes sense to consider more refined definitions that discriminate, among noncausal correlations, to what extent and in which way they represent a genuinely multipartite resource. The idea will again be to see if a given correlation can be established by letting certain groups of parties act “as one”, while retaining a definite causal order between different groups. The number and size of the groups for which this is possible can be used to give two distinct characterisations of how genuinely multipartite the observed noncausality is.

§.§ 𝒦-causal correlations

We first want to characterise the correlations that can be realised when a definite causal order exists between certain groups of parties, while no constraint is imposed on the correlations within each group. Let us consider for this purpose a partition 𝒦 = {𝒜_1, …, 𝒜_|𝒦|} of 𝒩—i.e., a set of |𝒦| nonempty disjoint subsets 𝒜_ℓ of 𝒩, such that ∪_ℓ 𝒜_ℓ = 𝒩, see Fig. <ref>. Note that if 𝒦 contains at least two subsets, then for a given subset 𝒜_ℓ ∈ 𝒦, 𝒦∖{𝒜_ℓ} also represents a partition of 𝒩∖𝒜_ℓ. Let us then introduce the following definition:

For a given partition 𝒦 of 𝒩, an N-partite correlation P is said to be 𝒦-causal if and only if P is causal when considered as an effective |𝒦|-partite correlation, where each subset in 𝒦 defines an effective party. More precisely, analogously to Definition <ref>:

* For |𝒦| = 1, any N-partite correlation P is 𝒦-causal;
* For |𝒦| ≥ 2, an N-partite correlation P is 𝒦-causal if and only if it can be decomposed in the form

P(a⃗|x⃗) = ∑_{𝒜_ℓ∈𝒦} q_{𝒜_ℓ} P_{𝒜_ℓ}(a⃗_{𝒜_ℓ}|x⃗_{𝒜_ℓ}) × P_{𝒜_ℓ,x⃗_{𝒜_ℓ},a⃗_{𝒜_ℓ}}(a⃗_{𝒩∖𝒜_ℓ}|x⃗_{𝒩∖𝒜_ℓ})

with q_{𝒜_ℓ} ≥ 0 for each 𝒜_ℓ, ∑_{𝒜_ℓ} q_{𝒜_ℓ} = 1, where (for each 𝒜_ℓ) P_{𝒜_ℓ}(a⃗_{𝒜_ℓ}|x⃗_{𝒜_ℓ}) is a valid probability distribution for the parties in 𝒜_ℓ and (for each 𝒜_ℓ, x⃗_{𝒜_ℓ}, a⃗_{𝒜_ℓ}) P_{𝒜_ℓ,x⃗_{𝒜_ℓ},a⃗_{𝒜_ℓ}}(a⃗_{𝒩∖𝒜_ℓ}|x⃗_{𝒩∖𝒜_ℓ}) is a (𝒦∖{𝒜_ℓ})-causal correlation for the remaining N-|𝒜_ℓ| parties.

In the extreme case of a single-set partition 𝒦 = {𝒩} (|𝒦| = 1), any correlation is by definition trivially 𝒦-causal; at the other extreme, for a partition of 𝒩 into N singletons (|𝒦| = N), the definition of 𝒦-causal correlations above is equivalent to that of fully causal correlations, Definition <ref> <cit.>. Between these two extreme cases, a 𝒦-causal correlation identifies the situation where, with some probability, all parties within one group act before all other parties; conditioned on their inputs and outputs, another group acts second (before all remaining parties) with some probability; and so on. We emphasise that no constraint is imposed on the correlations that can be generated within each group, since we allow them to share the most general resource conceivable—in particular, there might be no definite causal order between the parties inside a group.

Since the definition of 𝒦-causal correlations above matches that of causal correlations for the |𝒦| effective parties defined by 𝒦, all basic properties of causal correlations (see Ref.
<cit.>) generalise straightforwardly to 𝒦-causal correlations. Note in particular that the definition captures the idea of dynamical causal order, where the causal order between certain subsets of parties in 𝒦 may depend on the inputs and outputs of other subsets of parties that acted before them. The following result also follows directly from what is known about causal correlations <cit.>:

For any given 𝒦, the set of 𝒦-causal correlations forms a convex polytope, whose (finitely many) extremal points correspond to deterministic 𝒦-causal correlations. We shall call this polytope the 𝒦-causal polytope; its facets define 𝒦-causal inequalities.

Theorem <ref> implies that any 𝒦-causal correlation can be obtained as a probabilistic mixture of deterministic 𝒦-causal correlations. It is useful to note that, similarly to Ref. <cit.>, deterministic 𝒦-causal correlations can be interpreted in the following way: a set 𝒜_{t_1} of parties acts with certainty before all others, with their outputs being a deterministic function of all inputs in that set but independent of the inputs of any other parties, a⃗_{𝒜_{t_1}} = α⃗_{𝒜_{t_1}}(x⃗_{𝒜_{t_1}}). The inputs of the first set also determine which set comes second, 𝒜_{t_2^x⃗}, where t_2^x⃗ = t_2(x⃗_{𝒜_{t_1}}), whose outputs can depend on all inputs of the first and second sets; and so on, until all the sets in the partition are ordered. As one can see, each possible vector of inputs x⃗ thus determines (in a not necessarily unique way) a given causal order for the sets of parties in 𝒦.

§.§ Non-inclusion relations for 𝒦-causal polytopes

As suggested earlier, our goal is to quantify the extent to which a noncausal resource is genuinely multipartite in terms of the number or size of the subsets one needs to consider in a partition 𝒦 to make a given correlation 𝒦-causal. A natural property to demand of such a quantification is that it defines nested sets of correlations: if a correlation is genuinely multipartite noncausal “to a certain degree”, it should also be contained in the sets of “less genuinely multipartite noncausal” correlations (and, eventually, the set of simply noncausal correlations). It is therefore useful, before providing the relevant definitions in the next subsections, to gather a better understanding of the inclusion relations between 𝒦-causal polytopes.

One might intuitively think that there should indeed be nontrivial inclusion relations among those polytopes. For example, one might think that a 𝒦-causal correlation should also be 𝒦'-causal if 𝒦' is a “coarse-graining” of 𝒦 (i.e., 𝒦' is obtained from 𝒦 by grouping some of its groups to define fewer but larger subsets)—or, more generally, when 𝒦' contains fewer subsets than 𝒦, i.e. |𝒦'| < |𝒦|. This, however, is not true. For example, in the tripartite case, a fully causal correlation (i.e., a 𝒦-causal one for 𝒦 = {{A_1},{A_2},{A_3}}) compatible with the fixed order A_1 ≺ A_2 ≺ A_3, where A_2 comes between A_1 and A_3, may not be 𝒦'-causal for 𝒦' = {{A_1,A_3},{A_2}}, since one cannot order A_2 with respect to {A_1,A_3} when those are taken together. In fact, no nontrivial inclusion exists among 𝒦-causal polytopes, as established by the following theorem, proved in Appendix <ref>.

Consider an N-partite scenario where each party has at least two possible inputs and at least two possible outputs for one value of the inputs. Given two distinct nontrivial[If one of the two partitions is trivial, say 𝒦' = {𝒩}, then the 𝒦-causal polytope is of course contained in the trivial 𝒦'-causal one (which contains all valid probability distributions).
Note that for N=2 there is only one nontrivial partition; the theorem is thus only relevant for scenarios with N ≥ 3.] partitions 𝒦 and 𝒦' of 𝒩 with |𝒦|,|𝒦'| > 1, the 𝒦-causal polytope is not contained in the 𝒦'-causal one, nor vice versa.

One may also ask whether, for a given 𝒦-causal correlation P, there always exists a partition 𝒦' with 2 ≤ |𝒦'| < |𝒦| such that P is also 𝒦'-causal (recall that the case |𝒦'| = 1 is trivial). The answer is negative when mixtures of different causal orders are involved: e.g., in the tripartite case with 𝒦 = {{A_1},{A_2},{A_3}}, a fully causal correlation of the form P = 1/6 (P_{A_1 ≺ A_2 ≺ A_3} + P_{A_1 ≺ A_3 ≺ A_2} + P_{A_2 ≺ A_1 ≺ A_3} + P_{A_2 ≺ A_3 ≺ A_1} + P_{A_3 ≺ A_1 ≺ A_2} + P_{A_3 ≺ A_2 ≺ A_1}), where each correlation in the sum is compatible with the corresponding causal order, may not be 𝒦'-causal for any 𝒦' of the form 𝒦' = {{A_i,A_j},{A_k}}, i ≠ j ≠ k, as there is always a term in P above for which A_k comes between A_i and A_j. For an explicit example one can take the correlation P above to be a mixture of 6 correlations P_{𝒦,σ}^det introduced in Appendix <ref>. [To see that P thus defined is indeed not 𝒦'-causal for any such bipartition, first note that, by symmetry, it suffices to show it is not 𝒦'-causal for 𝒦' = {{A_1},{A_2,A_3}}. One can readily show that all such 𝒦'-causal correlations must obey the LGYNI-type inequality P_{A_1}(1|100) + P_{A_2A_3}(11|011) - P_{A_1A_2A_3}(111|111) ≥ 0 (which, moreover, is a facet of the 𝒦'-causal polytope). It is easily verified that P violates this inequality with the left-hand side obtaining the value -1/3.]

The above results tell us that 𝒦-causal polytopes do not really define useful classes to directly quantify how genuinely multipartite the noncausality of a correlation is. One may wonder whether considering convex hulls of 𝒦-causal polytopes allows one to avoid these issues. For example, is it the case that any 𝒦-causal correlation P is contained in the convex hull of all 𝒦_j'-causal correlations for all partitions 𝒦_j' with a fixed value of |𝒦_j'| = 𝔪' < |𝒦|? [Note that a convex combination of 𝒦_j'-causal correlations for various partitions 𝒦_j' with a fixed number of subsets |𝒦_j'| = 𝔪' is not necessarily 𝒦'-causal for any single partition 𝒦' with the same value of |𝒦'| = 𝔪'.] For 𝔪'=1 this is trivial, and this remains true for 𝔪'=2: any 𝒦-causal correlation P can be decomposed as a convex combination of 𝒦_j'-causal correlations for various partitions 𝒦_j' with |𝒦_j'| = 2. Eq. (<ref>) is indeed such a decomposition, with the partitions 𝒦_ℓ' = {𝒜_ℓ, 𝒩∖𝒜_ℓ}. This is also true, for any value of 𝔪', for 𝒦-causal correlations that are compatible with a fixed causal order between the subsets in 𝒦 (or convex mixtures thereof): indeed, such a correlation is also 𝒦'-causal for any coarse-grained partition 𝒦' of 𝒦 where consecutive subsets (as per the causal order in question, or per each causal order in a convex mixture) of 𝒦 are grouped together.
However, this is not true in general for 𝔪' > 2 when dynamical causal orders are involved. It is indeed possible to find a 4-partite, fully causal correlation that cannot be expressed as a convex combination of 𝒦_j'-causal correlations with all |𝒦_j'| = 3; an explicit counterexample is presented in Appendix <ref>.

From these observations we conclude that, although grouping parties into 𝔪 subsets seems to be a stronger constraint than grouping parties into some 𝔪' < 𝔪 subsets, the fact that a correlation is 𝒦-causal for some |𝒦| = 𝔪 ≥ 4 (or more generally, that it is a convex combination of various 𝒦_j-causal correlations with all |𝒦_j| = 𝔪 ≥ 4) does not guarantee that it is also 𝒦'-causal for some |𝒦'| = 𝔪' < 𝔪—unless 𝔪'=2 (or 𝔪' = 1, trivially)—nor that it can be decomposed as a convex combination of 𝒦_j'-causal correlations with all |𝒦_j'| = 𝔪'. In particular, fully causal correlations may not be 𝒦'-causal for any 𝒦' with 2 < |𝒦'| < N, or convex combinations of such 𝒦'-causal correlations. This remark motivates the definitions in the next subsection.

§.§ M-causal correlations

§.§.§ Definition and characterisation

With the previous discussion in mind, we propose the following definition, as a first refinement between the definitions of fully causal and 2-causal correlations.

An N-partite correlation is said to be M-causal (for 1 ≤ M ≤ N) if and only if it is a convex combination of 𝒦-causal correlations, for various partitions 𝒦 of 𝒩 into |𝒦| ≥ M subsets. More explicitly: P is M-causal if and only if it can be decomposed as

P(a⃗|x⃗) = ∑_{𝒦: |𝒦| ≥ M} q_𝒦 P_𝒦(a⃗|x⃗),

where the sum is over all partitions 𝒦 of 𝒩 into M subsets or more, with q_𝒦 ≥ 0 for each 𝒦, ∑_𝒦 q_𝒦 = 1, and where each P_𝒦(a⃗|x⃗) is a 𝒦-causal correlation.

For M=1, any correlation is trivially 1-causal, since for 𝒦 = {𝒩} any correlation is 𝒦-causal. For M=N, the definition of M-causal correlations above is equivalent to that of fully causal correlations, Definition <ref> <cit.>. For M=2, the above definition is equivalent to that of 2-causal correlations as introduced through Definition <ref>. To see this, recall first (from the discussion in the previous subsection) that any 𝒦-causal correlation with |𝒦| ≥ 2 can be written as a convex combination of some 𝒦'-causal correlations, for various bipartitions 𝒦' with |𝒦'| = 2. It follows that, for M=2, it would be equivalent to have the condition |𝒦| = 2 instead of |𝒦| ≥ 2 in Definition <ref> of M-causal correlations. Definition <ref> is then recovered when writing the bipartitions in the decomposition as 𝒦 = {𝒜, 𝒩∖𝒜}, using Eq. (<ref>) from the definition of 𝒦-causal correlations, and rearranging the terms in the decomposition. Hence, Definition <ref> is in fact equivalent to saying that 2-causal correlations are those that can be written as a convex mixture of 𝒦-causal correlations, for different partitions 𝒦 of 𝒩 into |𝒦| ≥ 2 subsets, thus justifying further our definition of genuinely N-partite noncausal correlations as those that cannot be written as such a convex mixture (or equivalently, those that are not M-causal for any M>1).

Note that since we used the constraint |𝒦| ≥ M rather than |𝒦| = M in Eq.
(<ref>), [Replacing the condition |𝒦| ≥ M by |𝒦| = M in Definition <ref> for arbitrary M, we could define “(=M)-causal correlations”, which would be distinct from M-causal correlations for 2<M<N. We would also have that “(=M)-causal correlations” form a convex polytope; however, the various “(=M)-causal polytopes” would not necessarily be included in one another for distinct values of M, as discussed in the previous subsection.] our definition establishes a hierarchy of correlations as desired, with M-causal ⇒ M'-causal if M ≥ M'. With the above definition of M-causal correlations, we have the following:

For any given value of M (with 1 ≤ M ≤ N), the set of M-causal correlations forms a convex polytope, whose (finitely many) extremal points correspond to deterministic 𝒦-causal correlations, for all possible partitions 𝒦 with |𝒦| ≥ M—that is, deterministic M-causal correlations.

According to Eq. (<ref>), the set of M-causal correlations is the convex hull of the polytopes of 𝒦-causal correlations with |𝒦| ≥ M. Since there is a finite number of such polytopes, the set of M-causal correlations is itself a convex polytope; its extremal points are those of the various 𝒦-causal polytopes with |𝒦| ≥ M, namely deterministic 𝒦-causal correlations (see Theorem <ref>).

We thus obtain a family of convex polytopes—which we shall call M-causal polytopes—included in one another, see Fig. <ref>. The facets of these polytopes are M-causal inequalities, which define a hierarchy of criteria: e.g., if all M-causal inequalities are satisfied, then so are all M'-causal inequalities if M' ≤ M—or equivalently: if some M'-causal inequality is violated, then some M-causal inequality must also be violated if M ≥ M'. Given a correlation P, one can in principle test to which set it belongs. The largest M for which P is M-causal can be used as a measure of how genuinely multipartite its noncausality is: it means that P can be obtained as a convex combination of 𝒦-causal correlations with all |𝒦| ≥ M, but not with all |𝒦| > M—indeed, if M<N then P violates some M'-causal inequality for any M'>M (with M' ≤ N). If that M is 1, P is a genuinely N-partite noncausal correlation; if it is N, then P is fully causal, hence it displays no noncausality (genuinely multipartite or not).

§.§.§ A family of M-causal inequalities

The general N-partite 2-causal inequality (<ref>) can easily be modified to give an M-causal inequality that is valid—although not tight in general, as observed before—for all N and M (with 1 ≤ M ≤ N), simply by changing the bound. Indeed, this bound is derived from the largest possible number of pairs of parties that can be in a single subset of a given partition, and this can easily be recalculated for M-subset partitions rather than bipartitions. We thus obtain that

J_2(N) = ∑_{{i,j}⊂𝒩} L_N(i,j) ≥ -\binom{N-M+1}{2}

for any M-causal correlation. This updated bound is proved in Appendix <ref>. As for Eq. (<ref>) it is easy to see that, for any N, M, there are M-causal correlations saturating the inequality (<ref>). Consider, for instance, the partition 𝒦 = {𝒜_1,…,𝒜_M} of 𝒩 with 𝒜_1 = {1,…,N-M+1} and 𝒜_ℓ = {N-M+ℓ} for 2 ≤ ℓ ≤ M, and take the distribution

P(a⃗|x⃗) = δ_{a⃗_{𝒜_1}, f(x⃗_{𝒜_1})} δ_{a⃗_{𝒩∖𝒜_1}, 0⃗},

where we use the same function f as in Eq.
(<ref>). Analogous reasoning shows that this correlation indeed reaches the bound (<ref>). Since this (reachable) lower bound is different for each possible value of M, this implies, in particular, that (for the N-partite lazy scenario) all the inclusions N-causal ⊂ (N-1)-causal ⊂ ⋯ ⊂ 3-causal ⊂ 2-causal in the hierarchy of M-causal polytopes are strict. In fact, as for inequalities (<ref>) and (<ref>) (see Footnote <ref>), the proof of Eq. (<ref>) holds in any nontrivial scenario (with arbitrarily many inputs and outputs), of which the lazy scenario is the simplest example for all N. Moreover, one can saturate it in such scenarios by trivially extending the M-causal correlation (<ref>) (e.g., by producing a constant output on all other inputs) and thus these inclusions are strict in general.

§.§ Size-S-causal correlations

In the previous subsection we used the number of subsets needed in a partition to quantify how genuinely multipartite the noncausality of a correlation is. Here we present an alternative quantification, based on the size of the biggest subset in a partition, rather than the number of subsets. Intuitively, the bigger the subsets in a partition 𝒦 needed to reproduce a correlation, the more genuinely multipartite noncausal the corresponding 𝒦-causal correlations are. However, the discussion of Sec. <ref> implies that, as was the case with M-causal correlations, it is not sufficient to simply ask whether a given correlation is 𝒦-causal for some partition 𝒦 with subsets of a particular size. We therefore focus on classes of correlations that can be written as mixtures of 𝒦-causal ones whose largest subset is not larger than some number S. For convenience, we introduce the notation

𝔰(𝒦) := max_{𝒜∈𝒦} |𝒜|.

We then take the following definition:

An N-partite correlation is said to be size-S-causal (for 1 ≤ S ≤ N) if and only if it is a convex combination of 𝒦-causal correlations, for various partitions 𝒦 whose subsets are no larger than S. More explicitly: P is size-S-causal if and only if it can be decomposed as

P(a⃗|x⃗) = ∑_{𝒦: 𝔰(𝒦) ≤ S} q_𝒦 P_𝒦(a⃗|x⃗),

where the sum is over all partitions 𝒦 of 𝒩 with no subset of size larger than S, with q_𝒦 ≥ 0 for each 𝒦, ∑_𝒦 q_𝒦 = 1, and where each P_𝒦(a⃗|x⃗) is a 𝒦-causal correlation.

Any N-partite correlation is trivially size-N-causal, while size-1-causal correlations coincide with fully causal correlations. Furthermore, noting that 𝔰(𝒦) ≤ N-1 if and only if |𝒦| ≥ 2, we see that the set of size-(N-1)-causal correlations coincides with that of 2-causal correlations. Hence, the definition of size-S-causal correlations is another possible generalisation of that of 2-causal ones.
From this new perspective, 2-causal correlations can be seen as those that can be realised using (probabilistic mixtures of) noncausal resources available to groups of parties of size N-1 or less. This further strengthens the definition of 2-causal correlations as the largest set of correlations that do not possess genuinely N-partite noncausality.

Without repeating in full detail, it is clear that size-S-causal correlations define a structure similar to that of M-causal correlations: for each S, size-S-causal correlations define size-S-causal polytopes whose vertices are deterministic size-S-causal correlations and whose facets define size-S-causal inequalities. For S ≤ S', all size-S-causal correlations are also size-S'-causal, so that the various size-S-causal polytopes are included in one another. The lowest S for which a correlation is size-S-causal also provides a measure of how genuinely multipartite the corresponding noncausal resource is, distinct to that defined by M-causal correlations.

It is also possible here to generalise inequality (<ref>) to size-S-causal correlations by changing the bound. As proven in Appendix <ref>, we thus obtain the size-S-causal inequality

J_2(N) ≥ -⌊N/S⌋ \binom{S}{2} - \binom{N - ⌊N/S⌋S}{2}

(where ⌊x⌋ denotes the largest integer smaller than or equal to x). Although, once again, this inequality is not tight in the sense that it does not define a facet of the size-S-causal polytope, its lower bound can be saturated by a size-S-causal correlation for each value of S, for instance by considering the partition 𝒦 = {𝒜_1,…,𝒜_{⌊N/S⌋}(, 𝒜_{⌊N/S⌋+1})} of 𝒩 into ⌊N/S⌋ groups of S parties, and (if N is not a multiple of S) a last group with the remaining N - ⌊N/S⌋S parties, and by taking the deterministic correlation

P(a⃗|x⃗) = ∏_ℓ δ_{a⃗_{𝒜_ℓ}, f(x⃗_{𝒜_ℓ})}

(with again the same function f as in Eq. (<ref>)). Since the (reachable) lower bounds in Eq. (<ref>) are different for all possible values of S, this implies, again, that all the inclusions size-1-causal ⊂ size-2-causal ⊂ ⋯ ⊂ size-(N-1)-causal in the hierarchy of size-S-causal polytopes are strict in general.

§.§ Comparing the polytopes of M-causal and size-S-causal correlations

A relation between M-causal and size-S-causal correlations can be established through the following, straightforwardly verifiable, inequalities:

|𝒦| - 1 + 𝔰(𝒦) ≤ N ≤ |𝒦| 𝔰(𝒦).

From these it follows that, for N parties:

* If a correlation is M-causal, then it is size-S-causal for all S ≥ N-M+1.
* If a correlation is size-S-causal, then it is M-causal for all M ≤ ⌈N/S⌉ (where ⌈x⌉ denotes the smallest integer larger than or equal to x).

It is furthermore possible to show that the inclusion relations between M-causal and size-S-causal polytopes implied by Theorem <ref> are complete, in the sense that no other inclusion exists that is not implied by the theorem. We prove this in Appendix <ref>. Together with the respective inclusion relations of each hierarchy separately, this result thus fully characterises the inclusion relations of all the classes of noncausal correlations that we introduced; the situation is illustrated in Fig. <ref> for the 6-partite case as an example.
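All of these bounds, and the two partition inequalities just stated, can be confirmed by direct enumeration. The sketch below is ours and purely illustrative (function names are not from the paper): it evaluates J_2(N) on the saturating product correlations described above, for both hierarchies, and checks the partition inequalities together with the two implications on every partition of a small party set.

    from itertools import combinations
    from math import ceil, comb

    def f(xs):
        # group response: copy the inputs iff exactly two of them equal 1
        return list(xs) if sum(xs) == 2 else [0] * len(xs)

    def j2(N, blocks):
        # J_2(N) for the deterministic correlation P = prod_l delta_{a_l, f(x_l)}
        def outputs(x):
            a = [0] * N
            for B in blocks:
                for k, v in zip(B, f([x[k] for k in B])):
                    a[k] = v
            return a
        def a_at(i, j, xi, xj):
            x = [0] * N
            x[i], x[j] = xi, xj
            return outputs(x)
        return sum((a_at(i, j, 1, 0)[i] == 1) + (a_at(i, j, 0, 1)[j] == 1)
                   - (a_at(i, j, 1, 1)[i] == 1 and a_at(i, j, 1, 1)[j] == 1)
                   for i, j in combinations(range(N), 2))

    N = 7
    for M in range(2, N + 1):    # M-causal: one block of N-M+1 parties + singletons
        blocks = [list(range(N - M + 1))] + [[k] for k in range(N - M + 1, N)]
        assert j2(N, blocks) == -comb(N - M + 1, 2)
    for S in range(1, N + 1):    # size-S: floor(N/S) blocks of S + remainder block
        blocks = [list(range(q * S, min((q + 1) * S, N))) for q in range(ceil(N / S))]
        assert j2(N, blocks) == -(N // S) * comb(S, 2) - comb(N % S, 2)

    def partitions(s):           # all set partitions, standard recursion
        if not s:
            yield []
            return
        first, rest = s[0], s[1:]
        for p in partitions(rest):
            for i in range(len(p)):
                yield p[:i] + [[first] + p[i]] + p[i + 1:]
            yield [[first]] + p

    for K in partitions(list(range(6))):   # both partition inequalities, N = 6
        m, s = len(K), max(map(len, K))
        assert m - 1 + s <= 6 <= m * s and s <= 6 - m + 1 and m >= ceil(6 / s)
    print("all saturation and partition checks passed")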
§ DISCUSSION The possibility that nature might allow for correlations incompatible with a definite causal order opens exciting questions.It has been suggested that such correlations might arise in the context of quantum theories of gravity <cit.> or in space-time geometries with closed time-like curves <cit.>, although these possibilities, like that of observing noncausal correlations in laboratory experiments, are as yet unverified.Motivated by the fact that noncausal resources exhibit interesting new features in multipartite scenarios <cit.>, we aimed here to clarify when noncausal correlations can be considered to be a genuinely multipartite resource.In addressing this task, we first proposed a criterion to decide whether a given correlation shared by N parties is “genuinely N-partite noncausal”—i.e., its noncausality is indeed a genuinely N-partite resource—or not. We then refined our approach into two distinct criteria quantifying the extent to which the noncausality of a correlation is a genuinely multipartite resource. Both criteria are based around asking whether the correlation under consideration is compatible with certain subsets being grouped together—which are thus able to share arbitrary noncausal resources—and with a well-defined causal order existing between these groups of parties. The first criterion is based on the largest number M of such subsets that can be causally ordered while reproducing the correlation in question: the smaller M, the more genuinely multipartite the noncausality exhibited by the correlation. If M=1, then no subset of parties has a well-defined causal relation with any other, and the correlation is genuinely N-partite noncausal. The second criterion instead looks at how large the subsets that can be causally ordered are: if an N-partite correlation can be reproduced with subsets containing no more than S ≤ N parties, then S-partite noncausal resources are sufficient to reproduce the correlation. Thus, the larger S required, the more genuinely multipartite the correlation. If S=N, then again the correlation is genuinely N-partite noncausal. Although these two criteria define different classes of correlations in general, they coincide on the edge cases and thus lead to exactly the same definition of genuinely N-partite noncausal correlations, adding support to the robustness of our definition. It nonetheless remains to be seen as to which measure of genuine multipartiteness is the most appropriate (or, in what situations one is more pertinent than the other). All the classes of correlations we introduced through these criteria conveniently form polytopes, whose vertices are deterministic correlations and whose facets define different classes of inequalities.Of particular interest are the “2-causal” correlations, which are the most general correlations that are not genuinely N-partite noncausal.We completely characterised the 2-causal polytope for the simplest nontrivial tripartite scenario and found that almost all of the 473 nontrivial classes of 2-causal inequalities can be violated by process matrix correlations.However, we were unable to find any violation for 2 of those inequalities; this stands in contrast to previous studies of causal inequalities, where violations with process matrices were always found[At least for standard causal inequalities that bound probabilities directly; for entropic causal inequalities, which only provide a relaxed characterisation of the set of causal correlations, no violations were found so far <cit.>. 
It would nevertheless also be interesting to investigate how genuinely multipartite noncausality can be characterised with the entropic approach.] <cit.>. Although it remains to be confirmed whether this is simply a failure of the search method we used, we provided some intuition why such a violation would in fact be a surprise.

Our definition of genuinely N-partite noncausality is analogous to the corresponding notion for nonlocality originally due to Svetlichny <cit.>. It is known, however, that that notion harbours some issues: for example, it is not robust under local operations, a necessary requirement for an operational resource theory of nonlocality <cit.>. In order to overcome these issues, additional constraints must be imposed on the correlations shared by subsets of parties when defining correlations that are not genuinely multipartite nonlocal. In the case of noncausality, however, there appears to be no clear reason to impose any additional such constraints. For nonlocal resources, issues arise in particular from the possibility that different parties might access the resource at different times, with an earlier party then communicating with a later one. This type of issue is not pertinent for noncausal resources, where the causal order (be it definite or indefinite) between parties is determined by the resource itself, and additional communication beyond what the resource specifies seems to fall outside the relevant framework. More generally, however, an operational framework and understanding of the relevant “free operations” for noncausal resources remains to be properly developed.

Finally, in this paper we only considered correlations from a fully theory- and device-independent perspective; it would be interesting to develop similar notions within specific physical theories like the process matrix framework, where quantum theory holds locally for each party. Process matrices that cannot be realised with a definite causal order are called causally nonseparable <cit.>, and it would be interesting to study a notion of genuinely multipartite causal nonseparability. It should, however, be noted that different possible notions of multipartite causal (non)separability have been proposed <cit.>, so a better understanding of their significance would be necessary in order to extend the notions we have developed here to that framework.

§ ACKNOWLEDGEMENTS

A.A.A., J.W. and C.B. acknowledge support through the 'Retour Post-Doctorants' program (ANR-13-PDOC-0026) of the French National Research Agency. F.C. acknowledges support through an Australian Research Council Discovery Early Career Researcher Award (DE170100712). This publication was made possible through the support of a grant from the John Templeton Foundation. The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of the John Templeton Foundation. F.C. acknowledges the traditional owners of the land on which the University of Queensland is situated, the Turrbal and Jagera people.

§ PROOF OF THE GENERALISED 2-CAUSAL INEQUALITIES AND THEIR BOUNDS

§.§ Proof of inequality (<ref>)

To prove that Eq. (<ref>) is a valid 2-causal inequality for all N, it suffices to show that it holds for all deterministic 2-causal correlations. For a nonempty strict subset 𝒜 of 𝒩, let P^det(a⃗|x⃗) = P^det(a⃗_𝒜|x⃗_𝒜) P^det_{x⃗_𝒜,a⃗_𝒜}(a⃗_{𝒩∖𝒜}|x⃗_{𝒩∖𝒜}) be an arbitrary deterministic correlation compatible with the causal order 𝒜 ≺ 𝒩∖𝒜.
Then, since P^det(a⃗_𝒜|x⃗) = P^det(a⃗_𝒜|x⃗_𝒜), it follows that

P^det(a⃗_𝒜 = 1⃗ | x⃗_𝒜 = 1⃗, x⃗_{𝒩∖𝒜} = 0⃗) = P^det(a⃗_𝒜 = 1⃗ | x⃗ = 1⃗) ≥ P^det(a⃗ = 1⃗ | x⃗ = 1⃗),

and hence P^det(a⃗_𝒜 = 1⃗ | x⃗_𝒜 = 1⃗, x⃗_{𝒩∖𝒜} = 0⃗) - P^det(a⃗ = 1⃗ | x⃗ = 1⃗) ≥ 0. Since J_1(N) is then obtained by adding some more nonnegative terms P^det(a⃗_{𝒜'} = 1⃗ | ⋯) ≥ 0, this proves the validity of Eq. (<ref>) for any 2-causal correlation.

§.§ Proof of inequalities (<ref>), (<ref>) and (<ref>) for M-causal and size-S-causal correlations

The M-causal inequality (<ref>) and the size-S-causal inequality (<ref>) are defined as different bounds for the expression J_2(N) = ∑_{{i,j}⊂𝒩} L_N(i,j), with the summands defined in Eq. (<ref>), while the 2-causal inequality (<ref>) coincides with the particular cases M=2 and S=N-1. We shall first prove a bound for J_2(N) that holds for 𝒦-causal correlations, for any partition 𝒦, and then use this bound to derive the corresponding M-causal, size-S-causal (and consequently the 2-causal) bounds.

Firstly, let us note that the observation made at the end of Sec. <ref>—that the response function determining the outputs of a deterministic 𝒦-causal correlation can be seen as processing deterministically one input after another, and consequently defining a (dynamical) causal order between the subsets in 𝒦—also implies the following result (which will be used below and in the subsequent appendices):

For a deterministic 𝒦-causal correlation P, given two subsets 𝒜_ℓ and 𝒜_m in 𝒦, the vector of inputs x⃗_{𝒩∖(𝒜_ℓ∪𝒜_m)} for the parties that are neither in 𝒜_ℓ nor in 𝒜_m determines a (not necessarily unique) causal order between 𝒜_ℓ and 𝒜_m, 𝒜_ℓ ≺ 𝒜_m or 𝒜_m ≺ 𝒜_ℓ. More technically: for any x⃗_{∖ℓm} := x⃗_{𝒩∖(𝒜_ℓ∪𝒜_m)}, a deterministic 𝒦-causal correlation P satisfies either P(a⃗_{𝒜_ℓ}|x⃗_{∖ℓm}, x⃗_{𝒜_ℓ}, x⃗_{𝒜_m}) = P(a⃗_{𝒜_ℓ}|x⃗_{∖ℓm}, x⃗_{𝒜_ℓ}, x⃗'_{𝒜_m}) for all x⃗_{𝒜_ℓ}, x⃗_{𝒜_m}, x⃗'_{𝒜_m}, a⃗_{𝒜_ℓ}, or P(a⃗_{𝒜_m}|x⃗_{∖ℓm}, x⃗_{𝒜_ℓ}, x⃗_{𝒜_m}) = P(a⃗_{𝒜_m}|x⃗_{∖ℓm}, x⃗'_{𝒜_ℓ}, x⃗_{𝒜_m}) for all x⃗_{𝒜_m}, x⃗_{𝒜_ℓ}, x⃗'_{𝒜_ℓ}, a⃗_{𝒜_m}—i.e., in short, either P(a⃗_{𝒜_ℓ}|x⃗) = P(a⃗_{𝒜_ℓ}|x⃗_{𝒩∖𝒜_m}) or P(a⃗_{𝒜_m}|x⃗) = P(a⃗_{𝒜_m}|x⃗_{𝒩∖𝒜_ℓ}).

To derive a 𝒦-causal bound for J_2(N) for a given partition 𝒦 = {𝒜_1,…,𝒜_|𝒦|}, it is sufficient to find a bound that holds for any deterministic 𝒦-causal correlation P. We will bound J_2(N) by bounding each individual term L_N(i,j) in the sum. There are two cases to be considered: whether i) the parties A_i and A_j are in different subsets of 𝒦, i.e. i∈𝒜_ℓ, j∈𝒜_m with ℓ ≠ m; or ii) both parties are in the same subset: i,j∈𝒜_ℓ.

i) According to Proposition <ref>, the inputs x⃗_{𝒩∖(𝒜_ℓ∪𝒜_m)} = 0⃗ imply either the order 𝒜_ℓ ≺ 𝒜_m, or 𝒜_m ≺ 𝒜_ℓ for P. In the first case, P(a_i=1|x_i x_j=10, x⃗_{𝒩∖{i,j}}=0⃗) = P(a_i=1|x_i x_j=11, x⃗_{𝒩∖{i,j}}=0⃗), which implies P(a_i=1|x_i x_j=10, x⃗_{𝒩∖{i,j}}=0⃗) - P(a_i a_j=11|x_i x_j=11, x⃗_{𝒩∖{i,j}}=0⃗) ≥ 0, and therefore (after adding a nonnegative term) L_N(i,j) ≥ 0. An analogous argument shows that L_N(i,j) ≥ 0 also in the case that one has 𝒜_m ≺ 𝒜_ℓ for P when x⃗_{𝒩∖(𝒜_ℓ∪𝒜_m)} = 0⃗.

ii) If the parties A_i and A_j belong to the same subset 𝒜_ℓ, they can share arbitrary correlations and thus win the (conditional) LGYNI game perfectly. In that case we have L_N(i,j) ≥ -1, which is the minimum algebraic bound.

Combining the two cases, we thus have, for any 𝒦-causal correlation,

J_2(N) = ∑_{{i,j}⊂𝒩} L_N(i,j) ≥ ∑_{{i,j}⊂𝒜_ℓ for some 𝒜_ℓ∈𝒦} (-1) = -∑_{𝒜_ℓ∈𝒦} \binom{|𝒜_ℓ|}{2} =: L(𝒦).

In order to prove the M-causal bound (<ref>), we shall now prove that among all partitions 𝒦 containing a fixed number 𝔪 of subsets, the quantity L(𝒦) defined above is minimal when 𝒦 consists of 𝔪-1 singletons, and one subset containing the remaining N-𝔪+1 parties.
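Before the combinatorial argument below, both minimisation claims of this appendix can be confirmed by exhaustive search over all set partitions for small N; the following sketch is ours and not part of the proof.

    from math import comb

    def partitions(s):   # all set partitions of the list s, standard recursion
        if not s:
            yield []
            return
        first, rest = s[0], s[1:]
        for p in partitions(rest):
            for i in range(len(p)):
                yield p[:i] + [[first] + p[i]] + p[i + 1:]
            yield [[first]] + p

    N = 8
    L = lambda K: -sum(comb(len(B), 2) for B in K)   # L(K) as defined above
    by_m, by_S = {}, {}
    for K in partitions(list(range(N))):
        m, s = len(K), max(map(len, K))
        by_m[m] = min(by_m.get(m, 0), L(K))
        for S in range(s, N + 1):    # K is admissible for every S >= s(K)
            by_S[S] = min(by_S.get(S, 0), L(K))
    for m in range(1, N + 1):        # minimiser: m-1 singletons + one big block
        assert by_m[m] == -comb(N - m + 1, 2)
    for S in range(1, N + 1):        # minimiser: floor(N/S) blocks of S + remainder
        assert by_S[S] == -(N // S) * comb(S, 2) - comb(N % S, 2)
    print("minimisers confirmed for N =", N)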
Assume for the sake of contradiction that this is not the case, so that the minimum is obtained for a partition 𝒦 that contains at least two subsets 𝒜_ℓ and 𝒜_m that are not singletons, for which we assume |𝒜_ℓ| ≥ |𝒜_m| (≥ 2). Let then k ∈ 𝒜_m, and define the partition 𝒦' obtained from 𝒦 by replacing 𝒜_ℓ and 𝒜_m by 𝒜'_ℓ = 𝒜_ℓ ∪ {k} and 𝒜'_m = 𝒜_m ∖ {k}, respectively (note that the assumption that |𝒜_m| ≥ 2 ensures that 𝒜'_m remains nonempty). One then has

L(𝒦') - L(𝒦) = -[ \binom{|𝒜'_ℓ|}{2} + \binom{|𝒜'_m|}{2} ] + [ \binom{|𝒜_ℓ|}{2} + \binom{|𝒜_m|}{2} ] = -|𝒜_ℓ| + |𝒜_m| - 1 < 0,

in contradiction with the assumption that 𝒦 minimised L. For a given N it then follows that

min_{𝒦: |𝒦|=𝔪} L(𝒦) = -\binom{N-𝔪+1}{2},

and therefore, from Eq. (<ref>),

J_2(N) ≥ -\binom{N-|𝒦|+1}{2}.

Finally, we note that |𝒦| ≥ M implies -\binom{N-|𝒦|+1}{2} ≥ -\binom{N-M+1}{2}, which concludes the proof that Eq. (<ref>) holds for all M-causal correlations.

In order now to prove the bound (<ref>) for size-S-causal correlations, we show that among all partitions 𝒦 with 𝔰(𝒦) ≤ S, L(𝒦) from Eq. (<ref>) is minimised for the partition containing ⌊N/S⌋ groups of S parties, and (if N is not a multiple of S) a last group with the remaining N - ⌊N/S⌋S parties—for which L(𝒦) is indeed equal to the right-hand side of Eq. (<ref>). Assume again for the sake of contradiction that this is not the case, so that the minimum is obtained for a partition 𝒦 containing at least two subsets 𝒜_ℓ and 𝒜_m of less than S parties, for which we take |𝒜_m| ≤ |𝒜_ℓ| < S. If |𝒜_m|>1, one can follow the same reasoning as in the proof of the M-causal bound above: take k∈𝒜_m and consider the partition 𝒦' obtained by replacing 𝒜_ℓ and 𝒜_m by 𝒜'_ℓ = 𝒜_ℓ∪{k} and 𝒜'_m = 𝒜_m∖{k}, respectively. Note that since we assumed |𝒜_ℓ|<S, we have |𝒜'_ℓ| ≤ S and 𝔰(𝒦') ≤ S. Eq. (<ref>) then holds again, in contradiction with the assumption that 𝒦 minimised L. In the case when |𝒜_m|=1, consider instead the partition 𝒦' formed by merging 𝒜_ℓ and 𝒜_m into a new subset 𝒜'_ℓ = 𝒜_ℓ∪𝒜_m (so that |𝒜'_ℓ| = |𝒜_ℓ|+1 ≤ S and we still have 𝔰(𝒦') ≤ S). We then have

L(𝒦') - L(𝒦) = -\binom{|𝒜'_ℓ|}{2} + \binom{|𝒜_ℓ|}{2} = -|𝒜_ℓ| < 0,

again in contradiction with the assumption that 𝒦 minimised L, which concludes the proof that Eq. (<ref>) holds for all size-S-causal correlations.

§ SEPARATION OF 𝒦-CAUSAL POLYTOPES

In this appendix we shall prove Theorem <ref>, which states that there are no nontrivial inclusions among 𝒦-causal polytopes. Before that, we start by introducing a useful family of deterministic 𝒦-causal correlations.

§.§ A family of deterministic 𝒦-causal correlations

The N-partite correlations P_{𝒦,σ}^det we introduce here are defined for a given partition 𝒦 = {𝒜_1, …, 𝒜_|𝒦|} of 𝒩 and a given permutation σ of its |𝒦| elements. We consider the lazy scenario, where each party has a binary input x_k = 0,1 with a fixed output a_k = 0 for x_k = 0, and a binary output a_k = 0,1 for x_k = 1. For each subset 𝒜 ∈ 𝒦 and a vector of inputs x⃗, we define the bit

z_𝒜 := ∏_{k∈𝒜} x_k.

We then define the deterministic response function α⃗_{𝒦,σ} such that, for each party A_k belonging to a subset 𝒜_ℓ of 𝒦, we have

(α⃗_{𝒦,σ}(x⃗))_k := ∏_{m: σ(m) ≤ σ(ℓ)} z_{𝒜_m}.

The correlation P_{𝒦,σ}^det is then defined as

P_{𝒦,σ}^det(a⃗|x⃗) := δ_{a⃗, α⃗_{𝒦,σ}(x⃗)}.
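In code, this response function and the resulting deterministic correlations are straightforward to realise. The sketch below is ours and illustrative only (the helper names are not from the paper); it also re-derives the value -1/3 quoted earlier, in the footnote of Sec. <ref>, for the uniform mixture of the six fully causal correlations P_{𝒦,σ}^det.

    from itertools import permutations
    from math import prod

    def alpha(blocks, order, x):
        # response function of P^det_{K,sigma}: every party in block A_l outputs
        # the product of the block inputs z_A over all blocks up to and including
        # A_l in the causal order given by `order` (a tuple of block indices)
        z = [prod(x[k] for k in B) for B in blocks]
        pos = {b: t for t, b in enumerate(order)}
        a = [0] * len(x)
        for l, B in enumerate(blocks):
            val = prod(z[order[t]] for t in range(pos[l] + 1))
            for k in B:
                a[k] = val
        return tuple(a)

    # tripartite mixture from the footnote: three singleton blocks,
    # uniform mixture over the six causal orders
    blocks = [[0], [1], [2]]
    orders = list(permutations(range(3)))
    P = lambda a, x: sum(alpha(blocks, o, x) == a for o in orders) / len(orders)

    lhs = (sum(P((1, b, c), (1, 0, 0)) for b in (0, 1) for c in (0, 1))  # P_A1(1|100)
           + sum(P((a, 1, 1), (0, 1, 1)) for a in (0, 1))                # P_A2A3(11|011)
           - P((1, 1, 1), (1, 1, 1)))                                    # P_A1A2A3(111|111)
    print(lhs)   # -0.3333..., violating the causal bound 0 down to -1/3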
In other words, each party A_k in some subset 𝒜_ℓ ∈ 𝒦 outputs the product of the inputs of all parties that came before itself according to the partition 𝒦 and the causal order 𝒜_{σ(1)} ≺ 𝒜_{σ(2)} ≺ ⋯ ≺ 𝒜_{σ(|𝒦|)} defined by the permutation σ, including all parties in the same subset 𝒜_ℓ. Clearly the correlation P_{𝒦,σ}^det is compatible with this fixed causal order, and is therefore 𝒦-causal; as it is deterministic, it corresponds to a vertex of the 𝒦-causal polytope. Note that each party outputs a_k = 0 whenever x_k = 0, as required in the lazy scenario. The correlations P_{𝒦,σ}^det can also straightforwardly be generalised to more complex scenarios with more inputs and outputs, by simply never outputting the other possible outputs, and, e.g., always outputting 0 for any other possible input. Hence, the proofs below, which use P_{𝒦,σ}^det as an explicit example, apply to any scenario where each party has at least two possible inputs, and at least two possible outputs for one of their inputs.

§.§ Proof of Theorem <ref>

Coming back to the theorem, we shall prove that the 𝒦-causal polytope is not contained in the 𝒦'-causal one by exhibiting a 𝒦-causal correlation (from the family introduced above) that is not 𝒦'-causal. The proof that the 𝒦'-causal polytope is not contained in the 𝒦-causal one then follows by symmetry. We distinguish two cases: whether i) 𝒦' is a coarse-graining of 𝒦 or ii) 𝒦' is not a coarse-graining of 𝒦.

i) If 𝒦' (with |𝒦'|>1) is a coarse-graining of 𝒦, then one can find two subsets 𝒜_ℓ and 𝒜_ℓ' in 𝒦 that are grouped together in some subset 𝒜''_{ℓℓ'} in 𝒦', and a third subset 𝒜_m in 𝒦 that is contained in a different subset 𝒜'_m of 𝒦'. Let σ be a permutation of 𝒦 which defines a causal order between its elements such that 𝒜_ℓ ≺ 𝒜_m ≺ 𝒜_ℓ'. The correlation P_{𝒦,σ}^det as defined in Eq. (<ref>) is then 𝒦-causal, but not 𝒦'-causal. Intuitively, this is because we cannot order 𝒜''_{ℓℓ'} (in which 𝒜_ℓ and 𝒜_ℓ' are grouped together) against 𝒜'_m (which contains 𝒜_m). More specifically, for x⃗_{𝒩∖(𝒜_ℓ∪𝒜_ℓ'∪𝒜_m)} = 1⃗ (so that in particular, x⃗_{𝒩∖(𝒜''_{ℓℓ'}∪𝒜'_m)} = 1⃗), the response function α⃗_{𝒦,σ} defined in Eq. (<ref>) gives a_k = (α⃗_{𝒦,σ}(x⃗))_k = z_{𝒜_ℓ} z_{𝒜_m} z_{𝒜_ℓ'} if k ∈ 𝒜_ℓ' and a_k = (α⃗_{𝒦,σ}(x⃗))_k = z_{𝒜_ℓ} z_{𝒜_m} if k ∈ 𝒜_m. Hence, P(a⃗_{𝒜''_{ℓℓ'}}|x⃗) depends nontrivially on x⃗_{𝒜'_m} (via z_{𝒜_m}) while P(a⃗_{𝒜'_m}|x⃗) depends nontrivially on x⃗_{𝒜''_{ℓℓ'}} (via z_{𝒜_ℓ}). According to Proposition <ref>, this implies that P_{𝒦,σ}^det indeed cannot be 𝒦'-causal.

ii) If 𝒦' is not a coarse-graining of 𝒦, then one can find two parties A_i, A_j that belong to the same subset 𝒜_{ij} of 𝒦, but belong to two distinct subsets of 𝒦', i.e. A_i ∈ 𝒜'_i, A_j ∈ 𝒜'_j. Let now σ be any permutation of 𝒦. The correlation P_{𝒦,σ}^det as defined in Eq. (<ref>) is then 𝒦-causal, but not 𝒦'-causal. Intuitively, this is because the parties A_i and A_j cannot be separated in the definition of P_{𝒦,σ}^det. More specifically, for x⃗_{𝒩∖{i,j}} = 1⃗ (so that in particular, x⃗_{𝒩∖(𝒜'_i∪𝒜'_j)} = 1⃗), the response function α⃗_{𝒦,σ} gives a_k = (α⃗_{𝒦,σ}(x⃗))_k = z_{𝒜_{ij}} = x_i x_j for both k=i and k=j. Hence, P(a⃗_{𝒜'_i}|x⃗) depends nontrivially on x⃗_{𝒜'_j} (via x_j) while P(a⃗_{𝒜'_j}|x⃗) depends nontrivially on x⃗_{𝒜'_i} (via x_i). According to Proposition <ref>, this implies that P_{𝒦,σ}^det indeed cannot be 𝒦'-causal.

§ A 4-PARTITE FULLY CAUSAL CORRELATION WITH DYNAMICAL ORDER THAT IS NOT A CONVEX MIXTURE OF 𝒦_j'-CAUSAL CORRELATIONS WITH |𝒦_j'| = 3

We provide here an explicit counterexample to the question raised at the end of Sec.
<ref>, of whether a -causal correlation can always be written as a convex combination of _j'-causal correlations for various partitions _j' with a fixed number of subsets |_j'| = 𝔪' < ||.As we noted, such a counterexample requires 𝔪' ≥ 3 (and hence || ≥ 4), as well as a dynamical causal order. Consider thus a 4-partite scenario, with parties A, B_1, B_2 and B_3.Party A receives as input a 6-valued variable x (and has no output);A's input determines the causal order of the three subsequent parties B_k (see Fig. <ref>), with each possible value of x corresponding to one of the six possible permutations, denoted by σ_x. For parties B_k we consider the lazy scenario, with inputs y_k ∈{0,1} and outputs b_k = 0if y_k=0, b_k ∈{0,1} if y_k=1.We then define the deterministic correlation P^det by the response functionsb_σ_x(1)= 0,b_σ_x(2)= y_σ_x(1)y_σ_x(2),b_σ_x(3)= y_σ_x(2)y_σ_x(3). While the correlation P^det thus obtained is fully causal (i.e., it is -causal for the “full partition”such that ||=N=4), it is not '-causal for any 3-subset partition ' of {A,B_1,B_2,B_3}—which also implies, since P^det is deterministic, that it is not decomposable as a convex combination of '-causal correlations for various 3-subset partitions ' either. Indeed, such a ' would contain (2 singletons and) a pair of parties grouped together, {A,B_i} or {B_i,B_j}.Consider the first case first: as P^det is deterministic, and the outputs of all parties B_k depend on x, any '-causal correlation should be compatible with the subset {A,B_i} being first, with therefore b_i independent of y_k for k≠ i; this, however, cannot be because, for every i=1,2,3, we can find x such that i=σ_x(2), so that b_i= y_σ_x(1) y_i, which depends on y_k with k = σ_x(1)≠ i.In the second case where ' = {{A},{B_i,B_j},{B_k}}, according to Proposition <ref>, a deterministic '-causal correlation must be such that for each given value of x one must either have that b_i and b_j should be independent of y_k, or that b_k is independent of y_i and y_j; this is however not satisfied for the value of x such that σ_x(1)=i, σ_x(2)=k, σ_x(3)=j.In short, for any pair of parties there exists some input x of party A for which a third party must act in between the said pair, so that this pair of parties cannot be causally ordered with the other two (singletons of) parties.This shows that the correlation P^det defined above is indeed not '-causal for any 3-subset partition '—and as said above, being deterministic, it is not a convex mixture of '-causal partitions for various such partitions ' either. § PROOF OF COMPLETENESS OF THEOREM <REF> In order to prove that Theorem <ref> completely characterises the possible inclusions between M-causal and size-S-causal polytopes, we first prove the following lemma regarding non-inclusions between -causal polytopes (which is perhaps of interest in and of itself).Given a partitionand a set of partitions {_1',…,_r'}, none of which is a coarse-graining of , the convex hull of the _j'-causal polytopes, j=1,…,r, does not contain the -causal polytope. It suffices here to show that, if no partition among _1',…,_r' is a coarse-graining of a partition , it is possible to find a deterministic -causal correlation that is not _j'-causal for any j=1,…,r. The given correlation being deterministic, this will indeed imply that it is also not a convex mixture of _j'-causal correlations. We can again take the correlation P_,σ^det defined in Eq. (<ref>), for any choice of the permutation σ. 
Recall that for this correlation the output of each party depends nontrivially on the inputs of all parties in the same subset. As already established for case ii) in Appendix <ref>, no such correlation is 𝒦_j'-causal for any partition 𝒦_j' that is not a coarse-graining of 𝒦, which proves the result.

Note that the assumption that none of the partitions 𝒦_j' is a coarse-graining of 𝒦 is crucial in the above proof, and the conclusion of the theorem does not necessarily hold otherwise: as noted in Sec. <ref> already, Eq. (<ref>) indeed shows that a 𝒦-causal correlation, with 𝒦 = {𝒜_ℓ}_ℓ, can be written as a convex combination of 𝒦_ℓ'-causal correlations, with the partitions 𝒦_ℓ' = {𝒜_ℓ, 𝒩∖𝒜_ℓ} being coarse-grainings of 𝒦.

Let us now prove the completeness of Theorem <ref>. To this end, let us consider first a partition 𝒦 with |𝒦| = M that consists of M-1 singletons and an (N-M+1)-partite subset. Such a partition saturates the first inequality in Eq. (<ref>), i.e., it satisfies 𝔰(𝒦) = N-M+1. Let us then take S < N-M+1. The size-S-causal polytope is, by definition, the convex hull of all 𝒦_j'-causal polytopes for all partitions 𝒦_j' with 𝔰(𝒦_j') ≤ S. None of these partitions can be a coarse-graining of 𝒦, as this would imply (since coarse-graining can only increase the size of the largest subset in a partition) 𝔰(𝒦_j') ≥ 𝔰(𝒦) = N-M+1 > S, in contradiction with 𝔰(𝒦_j') ≤ S. But then Lemma <ref> shows that the 𝒦-causal polytope is not contained in the size-S-causal polytope, and (since |𝒦| = M) this thus implies that the M-causal polytope is not contained in the size-S-causal polytope.

Similarly, consider a partition 𝒦 with 𝔰(𝒦) = S, that consists of ⌊N/S⌋ groups of S parties and, if N is not a multiple of S, a final group containing the remaining N - ⌊N/S⌋S parties. Such a partition thus contains |𝒦| = ⌈N/S⌉ subsets. Let us now take M > ⌈N/S⌉. The M-causal polytope is, again by definition, the convex hull of all 𝒦_j'-causal polytopes for all partitions 𝒦_j' with |𝒦_j'| ≥ M. None of these partitions can be a coarse-graining of 𝒦, as this would imply (since coarse-graining can only decrease the number of subsets in a partition) |𝒦_j'| ≤ |𝒦| = ⌈N/S⌉ < M, in contradiction with |𝒦_j'| ≥ M. Lemma <ref> then again shows that the 𝒦-causal polytope is not contained in the M-causal polytope, and (since 𝔰(𝒦) = S) this then implies that the size-S-causal polytope is not contained in the M-causal polytope, which completes the proof.

Finally, let us also note that since no partition 𝒦' with |𝒦'| ≥ M' > M is a coarse-graining of any partition 𝒦 with |𝒦| = M, and since no partition 𝒦' with 𝔰(𝒦') ≤ S' < S is a coarse-graining of any partition 𝒦 with 𝔰(𝒦) = S, invoking Lemma <ref> also provides a proof (as an alternative to our use of the families of M-causal and size-S-causal inequalities (<ref>) and (<ref>) before) that all inclusions among M-causal and among size-S-causal polytopes are strict.
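As a last numerical cross-check, the dynamical-order counterexample of Appendix <ref> can also be verified directly. The sketch below is ours and illustrative only; it confirms that for every pair {B_i, B_j} there is an input x of A forcing the third party B_k to act in between, and that each b_i depends on x itself as well as on inputs of other parties.

    from itertools import permutations, product

    perms = list(permutations((0, 1, 2)))   # sigma_x, one permutation per x = 0..5

    def b_out(x, y):
        # outputs of B_1, B_2, B_3 given A's input x and the lazy inputs y
        s = perms[x]
        b = [0, 0, 0]
        b[s[1]] = y[s[0]] * y[s[1]]
        b[s[2]] = y[s[1]] * y[s[2]]
        return tuple(b)

    def depends(which, on, x):
        # do the outputs b[w], w in `which`, depend on the inputs y[o], o in `on`?
        for y in product((0, 1), repeat=3):
            for y2 in product((0, 1), repeat=3):
                if all(y[i] == y2[i] for i in range(3) if i not in on):
                    if any(b_out(x, y)[w] != b_out(x, y2)[w] for w in which):
                        return True
        return False

    for i, j in ((0, 1), (0, 2), (1, 2)):
        k = 3 - i - j
        bad = [x for x in range(6)
               if depends([i, j], [k], x) and depends([k], [i, j], x)]
        print(f"pair (B{i+1},B{j+1}): inputs x forcing B{k+1} in between:", bad)

    for i in range(3):   # each b_i also depends on A's input x itself
        dep_x = any(b_out(x, y)[i] != b_out(0, y)[i]
                    for x in range(6) for y in product((0, 1), repeat=3))
        print(f"b_{i+1} depends on x:", dep_x)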
[Source: arXiv:1708.07663v2, “Genuinely Multipartite Noncausality”, Alastair A. Abbott, Julian Wechs, Fabio Costa and Cyril Branciard (quant-ph, 2017).]
– Dedicated to Jürgen Scheurle in gratitude and friendship –

Stabilized rapid oscillations in a delay equation: Feedback control by a small resonant delay

Bernold Fiedler* and Isabelle Schneider*

version of December 30, 2023

*Institut für Mathematik, Freie Universität Berlin, Arnimallee 3/7, 14195 Berlin, Germany

We study scalar delay equations

ẋ(t) = λ f(x(t-1)) + b^-1 (x(t) + x(t-p/2))

with odd nonlinearity f, real nonzero parameters λ, b, and two positive time delays 1, p/2. We assume supercritical Hopf bifurcation from x ≡ 0 in the well-understood single-delay case b = ∞. Normalizing f'(0) = 1, branches of constant minimal period p_k = 2π/ω_k are known to bifurcate from eigenvalues iω_k = i(k+1/2)π at λ_k = (-1)^{k+1} ω_k, for any nonnegative integer k. The unstable dimension of these rapidly oscillating periodic solutions is k, at the local branch k. We obtain stabilization of such branches, for arbitrarily large unstable dimension k, and for, necessarily, delicately narrow regions 𝒫 of scalar control amplitudes b < 0.

For p := p_k the branch k of constant period p_k persists as a solution, for any b ≠ 0. Indeed the delayed feedback term controlled by b vanishes on branch k: the feedback control is noninvasive there. Following an idea of <cit.>, we seek parameter regions 𝒫 = (b̲_k, b̄_k) of controls b ≠ 0 such that the branch k becomes stable, locally at Hopf bifurcation. We determine rigorous expansions for 𝒫 in the limit of large k. Our analysis is based on a 2-scale covering lift for the slow and rapid frequencies involved. These results complement earlier results by <cit.> which required control terms b^-1 (x(t-ϑ) + x(t-ϑ-p/2)) with a third delay ϑ near 1.

§ INTRODUCTION AND MAIN RESULT

In an ODE setting, delayed feedback control is frequently studied for systems like

𝐱̇(t) = 𝐅(𝐱(t)) + β (𝐱(t) - 𝐱(t-τ))

with 𝐱 ∈ ℝ^N, smooth nonlinearities 𝐅, and suitable N × N matrices β mediating the feedback. If the uncontrolled system β = 0 possesses a periodic orbit 𝐱_*(t) of (not necessarily minimal) period p > 0, then 𝐱_*(t) remains a solution of (<ref>) for time delays τ = p and any control matrix β. In this sense, the delayed feedback control is noninvasive on 𝐱_*(t). The linearized and nonlinear stability or instability of 𝐱_*(t), however, may well be affected by the control term β.

The above idea was first proposed by Pyragas, see <cit.>. It has gained significant popularity in the applied literature since then, with currently around 3000 publications listed. See <cit.> and <cit.> for more recent surveys. In fact, no previous knowledge of the nonlinearity 𝐅 is required to attempt this procedure, or any of its many variants. One fundamental disadvantage of the Pyragas method (<ref>), from a theoretical perspective, is the replacement of the ODE 𝐱̇ = 𝐅(𝐱) in finite-dimensional phase space ℝ^N by the infinite-dimensional dynamical system (<ref>) in a history phase space like 𝐱(t+·) ∈ C^0([-τ,0], ℝ^N). On the other hand, the very existence of a periodic solution 𝐱(t), for vanishing control β = 0, requires N ≥ 2.

In <cit.> we therefore started to explore the Pyragas method of delayed feedback control, in a slightly modified form, for the very simplistic scalar case

ẋ(t) = λ f(x(t-1)) + b^-1 (x(t-ϑ) + x(t-ϑ-p/2)).

We consider nonzero real parameters λ, b and positive delays ϑ, p/2.
The case b = 0 will only appear as a formal limit β = ±∞ of infinite feedback amplitudes. The identical cases b = ±∞ account for vanishing feedback β = 0 and correspond to the scalar pure delay equation

ẋ(t) = λ f(x(t-1))

with |λ| normalizing the remaining delay τ to unity. See <cit.> for an early analysis of a specific equation of this type, equivalent to the delayed logistic equation.

Throughout the paper we assume f ∈ C^3 to be odd, with normalized first derivative at f(0) = 0:

f(-x) = -f(x), f'(0) = 1, f'''(0) < 0.

The characteristic equation for complex eigenvalues μ of the linearization of (<ref>) at parameter λ and the trivial equilibrium x ≡ 0 then reads

μ = λ e^-μ + b^-1 (e^-ϑμ + e^-(ϑ+p/2)μ).

See <cit.> for an analysis of odd periodic solutions x_k(t) of the pure delay equation (<ref>) with constant minimal period

p_k := 2π/ω_k, ω_k := (k+1/2)π.

The periodic solutions originate by Hopf bifurcation from imaginary eigenvalues ±iω_k at x = 0 for parameters

λ = λ_k := (-1)^{k+1} ω_k.

Here k ∈ ℕ_0 is any nonnegative integer. For k > 0, these periodic solutions are called rapidly oscillating because their minimal period p_k is at most 4/3. Slowly oscillating periodic solutions, in contrast, have minimal periods exceeding 2. For example, p_0 = 4. See <cit.> for a survey of related results. In particular see <cit.> for secondary bifurcations from these primary branches.

The case ẋ(t) = g(x(t), x(t-1)) of the general scalar delay equation with a single time lag has attracted considerable attention; see for example <cit.> and the many references there. After early observations by Myshkis <cit.>, Mallet-Paret <cit.> has discovered a discrete Lyapunov functional for g with monotone delayed feedback. The global consequences of this additional structure are enormous; see <cit.>. For example, all rapidly oscillating periodic solutions are known to be unstable. More recent developments study this scalar equation with state-dependent delays, where the time delay 1 is not constant but depends on the history x(t+·) of the solution itself; see for example <cit.>. An excellent survey article on the above developments for general scalar delay equations with a single delay is <cit.>.

But let us return to the simple setting (<ref>) – (<ref>) of a pure delay equation. Pioneering analysis by <cit.> reduces the quest for periodic solutions near all Hopf bifurcations (<ref>) to a planar Hamiltonian ODE system. This is due to an odd-symmetry

x_k(t+p_k/2) = -x_k(t)

at half minimal period p_k, for all real t. See also <cit.> for complete details, and <cit.> for a survey on the Kaplan-Yorke idea. Remarkably, global solution branches of constant minimal period p_k emanate from each λ = λ_k towards λ of larger absolute value, in the soft spring case of strictly decreasing secant slopes x ↦ f(x)/x, for x > 0. In particular all Hopf bifurcations are locally nondegenerate and quadratically supercritical under the sign assumption f'''(0) < 0 of (<ref>). See fig. <ref> for a bifurcation diagram.

At supercritical Hopf bifurcation it is easy to determine the unstable dimension E, i.e. the total algebraic multiplicity of Floquet multipliers outside the complex unit circle, for the emanating local branch of periodic orbits. It coincides with the total algebraic multiplicity

E = E(λ_k) = k

of the eigenvalues μ with strictly positive real part for the characteristic equation (<ref>) at the Hopf point λ = λ_k. See for example <cit.>.
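The Hopf data above are easy to verify numerically. The following small sketch is ours and purely illustrative; it confirms that μ = iω_k solves the characteristic equation μ = λ e^-μ of the uncontrolled pure delay equation at λ = λ_k, and lists the bifurcating periods p_k = 4/(2k+1).

    import cmath
    import math

    for k in range(6):
        omega = (k + 0.5) * math.pi                  # omega_k = (k + 1/2) * pi
        lam = (-1) ** (k + 1) * omega                # lambda_k = (-1)^{k+1} omega_k
        residual = 1j * omega - lam * cmath.exp(-1j * omega)
        print(f"k={k}: lambda_k={lam:+9.4f}, p_k={2 * math.pi / omega:.4f}, "
              f"|i*omega_k - lambda_k*exp(-i*omega_k)| = {abs(residual):.1e}")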
Henceforth we skip the trivial case k = 0, which leads to the bifurcation of stable, slowly oscillating solutions. Let k ∈ ℕ be any strictly positive integer. The local (and global) Hopf branches (λ, x_k(t)) which bifurcate at λ = λ_k, x = 0 inherit constant period p = p_k and odd-symmetry (<ref>). Therefore our modified Pyragas control scheme (<ref>) with p := p_k is noninvasive on the Hopf branches with this symmetry. For earlier applications of Pyragas control at half minimal periods in the presence of some involutive symmetry, although not in our present delay context, we refer to <cit.>. It is our main objective to stabilize all bifurcating periodic orbits, for arbitrarily large unstable dimensions k ∈ ℕ, by suitable Pyragas controls (<ref>). For odd k, in particular, this again refutes the purported “odd number limitation” of Pyragas control <cit.>.

We define a Pyragas region 𝒫 to be a connected component of real control parameters b ≠ 0 and ϑ ≥ 0 for which the periodic solutions x_k(t) emanating by local Hopf bifurcation from λ = λ_k, x ≡ 0 become linearly asymptotically stable for sufficiently small amplitudes. Therefore, the boundaries of Pyragas regions are either certain curves where zero eigenvalues μ = 0 occur, or else are Hopf curves characterized by purely imaginary eigenvalues μ = iω̃ of the characteristic equation (<ref>). With this definition we can now formulate the main result of the previous paper <cit.>. See fig. <ref> for an illustration of Hopf curves in the cases k = 9 and k = 10.

<cit.> Consider the system (<ref>) of delayed feedback control for the scalar pure delay equation (<ref>). Let assumptions (<ref>) of oddness and normalization hold for the soft spring nonlinearity f ∈ C^3. Then the following assertions hold for large enough k ≥ k_0. There exist Pyragas regions 𝒫 = 𝒫_k^+ ∪ 𝒫_k^- composed of two disjoint open sets 𝒫_k^± ≠ ∅. Each region 𝒫_k^ι, ι = ±, is bounded by the horizontal zero line b = b_k := -2/λ_k = (-1)^k · 2/ω_k and three other analytic curves γ_k^0 and γ_{k,±}^ι, all mutually transverse. The zero line (<ref>) indicates a zero eigenvalue μ of the characteristic equation (<ref>) of (<ref>) at x ≡ 0. The other curves indicate additional purely imaginary eigenvalues μ. Define ε := 1/ω_k. Then an approximation of the Pyragas regions (ϑ, b) ∈ 𝒫_k^ι, ι = ±, up to error terms of order ε^3, is given by two parallelograms. One exact horizontal boundary is b = b_k := (-1)^k 2ε; see (<ref>). The other horizontal boundary γ_k^0 is approximated by b = (-1)^k 2ε + b_k^ι ε^2 + … . The sides γ_{k,±}^ι are given, up to order ε^3, by the parallel slanted lines through the four points at b = b_k, ϑ_{k,±}^ι = 1 - (π/2 - ιq) ε ± Θ_{k+2} ε^2 + … , with slopes σ_k^ι. The offsets q, Θ_{k+2}, b_k^ι and the slopes σ_k^ι of the Pyragas parallelograms are given by q = arccos(2/π) = 0.88… , Θ_{k+2} = π (√((π/2)^2 - 1) - 2q) = -1.73… , b_k^ι = (π + 2Φ_k^ι) cos Φ_k^ι = 1.49… + (-1)^k ι · 1.02… , σ_k^ι = 2 (-1)^{k+1} ι √((π/2)^2 - 1) = (-1)^{k+1} ι · 2.42… Here we have used the abbreviation Φ_k^ι = (-1)^k ι arcsin q . In particular the areas |𝒫_k^±| of the Pyragas regions are of very small order ε^4. The relative areas are approximately reciprocal: lim_{k→∞} |𝒫_k^+| / |𝒫_k^-| = lim_{k→∞} b_k^+ / b_k^- = 5.37… for even k, and 0.19… for odd k. See <cit.> for full details and proofs.

The above stabilization result for rapidly oscillating periodic solutions requires two additional delays in the control term of (<ref>): the half-period delay p_k/2, and a joint offset ϑ near 1, in addition to the normalized delay 1 of the reference system.
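The numerical values quoted in theorem <ref> follow from the closed forms above; a quick arithmetic check (our sketch, pure evaluation of the stated formulas):

import numpy as np
q = np.arccos(2/np.pi)                            # offset q = 0.8807...
Theta = np.pi*(np.sqrt((np.pi/2)**2 - 1) - 2*q)   # -1.7288...
Phi = np.arcsin(q)                                # |Phi_k^iota|
b_plus = (np.pi + 2*Phi)*np.cos(Phi)              # 2.507... = 1.49... + 1.02...
b_minus = (np.pi - 2*Phi)*np.cos(Phi)             # 0.467... = 1.49... - 1.02...
sigma = 2*np.sqrt((np.pi/2)**2 - 1)               # slope modulus 2.423...
print(q, Theta, b_plus, b_minus, sigma)
print(b_plus/b_minus, b_minus/b_plus)             # area ratios 5.37... and 0.19...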
In the present paper we achieve the same goal with the single delay p_k/2 in the control term, i.e. with vanishing delay offset: ϑ = 0 . From now on, and for the rest of the paper, we therefore replace (<ref>) by ẋ(t) = λ f(x(t-1)) + b^{-1} (x(t) + x(t-p/2)) , with nonlinearities f ∈ C^3 which satisfy assumptions (<ref>). Our main result identifies unique nonempty Pyragas intervals 𝒫_k = (b̲_k, b̄_k) of control parameters b. The k-dimensionally unstable, rapidly oscillating periodic solutions of constant minimal period p_k = 4/(2k+1) are stabilized near Hopf bifurcation at λ = λ_k, x = 0, for b ∈ 𝒫_k and all sufficiently large k ∈ ℕ. Via ε := 1/ω_k = p_k/2π, we also provide ε-expansions, alias k-expansions, for the Pyragas boundaries b̲_k and b̄_k. Taylor expansions with respect to ε again amount to rapid oscillation expansions at k = ∞.

Consider the system (<ref>) of delayed feedback control for the scalar pure delay equation (<ref>). Let assumptions (<ref>) of oddness and normalization hold for the soft spring nonlinearity f ∈ C^3. Then the following assertions hold for large enough k ≥ k_0, i.e. for small enough 0 < ε := 1/ω_k = ((k+1/2)π)^{-1} ≤ ε_0. The only Pyragas region of nonzero control amplitudes b is the open interval 𝒫_k := {b̲_k < b < b̄_k} . Up to error terms of order ε^4, the lower and upper boundaries of the Pyragas interval satisfy b̲_k = -(1/2)π^2 ε^2 - (3/4)π^3 ε^3 + … , b̄_k = -(1/2)π^2 ε^2 + (1/4)π^3 ε^3 + …

Although our proofs and expansions only address Hopf bifurcations at sufficiently large unstable dimensions k, and sufficiently rapid oscillation frequencies ω_k, numerical evidence suggests a single Pyragas interval 𝒫_k, for any k ≥ 1. We do not pursue these cases here, beyond the evidence provided in figs. <ref> and <ref>.

The remaining sections disentangle the elements of the proof of theorem <ref>. We give a brief outline here. For a summary of sections <ref> – <ref>, on a precise technical level, we refer to the proof of theorem <ref> in the concluding section <ref>. Section <ref>, and most of the remaining sections, address the characteristic equation (<ref>), at ϑ = 0, for the linearization at the original Hopf bifurcation points λ = λ_k, x ≡ 0. Elementary as this task may appear, the rapidly oscillatory terms which appear in the limit ε = 1/ω_k ↘ 0 cause substantial and worthwhile difficulties.

In section <ref> we first recall some elementary results from <cit.> which address the crossing direction of an additional simple eigenvalue 0 induced by the control term. We also introduce a 2-scale lift, which artificially represents the large, rapidly oscillatory imaginary parts of b-induced Hopf eigenvalues μ = iω̃ by, both, ω̃ itself and a scaled slow frequency Ω̃ = εω̃ . Note how Ω̃ = 1, ω̃ = ω_k correspond to the reference Hopf bifurcation at λ = λ_k. We observe that Ω̃ ≠ 2m: even integer resonances cannot occur. We introduce a new local scaled slow frequency Ω := Ω̃ - Ω_m , Ω_m := 2m+1 , near each odd integer resonance Ω_m. Below, in fact, we will be able to focus on -1 < Ω ≤ 0. Since ω ≡ ω̃ (mod 2π) rotates rapidly through S^1, for small ε, we treat the two frequencies Ω, ω as independent variables, formally. They remain related by the hashing relation Ω = ε (ω + (π/2)(1 - (-1)^k - (-1)^m) - 2π j) , -π/2 ≤ ω < 3π/2 , j ∈ ℕ, first discussed in lemma <ref>. We will be able to restrict attention to the case of odd k, in this setting. The above hashing trick, first used in <cit.>, will be of central importance in our analysis. In the limit ε ↘ 0, alias k ↗ ∞, the hatching by the hashing lines (<ref>) fills the (Ω, ω)-cylinder, densely.
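The density is elementary to quantify: at fixed ω, consecutive integers j shift Ω by 2πε, so the horizontal spacing of the hashing lines shrinks linearly in ε. A minimal sketch (our illustration; odd k, and m = 1 chosen arbitrarily):

import numpy as np
k, m = 101, 1                                 # both odd, for definiteness
eps = 1.0/((k + 0.5)*np.pi)
omega = 0.25                                  # any fixed omega in [-pi/2, 3*pi/2)
Omega = [eps*(omega + 0.5*np.pi*(1 - (-1)**k - (-1)**m) - 2*np.pi*j)
         for j in range(1, 6)]
print(np.diff(Omega))                         # constant spacing -2*pi*eps, i.e. O(eps)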
Indeed, the hashing lines define steeply slanted (non-military) “barber pole” stripes of horizontal distance of order ε around the cylinder. The hashing trick (<ref>) allows us to consider Ω and ω as independent cylinder variables, temporarily. This eliminates ε > 0 from the characteristic equation, altogether, as follows. For the control amplitude b we proceed with the same scaling B := (1/2) b ε^{-1} as in <cit.>. Inserting the scalings (<ref>) – (<ref>) into the characteristic equation (<ref>) for μ = iω̃ at ϑ = 0, we thus arrive at the 2-scale characteristic equation 0 = -i e^{iω} + Ω_m + Ω + (-1)^m B^{-1} cos(π/2 Ω̃) e^{i(π/2)Ω} . In lemmata <ref>-<ref> we solve the ε-independent (!) complex characteristic equation (<ref>) for the real variables ω = ω^±(Ω) , B = B^±(Ω) . See figs. <ref>, <ref> below for illustration.

In section <ref> we observe how the imaginary parts ω̃ of unstable eigenvalues μ = μ_R + iω̃ are trapped in certain strips indexed by m, j. Instability in such a strip can be induced by Hopf bifurcation at control parameters B = B_m,j^-, and be reduced again at control parameters B_m,j^+. This involves an analysis of the crossing directions of μ, transversely to the imaginary axis μ_R = 0, as B increases through B_m,j^±. See fig. <ref> and theorem <ref>. In corollary <ref> we conclude the absence of any region of Pyragas stabilization for B > 0. Corollary <ref> summarizes the results of section <ref>: we reduce the proof of theorem <ref> to the three inequalities (<ref>) – (<ref>) among the Hopf parameter values B_m,j^±.

In section <ref>, we insert the solution ω = ω^±(Ω) from (<ref>) into the hashing (<ref>). Inverting the resulting maps Ω ↦ ε = ε(Ω), uniformly for bounded m ≤ m_0, j ≤ j_0, we obtain ε-expansions Ω = Ω^±_m,j(ε) , ω = ω^±_m,j(ε) , B = B^±_m,j(ε) for the frequencies ω̃ ≡ ω (mod 2π) and the control amplitudes b = 2Bε of the resulting control-induced Hopf bifurcations. In particular the crossing directions of the induced imaginary Hopf pairs with respect to b sum up such that b̲ = 2ε B_0,1^+(ε) , b̄ = 2ε B_1,1^-(ε) provide the boundaries of the Pyragas region 𝒫 claimed in theorem <ref>. We illustrate the relative location of B^±_m,j, in view of the crucial inequalities required in corollary <ref>, at the end of section <ref>; see also fig. <ref>. It remains to show, however, that the candidate interval b ∈ (b̲, b̄) does not suffer any destabilization, due to any other Hopf points B_m,j^±. This turns out to be equivalent to the estimates B_m,j_m+1^+ < B_0,1^+ and B_m,j_m^- > B_1,1^- at j_m := [(m+1)/2]. See (<ref>), (<ref>) and corollary <ref> again. For bounded m ≤ m_0, these estimates are suggested by the explicit expansions (<ref>).

In section <ref> we begin to settle the delicate case of large m, Ω_m by expansions with respect to δ := Ω_m^{-1} = 1/(2m+1) . Here our second small parameter δ > 0 expands the odd integer resonance regions around Ω̃ = Ω_m = 2m+1, for large m, in much the same way as our first small parameter ε expanded the discrete parameter k, for large k ≥ k_0, which enumerated the original Hopf bifurcations of more and more rapidly oscillating periodic solutions with higher and higher unstable dimension. This time, we solve the characteristic equation (<ref>) to obtain expansions Ω = Ω^±(δ, ω) , B = B^±(δ, ω) with respect to δ, uniformly in |ω| ≤ π/2. Here Ω^-, B^- refer to the case j = j_m and Ω^+, B^+ refer to j = j_m+1.
In section <ref>, the hashing relation (<ref>) then provides a δ-expansion for ε^± = ε^±(δ, ω) . Inserting this into the already established expansions (<ref>) for b̲, b̄, and comparing the results, for small δ, we obtain B^+(δ, ω) < B_0,1^+(ε^+(δ, ω)) and B^-(δ, ω) > B_1,1^-(ε^-(δ, ω)) as claimed in (<ref>). Notably, nontrivial differences only appear at order δ^3, and after additional linearization with respect to ω, at ω = ±π/2.

The proof of theorem <ref> only involves some discussion of a characteristic equation with two exponential terms of different scales. Nevertheless, the elementary ingredients to the proof turn out to be surprisingly involved. Therefore we summarize the various elements of the proof, as scattered across sections <ref> – <ref>, in our final section <ref>.

Acknowledgments. For many helpful comments and suggestions, as well as most of the figures, we are much indebted to Alejandro López Nieto. Delightful discussions were provided by several participants of the conference in honor of Jürgen Scheurle, and in particular by P.S. Krishnaprasad. We are particularly grateful for the lucid remarks of our referees, which helped us improve the somewhat messy presentation. Ulrike Geiger typeset the original manuscript, with expertise and diligence. The authors have been supported by the CRC 910 “Control of Self-Organizing Nonlinear Systems: Theoretical Methods and Concepts of Application” of the Deutsche Forschungsgemeinschaft.

§ THE 2-SCALE CHARACTERISTIC EQUATION

The characteristic equation (<ref>) of the delay equation (<ref>) with vanishing time shift ϑ = 0 reads μ = -(-1)^k ε^{-1} e^{-μ} + b^{-1} (1 + e^{-πεμ}) , at Hopf bifurcation parameter λ = λ_k = (-1)^{k+1} ω_k, minimal period p_k = 2π/ω_k, and with the abbreviation ε = ω_k^{-1} = 1/((k+1/2)π) for k ∈ ℕ. We decompose the eigenvalue μ = μ_R + iω̃ into real and imaginary parts and define the auxiliary slow frequency Ω̃ := εω̃; see (<ref>). For the convenient choice of ω :≡ ω̃ (mod 2π) , -π/2 ≤ ω < 3π/2 , we obtain the crucially important 2-scale characteristic equation 0 = χ(ε, δ, ω, Ω, B, μ_R) := -εμ_R + iΩ̃ - (-1)^k e^{-μ_R + iω̃} - (i/B) sin(π/2 Ω) e^{-πεμ_R + i(π/2)Ω} + (1/(2B))(1 - e^{-πεμ_R}) , by some elementary arithmetic and with the abbreviations Ω̃ := εω̃ ; B := (1/2) b ε^{-1} ; δ := Ω_m^{-1} = 1/(2m+1) ; Ω := Ω̃ - Ω_m . In particular we have utilized the complex conjugate of (<ref>). Evidently, the solutions of (<ref>) for even parity of k are trivially obtained from the solutions for odd parity if we replace ω by ω + π (mod 2π). For later use we note the relation sin(π/2 Ω) = -(-1)^m cos(π/2 Ω̃) ; see also (<ref>).

For interpretation we recall that Ω̃ = 1 indicates ω̃ = ω_k, i.e. a 1:1 resonance ω̃ = ω_k of the imaginary part iω̃ of μ, under feedback control, with the original Hopf eigenvalue μ = iω_k at parameter λ = λ_k. Similarly, Ω̃ = Ω_m = 2m+1 indicates an odd integer (2m+1):1 resonance. The parameters ε, δ make k, m look continuous, respectively, and replace them eventually. For complex nonreal eigenvalues μ, the imaginary part ω̃ = Ω̃/ε can be taken positive, without loss, and we may assume -1 < Ω ≤ 0 , for m = 0 , -2 < Ω ≤ 0 , for m ∈ ℕ . We also note that χ is real analytic in all variables, for B ≠ 0. Since μ_R = 0, for purely imaginary eigenvalues, the characteristic function χ of (<ref>) simplifies and becomes χ_0(δ, ω, Ω, B) := -i χ(ε, δ, ω, Ω, B, μ_R = 0) = Ω̃ + (-1)^k i e^{iω} - (1/B) sin(π/2 Ω) e^{i(π/2)Ω} , in the Hopf case. Note how ε has disappeared from the characteristic equation (<ref>), in (<ref>), at the price of a hidden hashing relation between ω and Ω; see lemma <ref> below.
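The relation (<ref>) is the elementary shift identity sin(π/2 (Ω̃ - (2m+1))) = -(-1)^m cos(π/2 Ω̃); a brute-force check (our sketch):

import numpy as np
for m in range(5):
    for Omega in np.linspace(-1.9, 0.0, 9):
        Om_t = Omega + (2*m + 1)
        err = np.sin(0.5*np.pi*Omega) + (-1)**m*np.cos(0.5*np.pi*Om_t)
        assert abs(err) < 1e-12
print("shift identity verified")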
In the present section we collect some elementary facts about the 2-scale characteristic equation (<ref>) – (<ref>). Real eigenvalues μ = μ_R, i.e. the case ω̃ = Ω̃ = 0, are addressed in lemma <ref>. As a corollary we eventually obtain how B has to be negative in any Pyragas region; see corollaries <ref> and <ref>. With a brief interlude on hashing in lemma <ref>, we embark on our discussion of purely imaginary eigenvalues μ_R = 0. In lemma <ref> we show how to eliminate any two of the three variables ω, Ω, B from the resulting ε-independent 2-scale characteristic equation 0 = χ_0(δ, ω, Ω, B) . In particular we identify a quadratic loop Q(δ, Ω, B) = 0 , by elimination of ω, such that purely imaginary eigenvalues μ_R = 0 can occur only if (<ref>) is satisfied. In section <ref> we will observe how positive real parts, μ_R > 0, can only occur inside the loop, and negative real parts, i.e. linear stability as required in Pyragas regions, are confined to the exterior. Via hashing, this leads to the definition of crucial pairs of Hopf parameter values B_m,j^- < B_m,j^+ < 0 such that the loss of stability caused at B_m,j^-, by a pair of purely imaginary eigenvalues, is recovered when the control parameter B < 0 increases further to pass the matching Hopf value B_m,j^+.

The characteristic equation (<ref>) possesses a real zero eigenvalue, μ = μ_R = 0, if and only if B = B_0 = (-1)^k . The zero eigenvalue μ is algebraically simple, and its continuation μ = μ(B) satisfies sign Re μ'(B_0) = (-1)^k = B_0 , i.e. μ(B) increases towards larger |B|.

For real eigenvalues μ = μ_R, ω = Ω̃ = 0, and Ω = -1, the characteristic equation (<ref>) reads εμ_R = -(-1)^k e^{-μ_R} + (1/(2B))(1 + e^{-πεμ_R}) . Inserting μ_R = 0 proves claim (<ref>). Partial differentiation with respect to μ_R shows simplicity of μ_R = 0 at B = B_0 = (-1)^k. Implicit differentiation with respect to B at B = B_0, μ_R = 0 shows εμ'_R = (-1)^k μ'_R + (1/2)(-1)^k (-πε) μ'_R - 1 , i.e. μ'_R (1 - (π/2 + B_0)ε) = B_0 . For k ∈ ℕ, ε = ((k+1/2)π)^{-1}, the coefficient of μ'_R is positive, and the lemma is proved.

Of course we may solve the real characteristic equation (<ref>) for B = B(μ_R), explicitly, to obtain B = B(μ_R) = (1/2) · (1 + exp(-πεμ_R)) / (εμ_R + (-1)^k exp(-μ_R)) . For odd k, the vanishing denominator indicates the unique positive real eigenvalue μ = μ_R of the original problem (<ref>) without control. For even k, the denominator is positive for all real μ_R, because ε = ((k+1/2)π)^{-1} < e for k ∈ ℕ_0. Moreover, πε ≤ 2/3 for all k ∈ ℕ implies lim B(μ_R) = 0 for μ_R → ±∞. Let 1 < B_max := max B(μ_R) < ∞ denote the maximum over μ_R ∈ ℝ, for even k. Indeed B_max > 1 = B_0 = B(0), by lemma <ref>. This allows us to determine the even/odd parity of the total algebraic count E(B) of all eigenvalues μ, real or complex, with strictly positive real part. We write E(B) ≡ 0 (mod 2), or E(B) ≡ even, for even parity of E(B), and analogously E(B) ≡ 1 (mod 2) ≡ odd, for odd parity.

Let k ∈ ℕ be odd. Then the unstable parities (mod 2) are given by E(B) ≡ odd for -∞ < B < -1 , even for -1 ≤ B < 0 , odd for 0 ≤ B < +∞ . For even k ∈ ℕ, the unstable parities (mod 2) are given by E(B) ≡ even for -∞ < B < 0 , odd for 0 < B < 1 , even for 1 ≤ B < +∞ . Pyragas regions E(B) = 0 require even parity 0 (mod 2), of course. For even k, they also require B > B_max > 1 , in case B > 0.

Since nonreal complex eigenvalues occur in complex conjugate pairs, the real eigenvalues alone determine the parity. For odd k and at B = ±∞, i.e. at vanishing control, instability by a simple positive real eigenvalue follows from the vanishing denominator in (<ref>). Lemma <ref> then implies claim (<ref>).
For even k, real eigenvalues are absent if B <0 or B>B_max; see (<ref>), (<ref>). At B=B_max>1, a pair of complex eigenvalues merges and forms a positive double real eigenvalue. Decreasing B further, one of these two positive eigenvalues becomes negative, at B=B_0=1, and the other real eigenvalue remains positive and simple; see lemma <ref>. This proves claims (<ref>), (<ref>), and the corollary.We study the case of purely imaginary nonzero eigenvalues μ = iω̃≠ 0, μ_R=0 next. The 2-scale characteristic equation (<ref>) then simplifies to (<ref>), (<ref>), i.e. χ_0 (δ, ω,Ω, B) := -i χ(ε, δ, ω,Ω, B, μ_R=0)=0 .Strictly speaking, however, the frequency ω̃ and the slow frequency Ω̃ are still related by the linear hashing relation Ω̃ = εω̃; see (<ref>) – (<ref>). We clarify this relation next. Consider ω̃>0, Ω = Ω̃-Ω_m with Ω_m=2m+1, and -1<Ω≤0 for m=0, but -2<Ω≤0 for m∈ℕ. Then the hashing relation Ω̃ = εω̃, with ε= ω_k^-1, is equivalent to Ω = ε(ω +π2(1-(-1)^k-(-1)^m)-2π j) .Here the representative ω≡ω̃ (mod 2π) is chosen such that -π2≤ω <32π ,j∈ℕ is chosen such thatj= { 0 for k,mboth even ,1 otherwise , .holds at Ω =0, and ω̃ = ω+2π(km+[(k+1)/2]+[(m+1)/2]-j) ≡ω2π . The hashing relations Ω̃=εω̃ and (<ref>) are both affine linear in ω̃, ω with slope ε. To show their equivalence, via Ω̃=Ω +Ω_m in (<ref>) and definition (<ref>) of ω, we only have to check (<ref>) at εω̃=Ω̃ = Ω_m and Ω =0. Indeed we obtain ω: = ω̃-2π(km+[(k+1)/2]+[(m+1)/2]-j)== ε^-1·Ω_m-π(2km+2[(k+1)/2]+2[(m+1)/2])+2π j== (k+12)π·(2m+1) -π(2km+k+m)+2π j-=(k+12)π·(2m+1) -π((2[(k+1)/2]-k)+(2[(m+1)/2]-m))== π2+2π j-π(12(1-(-1)^k) +12(1-(-1)^m))== 2π j -π2(1-(-1)^k-(-1)^m) ,as required by (<ref>). The choice of j in (<ref>) ensures the ranges (<ref>) for Ω < 0 near Ω=0. This proves the lemma.The Hopf points B∈ℝ, where purely imaginary eigenvalues μ =iω̃>0 arise, are therefore defined by the system of the 2-scale characteristic equation (<ref>) and the hashing (<ref>), in the precise sense of lemma <ref>. For the moment we “forget” hashing and address the complex 2-scale equation (<ref>) first, in its own right. See also the bottom rows of figs. <ref> and <ref>.The 2-scale characteristic equation χ_0(δ, ω, Ω, B)=0 for purely imaginary eigenvalues, i.e. equation (<ref>), (<ref>), is equivalent to the system 0 =H(δ, Ω,ω) :=Ω̃sin (π2Ω)-(-1)^k cos(ω-π2Ω)==(-1)^m+1(Ω̃cos (π2Ω̃) -(-1)^ksin(ω-π2Ω̃)) B = (-1)^k sin^2(π2Ω)/cosω = (-1)^k cos^2(π2Ω̃)/cosωwith parameter δ =1/(2m+1), m∈ℕ_0. As always, the case of even k results from odd k by addition of π to ω2π. Here we have also used the previous notation Ω̃ = Ω +Ω_m = Ω+(2m+1)=Ω +1/δ ;see (<ref>). Eliminating ω∈ (-π2, 3π2) we obtain the quadratic relation 0=Q(δ,Ω,B):= (Ω̃^2-1)B^2 +Ω̃sin (πΩ̃)B +cos^2(π2Ω̃) .The discriminant D of (<ref>) is given by D= cos^2(π2Ω̃) · (1-(Ω̃cos (π2Ω̃))^2) ,and the explicit solutions B=B^± of (<ref>) are B^± = (Ω̃^2-1)^-1 (-12Ω̃sin (πΩ̃)±√(D)) .For m∈ℕ and Ω_m=2m+1, let Ω̃_m∈ (2m,Ω_m) and Ω̃_m^max∈ (Ω_m, 2m+2) denote the unique solutions Ω̃ of D=0, i.e. of Ω̃· (-1)^mcos (π2Ω̃)=1 ,in the respective intervals. In terms of Ω = Ω̃-Ω_m and 0<δ = Ω_m^-1 this defines unique solution branches (Ω, B^±) of (<ref>) with B^+<0 <B^- , for 0=:Ω̃_0<Ω̃<Ω_0=1 ,m=0; B^±<0, forΩ̃_m<Ω̃<Ω_m ,m≥ 1 ; 0 <B^± , forΩ_m<Ω̃<Ω̃_m^max ,m≥ 1 .For vanishing real part μ_R=0, the 2-scale characteristic equation (<ref>) simplifies to 0=i χ_0(δ,ω,Ω,B) = iΩ̃-(-1)^k e^iω -iBsin(π2Ω) e^iπ2Ω ;see (<ref>), (<ref>). To prove (<ref>), we multiply by exp(-iπ2Ω) and take real parts. 
To prove (<ref>) we take real parts directly. To eliminate ω, as in (<ref>), we solve χ_0=0 in (<ref>) for the only term exp (iω) which contains ω, and calculate the square of the absolute values of both sides. The remaining claims (<ref>) – (<ref>) concerning the quadratic relation (<ref>) are plain high school calculus. This proves the lemma.In the following analysis we will skip the third case m≥ 1, B^±>0, Ω_m<Ω̃<Ω̃_m^max of (<ref>) which is completely analogous to the second case Ω̃_m<Ω̃<Ω_m, B^± <0. Indeed that third case will turn out to be irrelevant anyway, in section <ref>; see corollary <ref>.The 2-scale relation (<ref>) between slow and fast frequencies Ω and ω can be solved for Ω = Ω(δ,ω), implicitly, and for the inverse function ω = ω(δ,Ω), explicitly: ω = ω^±:≡π2Ω±arccos (-Ω̃sin (π2Ω)) 2π ,for k odd. Even k require addition of π2π. For m=0, where δ=1 and 0<Ω̃ = Ω +1<1, both functions ω^± have strictly positive and bounded derivatives with respect to Ω̃ or Ω, equivalently, in the interior domain. For odd k, their boundary values and ranges are, accordingly, ω^+ ∈ [0,12π] , with ω^+=0, 12π atΩ̃ =0,1 ; ω^- ∈ [π,32π] , with ω^-=π, 32π atΩ̃ =0,1 .The ranges and boundary values are interchanged for even k. See the top row of fig. <ref>. Let m≥ 1, where 0<δ = 1/(2m+1)≤ 1/3, and consider Ω̃_m ≤Ω̃≤ 2m+1 = Ω_m; see (<ref>). Then we observe ranges ω∈ [-π2,π2] , for odd k≥ 1 , ω∈ [π2,32π] , for even k≥ 2 .Moreover, the implicit inverse function Ω = Ω(δ,ω) is strictly piecewise monotone in ω with unique local and global minimum -1<Ω̃_m -Ω_m =: Ω_m ≤Ω≤ 0and boundary values Ω =0 at ω≡±π2. The minimal value Ω_m occurs at ω = ω = π2Ω_m fork odd , ω = ω = π +π2Ω_m fork even .The two explicit branches ω = ω^±(δ,Ω)possess strictly nonzero, but only locally bounded, derivatives with respect to Ω̃ or Ω, equivalently, in the domainΩ_m < Ω≤ 0.They merge at Ω=Ω_m, where the discriminant of the quadratic B-relation (<ref>) vanishes. See the two upper rows of fig. <ref>.As always, we may consider odd k, without loss. The explicit solutions ω = ω^± of (<ref>) follow directly from the first line of the 2-scale equation (<ref>). Recall that nonnegative discriminants D in (<ref>), (<ref>) require |Ω̃sin (π2Ω)|=|Ω̃cos (π2Ω̃)|≤1; see also (<ref>). Hence -1≤Ω≤0 implies -1≤Ω̃sin (π2Ω)≤ 0.In particular arccos(- Ω̃sin (π2Ω) )∈ [0,π2], for all m ∈ℕ_0. This proves claim (<ref>) and the range claims (<ref>), (<ref>). Moreover the functions ω=ω^±(Ω) are differentiable with bounded derivatives, except for the vertical tangent at the discriminant loci Ω=Ω_m, m ≥1.For the inverse function Ω=Ω(ω), we study the monotonicityclaims, for all m,and the minimizer claims, for m≥1, next. Here we suppress δ, for a while. Recall H(Ω,ω)=0, from (<ref>). With the abbreviations S :=sin(π2Ω) = -(-1)^mcos (π2Ω̃) , C :=cos(π2Ω) = (-1)^msin (π2Ω̃) , s :=sin(ω-π2Ω), c:= cos (ω-π2Ω) ,H and its partial derivatives, for odd k, are H= Ω̃S+c ,H_Ω=S+π2Ω̃C+π2s ,H_ω=-s.Elementaryarguments show that H=0 is a regular value of H. Let Ω̇ denote the derivative of Ω(ω). By implicit differentiation of H(Ω, ω)=0 with respect to ω, we obtain-H_ΩΩ̇ =H_ω .Suppose Ω̇ =0 at ω=ω. Then 0= H_ω =-s=-sin (ω-π2Ω)at Ω = Ω(ω) implies ω =ω≡π2Ωπ. Insertion into H(Ω,ω)=0 implies ± 1= -c= Ω̃S =-(-1)^mΩ̃cos(π2Ω̃) ,using (<ref>). Since S<0<Ω̃, we obtain c=+1. This implies m≥1 and Ω̃ = Ω̃_mas defined in (<ref>). For m=0, in fact, 0≤Ω̃≤ 1 prevents any solution of (<ref>). This proves the strong monotonicity claims for m=0, and completes the proof of lemma <ref>. 
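For odd k, the branches (<ref>) and (<ref>) can be cross-checked against both χ_0 = 0 and the quadratic (<ref>); a short numerical verification (our sketch; m and the sample values of Ω are arbitrary):

import numpy as np
m, delta = 1, 1.0/3
for Omega in [-0.05, -0.1, -0.2]:
    Om_t = 1.0/delta + Omega
    S = np.sin(0.5*np.pi*Omega)
    if abs(Om_t*S) > 1:                       # outside the discriminant region
        continue
    for sign in (+1, -1):
        omega = 0.5*np.pi*Omega + sign*np.arccos(-Om_t*S)
        B = -S**2/np.cos(omega)               # odd k
        chi0 = Om_t - 1j*np.exp(1j*omega) - (S/B)*np.exp(1j*0.5*np.pi*Omega)
        Q = (Om_t**2 - 1)*B**2 + Om_t*np.sin(np.pi*Om_t)*B \
            + np.cos(0.5*np.pi*Om_t)**2
        print(Omega, sign, abs(chi0), Q)      # both vanish to machine precision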
The functions B^±=B^±(δ, Ω) for the control parameter B in (<ref>), (<ref>),(<ref>) have the following properties.For m=0, where δ=1 and 0<Ω̃= Ω+1<1, we have B^±(δ,Ω)=(-1)^k sin^2(π2Ω)/cosω^±(δ, Ω).Here ω^± have been defined in (<ref>). Moreover B^+ increases strictly with respect to Ω, or Ω̃, B^+<0<B^- for 0<Ω̃<1 ;B^+=-1,0 for Ω̃=0,1 ;B^-=1, π/2 for Ω̃=0,1 .See the bottom row of fig. <ref>. Let m≥ 1, where 0<δ <1/(2m+1)≤ 1/3 and Ω̃_m≤Ω̃ < 2m+1=Ω_m; see (<ref>). Then B^-<B^+<0for Ω̃_m<Ω̃< Ω_m ;B^± = -12(Ω̃_m^2-1)^-1Ω_msin(πΩ̃_m) , for Ω̃ =Ω̃_m ;B^±=0 , for Ω̃= Ω_m .In terms of (<ref>), the branches B^± are parametrized over ω, instead of Ω, as follows: B^+= B(δ,ω) , for ω >ω:= π2Ω_m ;B^-= B(δ,ω) , for ω <ω:= π2Ω_m .Moreover B^+ is strictly increasing with respect to ω, or Ω alias Ω̃.For B^- and m≥ 1 we encounter a zero derivative ∂_ω B^- at ω= ω_*, Ω= Ω_* if, and only if, 0=d(Ω,ω):=sin^2(π2Ω)sinω+ π2cosωsin(ω-πΩ) .Critical points, and in particular the minimum, of ω↦ B (and of B^-) occur at certain ω= ω_*, Ω=Ω_* which satisfy πΩ_* < ω_* < 12πΩ_* <0 .In particular ∂_ω B^-<0 for -π2<ω≤πΩ_*. Moreover B>-1.Again we consider odd k, without loss. We suppress the parameter δ and differentiate (<ref>) with respect to ω, implicitly, analogously to the proof of lemma <ref>: Ḃ= - (π2sin(πΩ)cosω·Ω̇+sin^2(π2Ω)sinω)/cos^2ω .We invoke lemma <ref>. Consider m=0, first. Then the sign of the right hand side of (<ref>) is -sign cosω^± = ∓ 1by (<ref>). These signs agree with our definition (<ref>) of B^±, for m=0, because 0<Ω̃<1 implies B^+<B^-, there. More specifically, we have shown B^+<0<B^-. To settle monotonicity of B^+, for all m, we keep considering odd k, without loss. We aim to show Ḃ^+>0 in the interior domain of definition. We rewrite (<ref>) as Ḃ^+= -S (π Ccosω·Ω̇+Ssinω)/cos^2ω ,with C>0>S for -1<Ω <0; see (<ref>) for this notation. We also recall |ω |<π2, cosω >0, Ω̇>0 and s=sin(ω-π2Ω)>0, for B^+ and odd k; see the proof of lemma <ref>. Therefore (<ref>) implies Ḃ^+>0, if sinω≤ 0. It remains to show interior positivity of Ḃ^+ for sinω>0, i.e. for 0<ω <π2.Suppose Ḃ=0. We derive the relation (<ref>) at such a zero, first. We differentiate χ_0=0 in (<ref>), (<ref>) with respect to ω, implicitly, to obtain 0=Ω̇ +e^iω- π2B e^iπΩ Ω̇ .We have used the assumption Ḃ =0 here. We multiply (<ref>) by the complex conjugate coefficient of Ω̇ and take imaginary parts to eliminate the derivativeΩ̇: 0= Im(e^iω(1-π2Be^-iπΩ))== sinω-π2Bsin(ω-πΩ) .Substitution of B from (<ref>) and multiplication by the resulting denominator sin^2(π2Ω) proves claim (<ref>).We can now prove interior positivity Ḃ^+>0 for the remaining case 0<ω<π2. SupposeḂ^+=0, indirectly.We then usetrigonometric addition and the abbreviations of (<ref>) to rewrite (<ref>) as 0=d(Ω,ω)=S^2sinω + π2cosω (-Sc+Cs).To reach a contradiction, we check positivity of each individual term. Our assumption 0<ω<π2 implies sinω>0, cosω>0. Moreover, -1<Ω<0 implies S^2>0, C>0. By subtraction, we also obtain s>0 because 0<ω-π2Ω<π. It only remains to check positivity of -Sc. Indeed H=0 in (<ref>), (<ref>) implies -Sc = Ω̃ S^2 >0, since S^2>0 and Ω̃>0. Hence all terms on the right hand side of (<ref>) are strictly positive. This contradiction establishes Ḃ^+>0 in all cases.To prove claim (<ref>), we first observe that ω <π2Ω_m < π2Ω<0 holds all along the ω^--branch, because -π2<ω<ω = π2Ω_m defines the domain of B^-; see lemma <ref>. To show ω_* > πΩ_* at Ḃ=0, indirectly, suppose -π2<ω_* ≤πΩ_*<0. We then claim d(Ω_*,ω_*)<0. 
Indeed d(Ω_*,ω_*)= S_*^2sinω_*+ π2cosω_*sin(ω_*-πΩ_*)≤≤ S_*^2sinω_* <0 .This contradiction proves (<ref>).It remains to prove B>-1, for m≥1.By continuity of B, and because B^±=0 at Ω̃=Ω_m=2m+1, it is sufficient to show B^-≠-1, indirectly. Suppose B^-=-1.Then S^2=cos(ω), by (<ref>). Therefore (<ref>) implies0=H/S = Ω̃+cos(ω-π2Ω)/S= = Ω̃+(C cosω+S sinω)/S= = Ω̃+CS + sinω≥Ω̃-2>0 ,since Ω̃ > Ω_m-1=2m≥ 2. This proves B>-1, and completes the proof of the lemma. § CONTROL-INDUCED HOPF BIFURCATION In absence of control, i.e. in the limit b=2ε B→±∞, the original delay equation (<ref>) possesses a trivial simple Hopf eigenvalue μ=iω̃= iω_k=iε^-1of the characteristic equation (<ref>), at the original parameter λ = λ_k=-(-1)^k/ε .The (scaled) control parameter B induces further purely imaginary Hopf eigenvalues μ=μ_R+iω̃, μ_R=0, of the 2-scale characteristic equation (<ref>). We fix and suppress δ=Ω_m^-1 in this section and rewrite (<ref>) – (<ref>) as 0= ψ(μ,ε,B): = -ε Bμ-(-1)^k B e^-μ+ 12(1+e^-πεμ)==-ε Bμ_R-iBΩ̃- (-1)^k B e^-μ_R-iω̃+ 12(1+e^-πεμ_R-iπΩ̃) .As before we have abbreviated Ω̃=Ω_m+Ω = δ^-1+Ω here, and μ=μ_R+iω̃ with ω̃≡ω2π, εω̃=Ω̃. We also recall the hashing relation (<ref>), i.e. 0= h(ω̃, Ω,ε)= -Ω̃+εω̃= -Ω+ε(ω+π2(1-(-1)^k-(-1)^m)-2π j) ,for-π2≤ω<32π; see lemma <ref>. In other words, Hopf bifurcation is governed by the three real equations (<ref>), (<ref>) for vanishing (Ψ,h) ∈ℂ×ℝ, in the five not quite independent real variables (μ, Ω, ε,B) ∈ℂ×ℝ^3. In trapping lemma <ref> we observe absence of nontrivial eigenvalues μ = μ_R+iω̃ with imaginary parts ω̃≡±π22π. This traps imaginary parts in eigenvalue strips: an old and efficient idea already present in <cit.>. It establishes the crucial sequences of Hopf bifurcations at scaled control parameters B=B_m,j^± ,where m,j label specific strips of the Hopf eigenvalues μ =iω̃, with Ω_m-1 < Ω̃ = εω̃<Ω_m=2m+1.We also observe how eigenvalues μ cannot appear from, or disappear towards, Re μ=+∞. Proposition <ref> examines eigenvalues at vanishing control b=2Bε=±∞ to establish simplicity of eigenvalues, in each strip. With some estimates for Jacobian determinants involving Ψ and h, in proposition <ref>, we establish the transverse crossing directions of the simple Hopf eigenvalues μ, as B increases through B_m,j^± <0; see the central crossing theorem <ref> of the present section. In fact we observe a gain of stability, i.e.decrease of the unstable dimensions E=E(B) by 2, at B=B_m,j^+<0, and destabilization at B_m,j^-<0. Corollary <ref> concludes that Pyragas stabilization is impossible, for B>0. Corollary <ref> concludes that unstable eigenvalues in the complex (m,j)-strips are present if, and only if, B_m,j^- < B< B_m,j^+ .Corollary <ref> studies the case m=0 of slow frequencies 0<Ω̃ = εω̃<1, as well as the simplest case m=1. It concludes stability of the strip m=0, j=1 for B_0,1^+<B<0, but instability of the strip m=1, j=1 for B_1,1^-<B<0. With the orderings of B_m,j with respect to j, for each fixed m≤ 1, as collected in proposition <ref> we arrive at the conclusion of the present section, in corollary <ref>: the region P:= { B | B_0,1^+< B<B_1,1^-} is a nonempty Pyragas region, provided that B_m,j_m+1^+ <B_0,1^+< B_1,1^-<B_m,j_m^-holds for j_m:= [(m+1)/2]and all m≥1. The delicate ordering (<ref>) will only be established in sections <ref> and <ref> below. For any fixed ε >0, consider strictly complex eigenvalues μ = μ_R +iω̃, i.e. solutions μ∈ℂ∖ℝ of the characteristic equation (<ref>). 
(i) Assume μ_R ≥ 0>B .Then the only eigenvalues μ=μ_R+iω̃ such that 0<ω̃ = Im μ≡π2πare the trivial eigenvalues μ = iω_k̃=i(k̃+12)π at ε= ω_k̃^-1, where k̃∈ℕ_0 has the same parity as k. (ii) Assume ε=ω_k^-1, B≠ 0 .Then the only eigenvalue μ =μ_R+iω̃ such that ω̃ = ω_k=(k+12) πis the algebraically simple eigenvalue μ=iω_k. (iii) Fix ε= ω_k^-1, for some k∈ℕ_0, and fix any constant K >1. Consider any sequence of (scaled) control parameters B_n and nontrivial eigenvalues μ_n=μ_R,n+iω̃_n ≠ iω_k such that 0≤μ_R,n , 1/K≤ω̃_n≤ K , and B_n→ 0 .Then |μ_n| remains bounded. Moreover, for any convergent subsequence μ_n there exists a positive integer m such that εlimμ_n = limε iω̃_n = lim i Ω̃_n = iΩ_m=i(2m+1) ,and lim ω̃_n(mod2π)≡3π2for k+m even, 3 π2for k+modd.To prove claim (i), suppose μ=μ_R+iω̃ with ω̃=ω_k̃:= (k̃+12)π, for some nonnegative integer k̃. We have to conclude μ_R =0 and ε= ω_k̃^-1. Abbreviating εω̃ =: Ω̃, we decompose the characteristic equation (<ref>) into real and imaginary parts at ω̃ = ω_k̃ to obtain -ε Bμ_R+ 12 (1+ e^-πεμ_Rcos(πΩ̃))=0 ; -BΩ̃ +(-1)^k+k̃Be^-μ_R- 12e^-πεμ_Rsin(πΩ̃)=0 .The real part (<ref>) can be solved for Bμ_R as 0≥ Bμ_R= 12ε^-1(1+e^-πεμ_Rcos (πΩ̃))≥ 0 .Indeed, the right inequality follows because we have assumed μ_R ≥ 0 in (<ref>), and the left inequality follows from our assumption B<0. In particular we conclude μ_R =0andcos(πΩ̃)=-1 .Insertion of μ_R=0 and cos (πΩ̃)=-1, sin(πΩ̃)=0 in the imaginary part (<ref>) of the characteristic equation(<ref>) then implies 0> BΩ̃ =(-1)^k+k̃ B e^-μ_R=(-1)^k+k̃B .This proves 1 = Ω̃ = εω_k̃ and k≡k̃ (mod 2), as claimed. To prove claim (ii), we first note Ω̃ = εω̃ = εω_k=1 , exp (-iω̃)=-(-1)^ki ,by assumptions (<ref>), (<ref>). For the imaginary part (<ref>) of the characteristic equation (<ref>) at μ= μ_R+iω̃ this implies 0= -B+Be^-μ_R ,i.e. μ_R=0. To show algebraic simplicity of the resulting trivial eigenvalue μ = iω_k, we differentiate the right hand side of the characteristic equation (<ref>) with respect to μ, there. A vanishing derivative would require 0=-ε B-iB+π2ε .This contradiction proves algebraic simplicity of the trivial Hopf eigenvalue μ = ± iω_k at ε= ω_k^-1, for any B. To prove claim (iii), we rewrite the characteristic equation (<ref>) in the form b_n(εμ_n+(-1)^kexp(-μ_n))=1 + exp(-πεμ_n).To show |μ_n| remains bounded, indirectly, we first suppose |μ_n|→∞ ,for some subsequence. In (<ref>) we have assumed bounded imaginary parts ω̃_n = Im μ_n. Therefore (<ref>), (<ref>), and (<ref>) imply μ_R,n = Re μ_n → + ∞ .From (<ref>) we then obtain, more precisely, ε limb_nμ_n = 1 .Taking imaginary parts of (<ref>) and passing to a convergent subsequence of ω̃_n, we also obtain lim ω̃_n= limμ_nε b_nμ_n Im (-(-1)^k b_ne^-μ_n+e^-πεμ_n)== limμ_n Im (-(-1)^k b_n e^-μ_n+e^-πεμ_n)=0 .This contradicts our lower bound (<ref>) on ω̃_n. Therefore the sequence |μ_n| remains uniformly bounded.Next we divide the original unscaled characteristic equation (<ref>) for μ = μ_n by εμ_n-i≠0, at fixed ε= ω_k^-1, to obtain b_nεμ_n+(-1)^kexp (-μ_n)/εμ_n-i= ε1+exp (-πεμ_n)/εμ_n-i .Both sides extend to entire functions of μ_n. Since (<ref>) is entire, and |μ_n| remain bounded, b_n → 0 then implies ε lim 1+exp (-πεμ_n)/εμ_n-i=0 .The denominator cancels the simple zero εμ_n = iΩ_0=i of the numerator. The remaining zeros εμ_n=i Ω_m=i (2m+1) of the numerator prove claim (<ref>), and the trapping lemma.We now recall the location of eigenvalues μ of the characteristic equation (<ref>) in the limit B→±∞ of vanishing control. 
This is well-known material; see e.g. <cit.>. We include a short proof for the convenience of the reader.Let ε > 0. Consider eigenvalues μ∈ℂ at vanishing control B =±∞, i.e. solutions of 0= εμ +(-1)^ke^-μ .Then the following claims (i) – (iv) hold true. (i) If 0< Im μ≡π2π, then Re μ_R=0 , μ=iω_k̃ =i(k̃+1/2)πandε= ω_k̃^-1 ,where k̃∈ℕ_0 has the same parity as k. (ii) At ε=1/ω_k the eigenvalue μ=iω_k is algebraically simple. The local continuation μ=μ(ε) satisfies d/dεμ(0) = -1/ε(1+ε^2)(1+iε) .(iii) For 1/ω_k<ε < 1/ω_k-2 the nontrivial complex eigenvalues μ∈ℂ∖ℝ with Re μ≥ 0, Im μ>0 are given by [k/2] algebraically simple eigenvalues μ_0,j, j=1, …, [k/2], one in each strip 0< Re μ_0,j ; ω_k-2jπ < Im μ_0,j < ω_k -2jπ +π2 .(iv) For even k, there do not exist real eigenvalues μ≥ 0.If k is odd, the only real eigenvalue μ≥ 0 is the algebraically simple eigenvalue defined by the unique positive solution of εμ = e^-μ . Let μ = μ_R+iω̃. To prove claim (i), we assume ω̃= ω_k̃. We take real parts of the complex characteristic equation (<ref>) to see that cosω̃=0 implies μ_R=0. Taking imaginary parts, 0= εω_k̃-(-1)^k e^-μ_Rsinω_k̃== εω_k̃-(-1)^k+k̃shows the remaining claims of (i). Claim (ii) follows by implicit differentiation of (<ref>) with respect to ε: 0=μ+(ε-(-1)^ke^-μ)μ'where μ' abbreviates the implicit derivative with respect to ε. Inserting ε = ω_k^-1 and μ=iω_k proves claim (ii). Claim (iii) follows by global continuation of the simple Hopf eigenvalues μ=μ(ε) with respect to decreasing ε, alias increasing |λ |. We may proceed by induction on k. For large ε, i.e. for small λ alias small rescaled delay, already <cit.> observed Re μ→-∞ for all complex eigenvalues. At ε= ω_k̃^-1, with k̃ of the same parity as k and j:= [k̃/2]+1, a simple Hopf eigenvalue μ= μ_0,j:= iω_k̃ appears on the imaginary axis. By property (ii) it progresses, locally, for decreasing ε, into the strip (<ref>). By property (i) that simple eigenvalue μ_0,j(ε) can never leave that trapping strip again, because Re μ_0,j>0 remains bounded above for ε>0 bounded below. By standard complex analysis, therefore, each μ_0,j(ε) remains simple and continues globally in its strip, for 0<ε<ω_k̃^-1. The last value k̃ encountered for ε> ω_k^-1 is k̃=k-2. This proves claim (iii). Claim (iv) on real eigenvalues μ has been addressed in lemma <ref> already. This proves the proposition.The following proposition collects the partial derivatives of the characteristic function ψ (μ, ε,B):= -ε B μ-(-1)^k Be^-μ +12 (1+e^-πεμ)introduced in (<ref>).The partial derivatives of ψ = ψ(μ,ε,B) satisfy ψ_μ = -ε B +(-1)^k Be^-μ- π2ε e^-πεμ ; ψ_ε = -μ(B +π2 e^-πεμ) ; Bψ_B= -ε B μ-(-1)^kBe^-μ= ψ-12(1+e^-πεμ) .At imaginary eigenvalues μ= iω̃, where ψ (μ,ε;B)=0, and with the abbreviation Ω̃:= εω̃, we also obtain the Jacobian determinant -B ψ_(ε, B) = ω̃· (B+π2)·cos^2(π2Ω̃) . The calculations of (<ref>) – (<ref>) are trivial. For ψ=0 in (<ref>) we also obtain ψ_B= -12B(1+e^-πεμ) .Since ψ∈ℂ≅ℝ^2, the Jacobian ψ_(ε, B) can be written abstractly as ψ_(ε, B) = [ Re ψ_ε Re ψ_B; Im ψ_ε Im ψ_B ] = Im (ψ_ε·ψ_B) .Insertion of (<ref>), (<ref>), ψ=0 at imaginary eigenvalues μ =iω̃ = iε^-1Ω̃ provides the Jacobian determinant -Bψ_(ε, B) = Im (ψ_ε· (-Bψ_B)) == 12ω̃ Im ((Bi+π2 i e^iπΩ̃) (1+e^-iπΩ̃))== 12ω̃ (B+π2) (1+cos (πΩ̃)) .This proves (<ref>) and the proposition.With these lengthy preparations we can now address transverse crossings at the relevant Hopf eigenvalues, from two viewpoints. 
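Incidentally, the vanishing-control spectrum of the preceding proposition is explicit in terms of the Lambert W function: εμ + (-1)^k e^{-μ} = 0 is equivalent to μ e^{μ} = -(-1)^k/ε, i.e. μ = W_n(-(-1)^k/ε) on the branches n ∈ ℤ. A numerical sketch (ours, using scipy's lambertw; the branch range is chosen generously):

import numpy as np
from scipy.special import lambertw
k = 5                                         # odd, so z = +1/eps > 0
eps = 1.0/((k + 0.5)*np.pi)
z = -(-1)**k/eps
roots = [lambertw(z, n) for n in range(-4, 5)]
unstable = [mu for mu in roots if mu.real > 1e-6]
for mu in unstable:
    print(mu)                                 # one real root and [k/2] = 2
                                              # complex-conjugate pairs: E = k = 5
# the trivial Hopf root i*omega_k itself appears with Re mu = 0 and is filtered out

The imaginary parts of the complex pairs indeed fall into the strips (<ref>), one pair per j.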
Fix ε= ω_k^-1 and consider nontrivial Hopf eigenvalues μ= iω̃, 0<ω̃≠ω_k=(k+12)π, at control parameter B≠ 0. Our viewpoint above was to study, equivalently, ψ(μ,ε, B) =0for ψ defined in (<ref>); see (<ref>). In section <ref> our viewpoint was slightly different; see (<ref>) and lemma <ref>. Hashing with the shifted slow frequency Ω = Ω̃ -Ω_m = εω̃-(2m+1)= ε(ω +π2(1-(-1)^k-(-1)^m)-2π j) ,-π2≤ω< 32π, ω≡ω̃2π, equivalently, we wrote the characteristic equation for the eigenvalue μ=iω̃ as 0= χ_0 (δ, ω, Ω, B)= Ω̃+(-1)^ki e^iω- 1Bsin(π2Ω) e^iπ2Ω ;see (<ref>) – (<ref>). In lemma <ref> we have described the solutions of (<ref>) by functions Ω = Ω(ω) ,for any fixed δ = Ω_m^-1 and nonnegative integers m. That description was completed with B= B^± (Ω) = B^± (ω, Ω(ω)) as B=(-1)^ksin^2(π2Ω)/cosω= (-1)^kcos^2 (π2Ω̃)/cosω ;see (<ref>) and lemmata <ref>, <ref>. We now combine these results, for ε= ω_k^-1, with the hashing (<ref>) to define the Hopf frequencies ω_m,j^±∈ (-π2, 32π) as the intersections of (<ref>) with (<ref>), i.e. ε(ω_m,j^± + π2(1-(-1)^k-(-1)^m)-2π j)= Ω(ω_m,j^±) ,for j=1, …, j_m^max. Here and below we restrict attention to the case Ω̃_m≤Ω̃≤Ω_m, i.e. Ω_m≤Ω≤ 0.Indeed, the opposite case of m≥ 1 and B^± >0 in (<ref>) will turn out irrelevant in corollary <ref> below.We have to comment on the precise meaning of (<ref>), in view of lemma <ref>. Consider the case m=0 first; see fig. <ref>. Since the derivatives of the two branches ω=ω^±(Ω) in (<ref>) are bounded, their intersections with the near-vertical hashing lines ω̃=Ω̃/ε of slope 1/ε are transverse, for 0<ε≤ε_0 small enough.This provides two intersections ω=ω_0,j^±, one pair for each j, as indicated.The cases m≥ 1 of fig. <ref> are slightly more involved.The strictly decreasing lower branch ω = ω^-(Ω),Ω_m < Ω < 0, is characterized by -π2 < ω = ω^-(Ω) < ω = π2Ω_m < 0 .This decreasing branch provides unique, transverse intersections ω = ω_m,j^- with the increasing hashing lines. The strictly increasing upper branch ω = ω^+(Ω),Ω_m < Ω < 0, on the other hand, may exhibit non-transverse, and even multiple, intersections ω∈ω_m,j^+ with the near-vertical hashing lines, for near-minimal Ω≳Ω_m. To simplify our presentation, mostly, we will think of intersection points ω = ω_m,j^+, rather than intersection sets ω∈ω_m,j^+. Of course we will proceed with the appropriate care to address the general set case whenever necessary. Eventually, we will be able to exclude cases where the minimal intersection ω_m,j^- of the hashing line (m,j) belongs to the upper branch ω^+(Ω); see lemma <ref> below.With these cautioning remarks in mind, we may proceed, mostly, with the additional requirement ω_m,j^- <ω_m,j^+ ,for m≥ 1. The only exception may arise by a tangency of the hashing, at maximal j=j_m, m≥ 1, where ω_m,j^-=minω_m,j^+ .From lemma <ref>, and in particular from (<ref>), (<ref>) we also recall the boundary values for the functions ω^±(Ω) at Ω = Ω_m and Ω = 0. Let us keep in mind how ω_m,j^± come with their shifted and slow variants ω̃_m,j^±, Ω̃_m,j^±,Ω_m,j^±as an entourage; see (<ref>). These also define the control parameters B =B_m,j^± =(-1)^kcos^2(π2Ω̃_m,j^±)/ cosω_m,j^±where the nontrivial, control-induced Hopf bifurcations with eigenvalues μ = ± i ω̃_m,j^± actually occur.With the above notation, the following holds, at ε= ω_k^-1. (i) The values B_0,j^± of the control parameter enumerate all nontrivial Hopf bifurcations with eigenvalues μ = ± iω̃ of frequencies 0<ω̃<ω_k=(k+12)π. 
(ii) The values B_m,j^± with m≥ 1 enumerate all nontrivial Hopf bifurcations with eigenvalue frequencies ω̃ > ω_k and strictly negative control parameter B<0. (iii) All enumerated Hopf eigenvalues are algebraically simple. (iv) The local continuations μ = μ_m,j^-(B) of all enumerated Hopf eigenvalues iω_m,j^- = μ_m,j^-(B_m,j^-) cross the imaginary axis transversely with d/dBRe μ_m,j^-(B) > 0at B=B_m,j^-. At B = max B_m,j^+ , generated by the frequencies ω_m,j^+ , that unstable eigenvalue recovers stability, at the latest.The proof of claims (i) and (ii) follows from our detailed analysis of the characteristic equation χ_0=0 in section <ref>; see in particular lemma <ref>. To prove simplicity of Hopf eigenvalues, (iii), we partially substitute the explicit expression (<ref>) for B into ψ_μ of (<ref>), and take real parts: Re ψ_μ = -ε B+(-1)^kBcosω̃- π2εcos(2π2Ω̃)== -ε B+cos^2(π2Ω̃)- π2ε(2cos^2(π2Ω̃)-1)== ε(π2-B)+ (1-πε)cos^2(π2Ω̃)>0at πε=πω_k^-1=1/(k+12), k≥ 1, and for all enumerated B. Indeed, by lemma <ref>, the only exception to B≤π2 arises for B=π2 at the excluded trivial eigenvalue with frequency ω̃=Ω̃/ε=1/ε =ω_k. This proves simplicity claim (iii). For m≥1, our proof of the remaining crossing and (de)stabilization claims (iv) will be based on the following three ingredients. We will first invoke the implicit function theorem to show that the local continuation map (ε, B) ↦μ=μ(ε,B)is an orientation preserving diffeomorphism, near ε=ω_k^-1 and the enumerated Hopf eigenvalues μ=iω_m,j^± at B=B_m,j^±. In a second step we will then show ε_ω(iω_m,j^-)<0for the partial derivatives, with respect to ω, of the local inverse function (ε,B)=(ε(μ), B(μ)) to (<ref>), at μ=iω_m,j^-. The third ingredient describes the necessary adaptations at ω_m,j^+.We first show how claims (<ref>) and (<ref>) imply the crossing directionRe μ_B>0 ,for the partial derivative of μ(ε, B) with respect to B at μ=iω_m,j^-. Indeed consider the oriented Hopf curveω̃→(ε(iω̃), B(iω̃)) in the (ε,B) plane. See fig. <ref>. By the orientation preserving transformation (<ref>), the region Re μ<0 lies to the left of the Hopf curve. By (<ref>), the tangent to the Hopf curve at ω=ω_m,j^- points strictly to the left of the vertical B-axis at the fixed value ε=ε_k. Therefore, the B-axis crosses the Hopf curve transversely, at ω=ω_m,j^-, and into the unstable region Re μ>0, for increasing B. Thus the diffeomorphism (<ref>) implies the crossing direction (<ref>).The cases B=B_m,j^+ can be treated analogously, with a little extra care. In the case of a single transverse crossing of the hashing line (m,j) with the upper branch ω=ω^+(Ω), at ω=ω_m,j^+, we now have ε_ω(iω_m,j^+)>0.Therefore, the tangent to the Hopf curve at ω=ω_m,j^+ now points strictly to the right of the vertical B-axis at the fixed value ε=ε_k. The previous arguments then show Re μ_B<0, i.e. stabilization towards increasing B. In case of multiple crossings, possibly involving tangents, we can prove stabilization from the last crossing (or tangency) onwards, as claimed in (iv), via generic approximation by an odd number of transverse crossings of the hashing line with the upper branch ω=ω^+(Ω).Put simply, destabilization occurs whenever the increasing hashing lines in the top rows of fig. <ref> enter the interior region of the 2-scale relation Ω = Ω(δ,ω), and stabilization ensues as soon as the hashing lines leave towards the exterior region; see lemma <ref>.To prove claim (iv) for m≥1 it therefore remains to verify claims (<ref>) and (<ref>). 
We will address the analogous, but simpler, case m=0 at the end of the proof.To verify the orientation claim (<ref>) we invoke the implicit function theorem for ψ(μ,ε, B)=0; see (<ref>), (<ref>). Indeed the Jacobian determinants, which determine the local orientations, satisfy ψ_μ·μ_(ε,B)= (-ψ_(ε,B))= (ψ_(ε,B)) .Our enumeration of cases B=B_m,j^± for m≥ 1 above has skipped any positive B^± of lemma <ref>, (<ref>). By lemma <ref> we know -1<B^-≤0. Therefore proposition <ref>, (<ref>) asserts strict positivity of (ψ_(ε, B)) in (<ref>). The enumerated Hopf eigenvalues μ=iω_m,j^± are simple zeros of the complex analytic characteristic function ψ∈ℂ, by claim (iii). Therefore ψ_μ on the left is also strictly positive, by the Cauchy-Riemann equations. This proves strict positivity of μ_(ε,B) and establishes the orientation claim (<ref>). To determine the signs of the tangent partial derivatives ε_ω, as claimed in (<ref>), we recall the definition ε = Ω̃/ω̃ = Ω̃(ω̃)/ω̃ ,where Ω̃(ω̃)=Ω_m+Ω(ω) follows from the ε-independent characteristic equation χ_0(δ, ω, Ω, B)=0 at ω≡ω̃2π, for fixed δ= Ω_m^-1. See (<ref>), (<ref>) and lemmata <ref>, <ref>. Straightforward differentiation of (<ref>) with respect to ω̃>0 or ω yields ε_ω = (Ω̇-Ω̃/ω̃)/ω̃ =(Ω̇-ε)/ω̃ ,in the notation of lemma <ref>, where Ω̇ =dΩ/dω=dΩ̃/dω̃. The definition of ω_m,j^- as the unique intersection of the hashing line (m,j) with the lower branch ω=ω^-(Ω) in (<ref>),and (<ref>), (<ref>) show that (<ref>) implies the sign of ε_ω claimed in (<ref>). See the two top rows of fig. <ref>. The case of a single transverse crossing at ω=ω_m,j^+ leads to ε_ω(iω_m,j^+)>0, analogously. The required adaptations for multiple and/or non-transverse crossings have been described above.It remains to address the case m=0, 0<Ω̃<1. Consider ω_0,j^+, B_0,j^+<0 first; see lemmata <ref> – <ref>. Here each crossing ω=ω_m,j^± is transverse and unique.Transformation (<ref>) remains orientation preserving, by (<ref>) and (<ref>), verbatim as for m≥ 1. Furthermore (<ref>) implies ε_ω >0, for small enough 0<ε<ε_0, by an upper bound on the positive derivatives 0<1/Ω̇=d/dΩω^± (Ω) < 1/ε_0in lemma <ref>. This shows claim (iv), (<ref>) at B=B_0,j^+<0, for m=0. To show claim (iv), (<ref>) at B=B_0,j^->0, for m=0, we first note that (<ref>), (<ref>) now imply orientation reversal in (<ref>), because ψ_(ε,B) <0 .The argument (<ref>), however, remains intact at ω^-(Ω). This shows how the sign reversal claimed in (iv), (<ref>) remains valid for m=0, proving the lemma.Figs. <ref> and <ref> already summarized our results, so far, separately for m=0 and for m≥ 1, B<0. Consider the case m=0 first, i.e. slow Hopf frequencies 0<Ω̃ <Ω_0=1, alias -1<Ω=Ω̃-Ω<0. The branch ω = ω^+(Ω) of Hopf frequencies ω≡ω̃2π provides k'=[k/2]Hopf bifurcations at control parameters B= B_0,j^+, j=1,…,k'. Hashing and strong monotonicity of B=B^+(Ω), lemma <ref>, imply -1<Ω_0,k'^+ < … < Ω_0,2^+<Ω_0,1^+<1 ; -1<B_0,k'^+<… <B_0,2^+<B_0,1^+<0 .By lemma <ref>, unstable eigenvalues μ=μ_R+iω̃ cannot cross any of the lines ω̃≡π2π, for B<0. By proposition <ref>, each of the k'=[k/2] resulting strips Re μ>0 , 0<ω_k-2jπ<ω̃<ω_k -2jπ+π ,j=1,…,k', contains exactly one simple eigenvalue μ_0,j inherited from B=-∞. By theorem <ref> and analytic continuation, this simple eigenvalue persists as B increases, until it disappears into Re μ<0 by simple transverse Hopf bifurcation at B=B_0,j^+ ,ω̃= ω_0,j^+ .Indeed ω̃=ω_0,j^+ belongs to the same strip (<ref>), for each j=1,… , k'. 
For even k=2k', this eliminates all unstable eigenvalues generated at B=-∞, once B_0,1^+<B<0 .For odd k=2k'+1, the same statement remains true, because -1<B<0 renders the additional real eigenvalue stable; see lemma <ref> and corollary <ref>. Note -1<B_0,1^+ here, by lemma <ref>; see also fig. <ref>. These remarks prove the following corollary.Let ε=1/ω_k and assume B<0. Then μ_R<0 for any eigenvalue μ=μ_R +iω̃ with 0≤ω̃<ω_k, if and only if (<ref>) holds. We study B>0 next. In corollary <ref> we have already observed instability, by parity due to the presence of a real eigenvalue μ>0, in case k was odd. Let us therefore consider even k=2k'. At ε=1/ω_k and B=+∞ we encounter the same k' unstable simple complex eigenvalues μ_0,j , one in each of the k' strips (<ref>), as before. This time, however, only k'-1 simple transverse Hopf bifurcations at B=B_0,j^->0 offer their assistance for stabilization by decreasing B>0. Indeed B_0,j^- cancels the instability of μ_0,j+1, for j=1, … ,k'-1, but μ_0,k' remains unstable for all B>0. This proves the following corollary.Let ε =1/ω_k and assume B>0. Then there exists an unstable eigenvalue μ, i.e. Re μ>0. For odd k, the unstable eigenvalue can be taken to be real. For even k ≥ 2, the unstable eigenvalue μ = μ_R +iω̃ can be taken to be strictly complex with 0<ω_k-2π<ω̃<ω_k-π .In particular, there does not exist any region of Pyragas stabilization (near Hopf bifurcation) for control parameters B>0. Henceforth we restrict attention to the remaining case B<0. We fix ε=1/ω_k. All unstable real eigenvalues, or complex eigenvalues μ =μ_R +iω̃ with 0<ω̃ <ω_k come from B=-∞ and have been taken care of in corollary <ref>. By the trapping lemma <ref> for imaginary parts, all remaining changes of stability must arise from the simple transverse Hopf bifurcations at B=B_m,j^± , m≥ 1 ,as enumerated in theorem <ref>. Note how the imaginary parts ω̃ >0 of any unstable eigenvalues μ=μ_R+iω̃ induced by these Hopf bifurcations are confined to the disjoint strips ω̃ = ω +2π(km+[(k+1)/2]+ [(m+1)/2]-j)where π2<ω <3π2 for even k, and -π2<ω <π2 for odd k. This follows from hashing (<ref>), (<ref>) at Hopf bifurcation frequencies ω̃ = ω̃_m,j^±, and persists with instability of μ, by trapping lemma <ref> of imaginary parts. In particular, the strips are disjoint, for different (m,j), and B_m,j^± generate frequencies ω̃_m,j^± which remain in the same strip. See also fig. <ref>. This proves the following corollary.Let 0<ε =1/ω_k≤ε_0 be sufficiently small, and assume B<0. Then there exists an unstable eigenvalue μ, i.e. Re μ >0, if at least one of the following conditions holds: B<B_0,1^+<0 ,or B_m,j^-<B< min B_m,j^+≤0,for some m≥ 1 and some j enumerated in theorem <ref>. Here we define B_m,1^+:=0 for odd m. We have implicitly excepted hashing tangencies Ω_m,j^-= minΩ̃_m,j^+ in equation (<ref>). Indeed this case corresponds to a nontransverse Hopf point, at (scaled) frequency Ω̃_m,j^- = minΩ̃_m,j^+, from the stable side.This does not contribute to the strict unstable dimension E(B).Hence we may assume Ω̃_m,j^- < Ω̃_m,j^+. Then hashing (<ref>) implies B_m,j^-<B_m,j^+. Indeed this follows from strong monotonicity of B^+(Ω) in case Ω_m,j^±= Ω(ω_m,j^±) with ω_m,j^±≥ω; see lemma <ref>. If ω_m,j^-<ω<ω_m,j^+, then we reach the same conclusion, again by strong monotonicity of B^+(Ω̃) and lemma <ref>: B_m,j^-=B^-(Ω_m,j^-)<B^+(Ω_m,j^-)< B^+(Ω_m,j^+)=B_m,j^+ .More systematically, these arguments are collected in the following proposition.Let ε=1/ω_k≤ε_0 be sufficiently small. 
For any m ≥ 1 consider the enumeration of Hopf bifurcation parameters B_m,j^± of theorem <ref>. Then B_m,j^- < B_m,j^+ <0 ,except at a possible hashing tangency Ω_m,j^-= minΩ_m,j^+. Moreover the series B_m,j^+ decreases strictly monotonically in j=1,…, j_m^max, for each fixed m≥ 1.Claim (<ref>) has been proved in (<ref>). Strict monotonicity of B_m,j^+= B^+(Ω_m,j^+) in j follows from strict monotonicity of Ω̃_m,j^+=ω̃_m,j^+/ε in the j-strips (<ref>) and from strict monotonicity of Ω↦ B^+(Ω) in lemma <ref>. This proves the proposition.We summarize the results of this section in a final corollary. Define j_m:= [(m+1)/2] ,for integer m≥ 1. In the following sections we will show, for small enough 0<ε =1/ω_k≤ε_0, that B_0,1^+ is unique, and max B_m,j_m+1^+ <B_0,1^+<0for j_m+1≤ j_m^max ,i.e. as long as Ω_m,j_m+1^+ exists. The maximum is taken over all m≥ 1. See (<ref>) for the delimiter j=1, … , j_m^max of the enumeration B_m,j^±. On the other hand, we will also show B_0,1^+<B_1,1^-<0 , and B_1,1^-≤ B_m,j^-<0 ,for all m≥ 1and j=1, … , min{ j_m,j_m^max} .This identifies the Pyragas region 𝒫 as follows.Let 0<ε=1/ω_k≤ε_0 be chosen small enough and assume the orderings (<ref>) – (<ref>), for all m≥ 1. Then the nonempty Pyragas region 𝒫 = { B<0 | B_0,1^+<B<B_1,1^-}is the only region of control parameters B=12b/ε in the delay equation (<ref>), such that Pyragas stabilization succeeds for the Hopf bifurcation at λ = λ_k=(-1)^k+1ε^-1.By corollary <ref>, instability prevails for all B>0. By corollary <ref>, instability holds for B<B_0,1^+, and for B_1,1^- <B<0. It therefore remains to show that (B_m,j^-, max B_m,j^+)∩ (B_0,1^+, B_1,1^-)= ∅ ,for all m≥ 1 and j=1, … , j_m^max. Strong monotonicity of B_m,j^+ with respect to j, as in proposition <ref>, and assumption (<ref>) imply max B_m,j^+ ≤max B_m,j_m+1^+ < B_0,1^+for all m≥ 1 and j>j_m. This establishes claim (<ref>) for j_m <j≤ j_m^max, provided that j_m <j_m^max. It remains to show claim (<ref>) for 1≤ j≤min{ j_m, j_m^max}. In this case we invoke assumption (<ref>) to conclude an empty intersection (<ref>), again. Since the Pyragas region (<ref>) is nonempty, by assumption (<ref>), this proves the corollary. § LOCALLY UNIFORM EXPANSIONS IN Ε = 1/Ω_K To locate and understand the Pyragas region 𝒫 = { B | B_0,1^+ < B < B_1,1^-} ,in the limit ε= 1/ω_k → 0 of Hopf bifurcations with large unstable dimensions k, it remains to establish the precise locations of the control induced Hopf bifurcations B_m,j^± relative to the gap (<ref>). See assumptions (<ref>) – (<ref>) of corollary <ref>. In the present section we accomplish this task, by expansions with respect to small ε, for arbitrarily bounded m=1,… , m_0, alias δ≥δ_0:= 1/Ω_m_0=1/(2m_0+1).To be precise, we first fix any m_0∈ℕ, alias δ_0>0. In section <ref>, we will choose δ_0 sufficiently small. In the present section we will then consider 0<ε≤ε_0=ε_0(δ_0) small enough for certain ε-expansions of B_m,j^± to hold, uniformly for all m≤ m_0 and j≤ j_m+1, j_m=[(m+1)/2]. Note how the derivatives of ω^±(Ω) remain bounded in the relevant region; see (<ref>). Hence ω_m,j^± and B_m,j^± are defined uniquely by transverse intersections of the 2-scale characteristic equation (<ref>) with the hashing lines ω̃ = Ω̃/ε of (<ref>), (<ref>), in the present section.In particular, the ε-expansions of B_0,1^+ and B_1,1^- will establish the expansions (<ref>) of b_k = 2ε B_0,1^+ and b_k=2ε B_1,1^-; see (<ref>). The limit δ→ 0 of large m →∞ requires a different approach, and will therefore be deferred to the next section. 
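Numerically, the strategy of this section amounts to a contraction: for small ε the hashing line pins Ω near 0, and iterating Ω ↦ ε(ω^±(Ω) - aπ), with the abbreviation a = a_{m,j} = 2j - 1 + (1/2)(-1)^m used below, converges geometrically. A sketch (ours; odd k, compared against the expansions established below):

import numpy as np

def Hopf_B(k, m, j, sign, iters=60):
    # fixed-point iteration for Omega = eps*(omega^{sign}(Omega) - a*pi), odd k
    eps = 1.0/((k + 0.5)*np.pi)
    a = 2*j - 1 + 0.5*(-1)**m
    Om = 0.0
    for _ in range(iters):
        Om_t = (2*m + 1) + Om
        omega = 0.5*np.pi*Om + sign*np.arccos(-Om_t*np.sin(0.5*np.pi*Om))
        Om = eps*(omega - np.pi*a)
    Om_t = (2*m + 1) + Om                     # recompute a consistent pair
    omega = 0.5*np.pi*Om + sign*np.arccos(-Om_t*np.sin(0.5*np.pi*Om))
    return -np.sin(0.5*np.pi*Om)**2/np.cos(omega)

k = 25                                        # odd and moderately large
eps = 1.0/((k + 0.5)*np.pi)
print(Hopf_B(k, 1, 1, -1), -(np.pi/2)**2*eps + (np.pi/2)**3*eps**2)    # B_{1,1}^-
print(Hopf_B(k, 0, 1, +1), -(np.pi/2)**2*eps - 3*(np.pi/2)**3*eps**2)  # B_{0,1}^+

The printed pairs agree through the displayed order ε².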
Our strategy has been outlined in (<ref>) – (<ref>). We first solve the ε-independent 2-scale characteristic equation H(Ω, ω) = 0 of (<ref>) for ω = ω(Ω) explicitly: ω = ω^±(Ω) = (π/2)Ω ± arccos(-Ω̃ sin(π/2 Ω)) ; see lemma <ref>. Here and below we only consider odd k, without loss of generality. Even k add π to ω^±. Recall |ω| ≤ π/2, for odd k. We also recall Ω̃ = 1/δ + Ω and 0 ≤ -Ω̃ sin(π/2 Ω) ≤ 1 , by the discriminant condition (<ref>). Equivalently 0 ≥ Ω ≥ Ω_m . Note bounded derivatives of ω(Ω), locally uniformly for 0 ≥ Ω > Ω_m. We suppress explicit dependence on δ, viz. m, in the present section. Next we insert (<ref>) into the hashing relation (<ref>) of lemma <ref>: Ω = ε (ω^±(Ω) - aπ) with the abbreviation a = a_{m,j} = 2j - 1 + (1/2)(-1)^m . Here we have used that k is odd; the relevant modifications for even k cancel in (<ref>) and below. By the implicit function theorem we can solve (<ref>) for Ω = Ω(ε, a) = Ω_m,j^±(ε) , uniquely, for small enough 0 < ε ≤ ε_0 = ε_0(δ_0). Inserting the result into ω^± of (<ref>) and B of (<ref>) we obtain expansions ω = ω_m,j^±(ε) := ω^±(Ω_m,j^±(ε)) , B = B_m,j^±(ε) := -sin^2(π/2 Ω_m,j^±(ε)) / cos ω_m,j^±(ε) . We collect these straightforward expansions in the following proposition.

For any fixed δ_0 = 1/Ω_m_0 > 0 consider 0 < ε ≤ ε_0 = ε_0(δ_0) small enough. We abbreviate the coefficients α^±_m,j := 4j + (-1)^m - 2 ∓ 1 . Then the expansions for Ω_m,j^±, ω_m,j^±, and B_m,j^± with respect to ε are Ω_m,j^-(ε) = α^-_m,j (-(π/2)ε + 2m ((π/2)ε)^2 + …) ; Ω_m,j^+(ε) = α^+_m,j (-(π/2)ε - 2(m+1) ((π/2)ε)^2 + …) ; ω_m,j^-(ε) = (π/2)(-1 + 2m α^-_m,j (π/2)ε + …) ; ω_m,j^+(ε) = (π/2)(+1 - 2(m+1) α^+_m,j (π/2)ε + …) ; B_m,j^-(ε) = (π/(4m)) α^-_m,j (-(π/2)ε + (1/(2m))(4m^2 - α^-_m,j) ((π/2)ε)^2 + …) ; B_m,j^+(ε) = (π/(4(m+1))) α^+_m,j (-(π/2)ε - (1/(2(m+1)))(4(m+1)^2 + α^+_m,j) ((π/2)ε)^2 + …) . The expansions for B^±, Ω^± hold for even and odd k, alike. The expansions for ω^± are given for odd k. For even k, we have to add π; see lemma <ref>. We omit the obvious and tedious calculations.

In the setting of proposition <ref> we obtain the following expansions and inequalities, for 0 < ε → 0: B_1,1^- = -(π/2)^2 ε + (π/2)^3 ε^2 + … ; B_0,1^+ = -(π/2)^2 ε - 3(π/2)^3 ε^2 + … ; 0 > B_m,1^- > B_m,2^- > B_m,3^- > … For m ≥ 1 and j_m := [(m+1)/2] we obtain the coefficients and expansions α^-_m,j_m = 2m, α^+_m,j_m+1 = 2(m+1) ; B_m,j_m^- = -(π/2)^2 ε + (2m-1)(π/2)^3 ε^2 + … ; B_m,j_m+1^+ = -(π/2)^2 ε - (2m+3)(π/2)^3 ε^2 + … The proof follows from proposition <ref>, by explicit evaluation.

The assumptions (<ref>) – (<ref>) of corollary <ref> are satisfied for all 1 ≤ m ≤ m_0 and sufficiently small 0 < ε ≤ ε_0. In particular, expansions (<ref>), (<ref>) determine the ε-expansions (<ref>) of the boundaries b̲_k, b̄_k of the Pyragas region, in our main theorem <ref>.

For 1 ≤ m ≤ m_0, claim (<ref>) follows by comparison of the expansion (<ref>) for B_m,j_m+1^+ with expansion (<ref>) for B_0,1^+. Likewise (<ref>), (<ref>), and (<ref>), in this order, imply B_1,1^- ≤ B_m,j_m^- < B_m,j_m-1^- < … < B_m,1^- . This proves claim (<ref>). The remaining claim B_0,1^+ < B_1,1^- of (<ref>) is immediate by comparison of the respective expansions (<ref>) and (<ref>). This proves the corollary.

For m ≥ 1, there is an amusing characterization of the critical index j = j_m = [(m+1)/2] as a Pyragas switch index, in our expansions. In fact our ε-expansions (<ref>) easily imply B_m,j+1^+ < B_m,j^- for j = 1, …, j_m , B_m,j+1^+ > B_m,j^- for j > j_m . The interpretation is easy.
In the open interval B ∈ I_m,j := (B_m,j^-, B_m,j^+), consider the Hopf-induced unstable eigenvalue μ = μ_R + iω̃, μ_R > 0, with frequency ω̃ trapped in the disjoint intervals designated by m, j; see lemma <ref>, and (<ref>), (<ref>) with |ω| ≤ π/2. Then (<ref>) states that successive instability intervals I_m,j and I_m,j+1 open a stability window in between, for j ≤ j_m, but overlap for j > j_m. See fig. <ref>. In view of (<ref>), therefore, the stability window
B_m,j_m+1^+ < B < B_m,j_m^-
is the very first stability window encountered, between the instability intervals I_m,j, m ≥ 1, as j decreases from j_m^max to 1, i.e. as B < 0 increases towards zero from absent control at B = -∞. The instability inherited from B = -∞, on the other hand, is only compensated for once B_0,1^+ < B < 0, by the stabilizing sequence of Hopf bifurcations at B = B_0,j^+ in the m = 0 series. The instability interval I_1,1 = (B_1,1^-, B_1,1^+), which starts from B_1,1^- > B_0,1^+, however, extends all the way to B_1,1^+ = 0. Therefore any stability windows between the intervals I_m,j of instability, for m ≥ 1 and j from j_m+1 down to 1, remain ineffective. The only exception is the first such gap (<ref>) which, somewhat miraculously, contains the Pyragas region 𝒫 of (<ref>) by (<ref>), (<ref>), as established above. We are somewhat amazed how all these first stability windows align, simultaneously for all resonance orders m, at the same first order location
B = -(π/2)^2 ε + …,
to contribute to Pyragas stabilization from B = B_0,1^+ to B = B_1,1^- by a second order effect. We will see next how such gaps also arise in the remaining limit δ → 0 of large m → +∞.

§ ASYMPTOTIC EXPANSIONS FOR LARGE Ω_m = 1/δ

In the previous section we have shown how the series of destabilizing Hopf intervals B ∈ I_m,j := (B_m,j^-, B_m,j^+) skip the Pyragas candidate
𝒫 = (B_0,1^+, B_1,1^-),
for bounded values m ≤ m_0, accordingly bounded j ≤ j_m^max, and small enough ε = 1/ω_k ≤ ε_0(δ_0). In other words, we have established assumptions (<ref>) – (<ref>) of corollary <ref>, for arbitrarily bounded m ≤ m_0 and 0 < ε ≤ ε_0(δ_0). In the present section we will complete this analysis, for large m > m_0.

In section <ref> we had fixed m, j, to study ε-expansions of B_m,j^±. We also noticed the central role of
j = j_m := [(m+1)/2],
where the Pyragas switch (<ref>) actually occurs, between B_m,j_m+1^+ and B_m,j_m^-. This suggests a somewhat delicate parametrization of the relevant expansions by ε and δ := 1/Ω_m = 1/(2m+1), both tending to zero in a region δ ≥ δ(ε). Instead, we choose a parametrization of the problem by a rectangular region of (δ, ω). The ε-independent relations Ω^± = Ω(δ, ω), B^± = B(δ, ω) will provide expansions with respect to small δ. At j = j_m for B_m,j_m^-, and at j = j_m+1 for B_m,j_m+1^+, we will also obtain expansions for
ε = ε(δ, ω),
from the hashing relation (<ref>). In other words, we determine ε such that B_m,j_m^- and B_m,j_m+1^+ arise at the frequency parameter ω, for some small δ. The ε-expansions for the Pyragas boundaries B_0,1^+(ε), B_1,1^-(ε), in section <ref>, did not depend on δ. They will allow us to compare the resulting locations, now, uniformly for small 0 < δ ≤ δ_0. This will prove our main theorem, via corollary <ref>. We address the general case in lemma <ref>. The limits ω → ±π/2, for B_m,j^±, will be considered in lemma <ref>. These results address the cases where B_m,j_m^-, B_m,j_m+1^+ actually exist, and ω_m,j_m^- < πΩ_m,j_m^-. The remaining cases where j_m ≥ j_m^max are prepared by expansions for min B, in proposition <ref>, and are resolved in lemma <ref>.
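A small piece of arithmetic that will be used repeatedly in this parametrization (our verification, splitting into parities; δ = 1/(2m+1)):
\[
m \text{ odd: } 2j_m = m+1 = \tfrac{1}{2}\big(\delta^{-1}+1\big), \qquad
m \text{ even: } 2j_m = m = \tfrac{1}{2}\big(\delta^{-1}-1\big),
\]
which is the case-free identity 2j_m = (δ^(-1) - (-1)^m)/2 recorded below, and consequently a_m,j_m = 2j_m - 1 + (1/2)(-1)^m = (1/2)δ^(-1) - 1 in both cases.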
As in section <ref>, we may restrict our attention to odd k, |ω| ≤ π/2. See also (<ref>), (<ref>). We mostly replace m by δ = 1/Ω_m = 1/(2m+1) and think of 0 < δ ≤ δ_0 and 0 < ε ≤ ε_0(δ_0) as small continuous, rather than discrete, real variables in all expansions. For example
2j_m := 2[(m+1)/2] = (1/2)(δ^(-1) - (-1)^m),
a_m,j_m := 2j_m - 1 + (1/2)(-1)^m = (1/2)(α^±_m,j_m ± 1) = (1/2)δ^(-1) - 1,
for the Pyragas switch index j = j_m of (<ref>), (<ref>) and the associated shift a = a_m,j_m in the hashing (<ref>), (<ref>). See also (<ref>). Our δ-expansions are based on section <ref>. From lemmata <ref>, <ref> we recall how the 2-scale characteristic equation in the form
0 = (δ^(-1) + Ω) sin((π/2)Ω) + cos(ω - (π/2)Ω), B = -sin^2((π/2)Ω)/cos ω,
of (<ref>), (<ref>) gives rise to unique functions Ω = Ω(δ, ω), B = B(δ, ω), successively. Insertion into the hashing
ε = Ω/(ω - π a_m,j_m)
with (<ref>), (<ref>) then provides ε = ε^-(δ, ω), such that we encounter
B = B_m,j_m^-(δ, ω) at ε = ε^-(δ, ω),
for -π/2 < ω ≤ (π/2)Ω_m. Similarly, the hashing
ε = Ω/(ω - π a_m,j_m+1),
with a_m,j_m+1 = a_m,j_m + 2 in (<ref>), provides ε = ε^+(δ, ω) such that we encounter
B = B_m,j_m+1^+(δ, ω) at ε = ε^+(δ, ω)
for (π/2)Ω_m ≤ ω < π/2. The established ε-expansions of the Pyragas boundaries B_0,1^+(ε), B_1,1^-(ε) of corollary <ref>, (<ref>) and (<ref>), finally, at ε = ε^±(δ, ω), respectively, allow us to compare these δ-expansions as required in assumptions (<ref>) and (<ref>) of corollary <ref>. Note how any possible nonuniqueness within the sets B_m,j^+ is remedied by our parametrization: we neither claim nor require injectivity of ω ↦ ε_m,j^+(δ, ω), for fixed δ.

For odd k, and uniformly in |ω| ≤ π/2, we obtain the following expansions with respect to small δ:
Ω(δ, ω) = -(c/(π/2))(δ - sδ^2 + …);
B(δ, ω) = -c(δ^2 - 2sδ^3 + …);
ε^±(δ, ω) = (π/2)^(-2) c (δ^2 + (ω/(π/2) ∓ 2 - s)δ^3 + …);
B_0,1^+(δ, ω) = -c(δ^2 + (ω/(π/2) - 2 - s)δ^3 + …);
B_1,1^-(δ, ω) = -c(δ^2 + (ω/(π/2) + 2 - s)δ^3 + …).
Here we use the abbreviations c = cos ω, s = sin ω. Omitted terms are of the first omitted integer order in δ.

To see why we obtain a uniform expansion of Ω, we rewrite (<ref>) in the equivalent form
sin((π/2)Ω) = -c cos((π/2)Ω) δ / (1 + δ(Ω + s)).
For 0 ≤ δ ≤ δ_0, the implicit function theorem provides Ω = Ω(δ, ω), uniformly in ω. Note the important uniform prefactor c = cos ω in
Ω = (Ω/sin((π/2)Ω)) · sin((π/2)Ω) = -cδ · (…),
because Ω/sin((π/2)Ω) is regular nonzero. To obtain the specific expansion (<ref>) for Ω we solve (<ref>) for δ, expand for Ω at Ω = 0, and invert the resulting series for
δ = -sin((π/2)Ω) / (cos(ω - (π/2)Ω) + Ω sin((π/2)Ω)) = -((π/2)/c)Ω + ….
To obtain expansion (<ref>) for B = -sin^2((π/2)Ω)/c we insert (<ref>) into (<ref>) and observe cancellation of the denominator c. Indeed the numerator picks up a factor c^2 from expansion (<ref>). This proves (<ref>). Of course (<ref>) applies to all B_m,j^± = B(δ, ω), identically, at the appropriate values ω. The expansions (<ref>) for ε^± follow from the hashings (<ref>) – (<ref>), if we replace a_m,j_m and a_m,j_m+1 = a_m,j_m + 2 by their appropriate δ-dependent values (1/2)δ^(-1) ± 1 from (<ref>). The prefactor c remains inherited from Ω. To prove claims (<ref>), (<ref>) it only remains to plug (<ref>) into the ε-expansions (<ref>) and (<ref>) for B_1,1^- and B_0,1^+. Note how the new term of order δ^3, which roughly speaking corresponds to ε^(3/2), arises from the leading term -(π/2)^2 ε^± + … alone. This proves the lemma.

Let 0 < δ ≤ δ_0 be small enough and consider odd k. Let Ω, B, ε^±, B_0,1^+, B_1,1^- be expanded as in lemma <ref>. Then we obtain the inequalities
B_1,1^-(δ, ω) < B(δ, ω) < B_0,1^+(δ, ω),
for all 0 < δ ≤ δ_0, |ω| < π/2.
For j_m+1 ≤ j_m^max and at ε = ε^+(δ, ω) we conclude, in particular,
B_m,j_m+1^+ < B_0,1^+(δ, ω),
i.e. assumption (<ref>) of corollary <ref> holds. Likewise, for j_m ≤ j_m^max and at ε = ε^-(δ, ω) we conclude
B_1,1^-(δ, ω) < B_m,j_m^-.
Under the additional assumption
-π/2 < ω_* ≤ πΩ(δ, ω_*),
we can also assert -π/2 < ω ≤ πΩ(δ, ω), for all -π/2 ≤ ω ≤ ω_*. Moreover
ω ↦ B(δ, ω) < 0
is then strictly decreasing, for fixed δ and all -π/2 ≤ ω ≤ ω_*. More specifically, suppose the additional assumption (<ref>) holds at Ω(δ, ω_*) = Ω_m,j_*^-, ω_* = ω^-(Ω_m,j_*^-), for some 1 ≤ j_* ≤ j_m^max, m ≥ 1. Then
B_m,j_*^- < … < B_m,1^- ≤ 0
holds at ε = ε(δ, ω_*), for all 1 ≤ j ≤ j_*. For the above ranges of ε, δ, this establishes the Pyragas region 𝒫.

Claim (<ref>) may look somewhat paradoxical, at first sight. To prove (<ref>), nevertheless, we first divide (<ref>) by c = cos ω > 0 and compare with expansions (<ref>), (<ref>), (<ref>) of lemma <ref>. For |ω| < π/2 - η_0 bounded away from π/2, claim (<ref>) becomes equivalent to the obvious inequalities
ω/(π/2) + 2 - s > -2s > ω/(π/2) - 2 - s, i.e. ω/(π/2) + 2 > -s > ω/(π/2) - 2.
Equality holds for ω = -π/2, on the left, and for ω = +π/2, on the right. Therefore η_0 = 0 does not seem an option for proving inequality (<ref>), uniformly for small δ, at first. However, we obtain uniform expansions for the partial derivatives ∂_ω B, ∂_ω B_0,1^+, ∂_ω B_1,1^- as well, by differentiation of the coefficients of the δ-expansions (<ref>), (<ref>), (<ref>). At ω = -π/2 we obtain
∂_ω((B - B_1,1^-)/c) = ∂_ω(s + ω/(π/2) + 2)δ^3 + … = (1/(π/2))δ^3 + … > 0,
and at ω = +π/2, similarly,
∂_ω((B_0,1^+ - B)/c) = ∂_ω(-ω/(π/2) + 2 - s)δ^3 + … = -(1/(π/2))δ^3 + … < 0.
This establishes positivity of the differences for all |ω| < π/2, uniformly for all 0 < δ ≤ δ_0, and settles claim (<ref>). To prove claim (<ref>) we invoke B(δ, ω) < B_0,1^+(δ, ω) from (<ref>). In fact
B_m,j_m^- = B(δ, ω), B_m,j_m+1^+ = B(δ, ω),
in our parametrization, for any m ≥ 1. Note how (<ref>) refers to possibly different ε = ε^±(δ, ω) given by (<ref>), (<ref>) and (<ref>), (<ref>), respectively. Indeed ε has been eliminated in the 2-scale characteristic equations (<ref>), (<ref>), and only enters after the (m, j)-dependent hashings (<ref>), (<ref>). Thus (<ref>) holds by definition of ε^+(δ, ω). The arguments for claim (<ref>) via ε^-(δ, ω) are completely analogous. To complete the proof we only have to establish the strict monotonicity of ω ↦ B(δ, ω) and of j ↦ B_m,j^-, j = 1, …, j_*, as claimed in (<ref>), (<ref>). We invoke lemmata <ref> and <ref>. By lemma <ref> we have strictly decreasing dependence Ω ↦ ω^-(δ, Ω), for fixed δ > 0. By lemma <ref>, assumption (<ref>) implies that ω ↦ B(δ, ω) decreases strictly, as long as ω ≤ πΩ(δ, ω). By monotonicity of Ω, we remain in this region, for all smaller ω ≥ -π/2, once we are ever inside. This proves claim (<ref>). Specifically, suppose assumption (<ref>) holds for ω_* = ω^-(δ, Ω_m,j_*^-), and hence for all -π/2 ≤ ω ≤ ω_* = ω^-(δ, Ω_m,j_*^-). Then conclusion (<ref>) holds for all -π/2 ≤ ω ≤ ω_*. Hashing (<ref>) implies strict monotonicity of the slow frequencies,
Ω_m,j_*^- < … < Ω_m,1^- ≤ 0.
In particular (<ref>) successively implies
ω_* = ω^-(Ω_m,j_*^-) > … > ω^-(Ω_m,1^-) ≥ -π/2,
B(δ, ω_*) = B_m,j_*^- < … < B_m,1^- ≤ 0,
at ε = ε(δ, ω_*). This proves claim (<ref>). For the above ranges of ε, δ, the inequalities (<ref>) and (<ref>), (<ref>) also validate assumptions (<ref>), (<ref>) of corollary <ref>, respectively.
This establishes the Pyragas region 𝒫, in the above ranges of ε, δ, and the lemma is proved.

We now address the remaining segment of the piecewise strictly monotone curve ω ↦ Ω(δ, ω), where
πΩ(δ, ω) ≤ ω ≤ (π/2)Ω(δ, ω) < 0.
By lemma <ref>, this segment contains
B_min(δ) = min_{|ω| ≤ π/2} B(δ, ω).

Let k be odd. Uniformly, for ω satisfying (<ref>), we have the expansions
Ω(δ, ω) = -(1/(π/2))δ + 𝒪(δ^3);
B(δ, ω) = -δ^2 + 𝒪(δ^4).
In particular this implies
B_min(δ) = -δ^2 + 𝒪(δ^4).

We invoke expansions (<ref>) and (<ref>) of lemma <ref>. Indeed (<ref>) and (<ref>) imply
-2cδ + … = πΩ(δ, ω) ≤ ω ≤ (π/2)Ω(δ, ω) = -cδ + …,
with c = cos ω. In particular ω = 𝒪(δ), uniformly in the region (<ref>). Hence the δ-expansion (<ref>) with s = sin ω implies claim (<ref>). Similarly, (<ref>) and ω = 𝒪(δ) imply (<ref>). Claim (<ref>) follows from the uniform expansion (<ref>) by the remark preceding the proposition, and the proof is complete.

Let 0 < δ ≤ δ_0 be small enough and consider odd k. Let (Ω_*, ω_*) denote the unique intersection point of the line ω = πΩ with the 2-scale curve Ω = Ω(δ, ω). Assume that the hashing line
Ω = ε(ω - π a_m,j_m),
of j_m := [(m+1)/2], intersects the line πΩ = ω, ω ≤ 0, to the left of (Ω_*, ω_*). Then
B_1,1^-(ε) < B_min(δ) < 0.

We recall a_m,j_m = (1/2)δ^(-1) - 1; see (<ref>). Insertion into hashing (<ref>) provides the intersection with πΩ = ω at
ε = Ω/(ω - π a_m,j_m) = (1/π) Ω/(Ω - (1/2)δ^(-1) + 1) = -(1/(π/2)) δΩ/(1 - 2δ - 2δΩ) ≥ -(1/(π/2)) δΩ(δ, ω_*)/(1 - 2δ - 2δΩ(δ, ω_*)) = (π/2)^(-2) δ^2/(1 - 2δ) + 𝒪(δ^4) = (π/2)^(-2)(δ^2 + 2δ^3 + 𝒪(δ^4)).
Here we have used intersection to the left of (Ω_*, ω_*), i.e. Ω ≤ Ω_* = Ω(δ, ω_*) < 0, and expansion (<ref>) at (Ω_*, ω_*). The ε-expansion (<ref>) of corollary <ref> for B_1,1^- = B_1,1^-(ε), the estimate ε = ε^- = 𝒪(δ^2) of (<ref>), and comparison with (<ref>) via (<ref>) then yield
B_1,1^-(ε) = -(π/2)^2 ε + (π/2)^3 ε^2 + 𝒪(ε^3) ≤ -δ^2 - 2δ^3 + 𝒪(δ^4) < B_min(δ) < 0.
This proves claim (<ref>) and the lemma.

§ PROOF OF THEOREM 1.2

Our proof of theorem <ref> is based on just a detailed stability analysis of the 2-delay characteristic equation (<ref>), with ϑ = 0, at the Hopf bifurcation points λ = λ_k := (-1)^(k+1) ω_k of (<ref>). Emphasis is on control-induced eigenvalues μ = iω̃ in the limit ε → 0 of large frequencies 1/ε = ω_k = (k + 1/2)π. We summarize the proof; see figs. <ref> and <ref> for an illustration. We only address the case of large odd k; the case of even k is analogous.

In section <ref> we have studied the 2-scale characteristic equation (<ref>), (<ref>), which eliminates ε > 0. Instead, imaginary eigenvalues μ = ±iω̃ have been represented by a slow Hopf frequency Ω̃ = εω̃, in addition to ω̃ itself. The case of real eigenvalues μ, and their crossing at μ = 0 due to the scaled control parameter b = 2εB, was treated in lemma <ref> and corollary <ref>. The hashing Ω̃ = εω̃ was detailed, and normalized to Ω = Ω̃ - (2m+1), ω ≡ ω̃ (mod 2π), in lemma <ref>. In lemma <ref> we rewrote the characteristic equation in normalized frequencies Ω, ω instead of Ω̃, ω̃. Lemma <ref> studied the resulting fundamental nonlinear 2-scale relation between Ω and ω. The resulting control parameters B were addressed in lemma <ref>. Emphasis there was on monotonicity properties with respect to ω. Section <ref> has been devoted to the resulting nontrivial control-induced Hopf bifurcations, at B = B_m,j^±, in contrast to the spectrum inherited from the uncontrolled case. Here m indicates the proximity of an m:1 resonance of ω̃ with ω_k. The detailed analysis included simplicity of Hopf eigenvalues, and their transverse crossing directions; see theorem <ref>.
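For the reader's convenience during this summary, we recall the key landmark values established above (our recap of the corollary, lemma and proposition of the preceding sections):
\[
B_{0,1}^+ = -\Big(\tfrac{\pi}{2}\Big)^2\varepsilon - 3\Big(\tfrac{\pi}{2}\Big)^3\varepsilon^2 + \dots, \qquad
B_{1,1}^- = -\Big(\tfrac{\pi}{2}\Big)^2\varepsilon + \Big(\tfrac{\pi}{2}\Big)^3\varepsilon^2 + \dots, \qquad
B_{\min}(\delta) = -\delta^2 + \mathcal{O}(\delta^4).
\]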
The analysis culminated in corollaries <ref> and <ref>. In corollary <ref> we established the absence of any Pyragas control, for control parameters B > 0. The central corollary <ref> established the control region
B ∈ 𝒫 = (B_0,1^+, B_1,1^-)
as the only Pyragas region, under assumptions (<ref>) – (<ref>) of certain inequalities among the control-induced Hopf parameter values B = B_m,j^±. Section <ref> collected ε-expansions for B_m,j^±, and auxiliary quantities like the fast and slow normalized Hopf frequencies ω_m,j^± and Ω_m,j^±; see proposition <ref>. These Taylor expansions in ε amount to expansions in the limit of arbitrarily large unstable dimensions k, and arbitrarily rapid oscillation frequencies ω_k = (k + 1/2)π, of the original Hopf bifurcations of (<ref>) in absence of any control. Corollary <ref> then established the crucial Hopf inequalities (<ref>) – (<ref>) for small enough 0 < ε ≤ ε_0(δ_0), uniformly for bounded indices m ≤ m_0 and j ≤ j_m + 1 in the Hopf series B_m,j^+, as well as for B_m,j^- with j ≤ j_m. Here j_m := [(m+1)/2]. In particular this settled assumption (<ref>). At the end of this section, we observed how our proof hinged on a miraculous gap property of instability intervals B ∈ I_m,j = (B_m,j^-, B_m,j^+): the first gap occurs at j = j_m and contains the Pyragas region (<ref>). See fig. <ref>.

To complete the proof, the limit of large near-resonances m > m_0, alias small δ = 1/Ω_m = 1/(2m+1), was addressed in section <ref>. For odd k, lemma <ref> collected δ-expansions for Ω, B, ε^±, and B_0,1^+, B_1,1^- in terms of the (normalized) fast Hopf frequencies ω. The Taylor expansions with respect to δ address the limit of m:1 resonances, for m → ∞. In lemma <ref> this settled the remaining two assumptions (<ref>) and (<ref>) of corollary <ref>, up to one exceptional case. The exceptional case was caused by our additional assumption, thus far, that all relevant Hopf points B_m,j^-, j = 1, …, j_m in (<ref>) occur in the (normalized) frequency range
-π/2 ≤ ω ≤ πΩ < 0,
i.e. at, or to the right of, the unique intersection (Ω_*, ω_*) of the 2-scale frequency relation Ω = Ω(δ, ω) with the straight line ω = πΩ in the (Ω, ω)-plane. See (<ref>), (<ref>), (<ref>). Since Hopf points themselves originate from intersections of straight hashing lines with that 2-scale frequency relation, the only remaining case was that the hashing line at j = j_m = [(m+1)/2] intersects ω = πΩ to the left of (Ω_*, ω_*). This case was addressed, and settled, in lemma <ref>. Indeed the minimal possible value B_min of control parameters induced by the 2-scale frequency relation then satisfies
B_1,1^-(ε) < B_min(δ) < 0.
Since B_min ≤ B_m,j^- for all j, this also established the remaining assumption (<ref>) in this one remaining configuration. This completes the proof of our main theorem <ref>.

[BeCo63] R. Bellman and K.L. Cooke. Differential-Difference Equations. Academic Press, New York, 1963.
[Die&al95] O. Diekmann, S.A. van Gils, S.M. Verduyn-Lunel and H.-O. Walther. Delay Equations: Functional-, Complex-, and Nonlinear Analysis. App. Math. Sci. 110, Springer-Verlag, New York, 1995.
[Dor89] P. Dormayer. Smooth bifurcation of symmetric periodic solutions of functional differential equations. J. Differ. Equations 82 (1989) 109–155.
[Fie&al07] B. Fiedler, V. Flunkert, M. Georgi, P. Hövel and E. Schöll. Refuting the odd number limitation of time-delayed feedback control. Phys. Rev. Lett. 98 (2007) 114101.
[Fie&al08] B. Fiedler, V. Flunkert, M. Georgi, P. Hövel and E. Schöll.
Beyond the odd-number limitation of time-delayed feedback control. In Handbook of Chaos Control (E. Schöll et al., eds.), Wiley-VCH, Weinheim, (2008) 73–84.
[Fie&al10] B. Fiedler, V. Flunkert, P. Hövel and E. Schöll. Delay stabilization of periodic orbits in coupled oscillator systems. Phil. Trans. Roy. Soc. A 368 (2010) 319–341.
[FieMP89] B. Fiedler and J. Mallet-Paret. Connections between Morse sets for delay differential equations. J. Reine Angew. Math. 397 (1989) 23–41.
[FiOl16] B. Fiedler and S. Oliva. Delayed feedback control of a delay equation at Hopf bifurcation. J. Dyn. Differ. Equations 28 (2016) 1357–1391.
[Hale77] J.K. Hale. Theory of Functional Differential Equations. Springer-Verlag, New York, 1977.
[HaleVL93] J.K. Hale and S.M. Verduyn-Lunel. Introduction to Functional Differential Equations. Springer-Verlag, New York, 1993.
[Har&al06] F. Hartung, T. Krisztin, H.-O. Walther and J. Wu. Functional differential equations with state-dependent delays: theory and applications. In Handbook of Differential Equations: Ordinary Differential Equations, Vol. III (A. Cañada, P. Drábek and A. Fonda, eds.), Elsevier/North-Holland, Amsterdam, (2006) 435–545.
[Ju&al07] W. Just, B. Fiedler, V. Flunkert, M. Georgi, P. Hövel and E. Schöll. Beyond the odd number limitation: A bifurcation analysis of time-delayed feedback control. Phys. Rev. E 76 (2007) 026210.
[KapYor74] J.L. Kaplan and J.A. Yorke. Ordinary differential equations which yield periodic solutions of differential delay equations. J. Math. Analysis Appl. 48 (1974) 317–324.
[KolMysh99] V. Kolmanovski and A. Myshkis. Introduction to the Theory and Applications of Functional Differential Equations. Kluwer, Dordrecht, 1999.
[Kri08] T. Krisztin. Global dynamics of delay differential equations. Period. Math. Hung. 56 (2008) 83–95.
[Kur71] J. Kurzweil. Small delays don't matter. In Proc. Symp. Differential Equations and Dynamical Systems, Warwick 1969 (D. Chillingworth, ed.), Springer-Verlag, Berlin, 1971, 47–49.
[Lop17] A. López Nieto. Heteroclinic connections in delay equations. Master's Thesis, Freie Universität Berlin, 2017.
[MP88] J. Mallet-Paret. Morse decompositions for differential delay equations. J. Differ. Equations 72 (1988) 270–315.
[MPNu92] J. Mallet-Paret and R.D. Nussbaum. Boundary layer phenomena for differential-delay equations with state-dependent time-lags: I. Arch. Ration. Mech. Analysis 120 (1992) 99–146.
[MPNu96] J. Mallet-Paret and R.D. Nussbaum. Boundary layer phenomena for differential-delay equations with state-dependent time-lags: II. J. Reine Angew. Math. 477 (1996) 129–197.
[MPNu03] J. Mallet-Paret and R.D. Nussbaum. Boundary layer phenomena for differential-delay equations with state-dependent time-lags: III. J. Differ. Equations 189 (2003) 640–692.
[MPNu11] J. Mallet-Paret and R.D. Nussbaum. Stability of periodic solutions of state-dependent delay-differential equations. J. Differ. Equations 250 (2011) 4085–4103.
[MPSe96a] J. Mallet-Paret and G. Sell. Systems of differential delay equations: Floquet multipliers and discrete Lyapunov functions. J. Differ. Equations 125 (1996) 385–440.
[MPSe96b] J. Mallet-Paret and G. Sell. The Poincaré–Bendixson theorem for monotone cyclic feedback systems with delay. J. Differ. Equations 125 (1996) 441–489.
[Mysh49] A.D. Myshkis. General Theory of Differential Equations with Retarded Argument. AMS Translations, Ser. I, vol. 4.
AMS, Providence, 1962. Translated from Uspekhi Mat. Nauk (N.S.) 4 (33) (1949), 99–141.
[Nu78] R.D. Nussbaum. Differential-Delay Equations with Two Time Lags. Mem. Am. Math. Soc. 205, Providence, RI, 1978.
[Nu02] R.D. Nussbaum. Functional differential equations. In Handbook of Dynamical Systems, Vol. II (B. Fiedler, ed.), Elsevier/North-Holland, Amsterdam, (2002) 461–499.
[Nak97] H. Nakajima. On analytical properties of delayed feedback control of chaos. Phys. Lett. A 232 (1997) 207–210.
[NakUe98] H. Nakajima and Y. Ueda. Half-period delayed feedback control for dynamical systems with symmetries. Phys. Rev. E 58 (1998) 1757–1763.
[Pyr92] K. Pyragas. Continuous control of chaos by self-controlling feedback. Phys. Lett. A 170 (1992) 421–428.
[Pyr12] K. Pyragas. A twenty-year review of time-delay feedback control and recent developments. Int. Symp. Nonl. Th. Appl., Palma de Mallorca, 2012.
[Wal83] H.-O. Walther. Bifurcation from periodic solutions in functional differential equations. Math. Z. 182 (1983) 269–289.
[Wal95] H.-O. Walther. The 2-Dimensional Attractor of ẋ(t) = -μx(t) + f(x(t-1)). Mem. Amer. Math. Soc. 544, Providence, RI, 1995.
[Wal14] H.-O. Walther. Topics in delay differential equations. Jahresber. DMV 116 (2014), 87–114.
[Wri55] E.M. Wright. On a non-linear differential-difference equation. J. Reine Angew. Math. 194 (1955) 66–87.
[Wu96] J. Wu. Theory and Applications of Partial Functional Differential Equations. Springer-Verlag, New York, 1996.
[YuGuo14] J. Yu and Z. Guo. A survey on the periodic solutions to Kaplan-Yorke type delay differential equation-I. Ann. Differ. Equations 30 (2014) 97–114.
t-Structures on stable derivators and Grothendieck hearts

Manuel Saorín[The first named author was supported by the research projects from the Ministerio de Economía y Competitividad of Spain (MTM2016-77445-P) and the Fundación `Séneca' of Murcia (19880/GERM/15), both with a part of FEDER funds.] Jan Šťovíček[The second named author was supported by Neuron Fund for Support of Science.] Simone Virili[The third named author was supported by the Ministerio de Economía y Competitividad of Spain via a grant `Juan de la Cierva-formación'. He was also supported by the Fundación `Séneca' of Murcia (19880/GERM/15) with a part of FEDER funds.]

We prove that, given any strong and stable derivator and a t-structure in its base triangulated category D, the t-structure canonically lifts to all the (coherent) diagram categories and each incoherent diagram in the heart uniquely lifts to a coherent one. We use this to show that the t-structure being compactly generated implies that the coaisle is closed under directed homotopy colimits which, in turn, implies that the heart is an (Ab.5) Abelian category. If, moreover, D is a well-generated algebraic or topological triangulated category, then the heart of any accessibly embedded (in particular, compactly generated) t-structure has a generator. As a consequence, it follows that the heart of any compactly generated t-structure of a well-generated algebraic or topological triangulated category is a Grothendieck Abelian category.

Keywords: t-structures, heart, derivator, Grothendieck Abelian category. MSC: 18E30, 18E40, 18E15.

§ INTRODUCTION

t-Structures in triangulated categories were introduced by Beilinson, Bernstein and Deligne <cit.> in their study of perverse sheaves on an algebraic or analytic variety. A t-structure in a triangulated category 𝒟 is a pair of full subcategories satisfying a suitable set of axioms (see the precise definition in Subsection <ref>) which guarantees that their intersection is an Abelian category ℋ, called the heart of the t-structure.
One then naturally defines a cohomological functor H : 𝒟 ⟶ ℋ, which allows one to develop an intrinsic (co)homology theory, where the homology “spaces” are objects of 𝒟 itself. t-Structures have been used in many branches of mathematics, with special impact in algebraic geometry, algebraic topology and representation theory of algebras.

Given a t-structure in a triangulated category D, and considering the induced Abelian category ℋ, a natural problem consists in finding necessary and sufficient conditions on the t-structure and on the ambient category for the heart to be a “nice” Abelian category. When our triangulated category has (co)products, the category ℋ is known to be (co)complete (see <cit.>) and, using the classical hierarchy of Abelian categories due to Grothendieck <cit.>, the natural question is the following:

When is the heart ℋ a Grothendieck Abelian category?

As one might expect, the real issue is to prove that ℋ has exact directed colimits. In this respect, we encounter a phenomenon which seems invisible to the triangulated category D alone, namely directed homotopy colimits and the question whether ℋ is closed under these. To work with homotopy colimits, we need a certain enhancement of D, and we choose Grothendieck derivators, as this is in some sense the minimal homotopy theoretic framework where a well-behaved calculus of homotopy (co)limits is available. This said, we are immediately led to the second main problem of the paper:

How do t-structures interact with strong and stable Grothendieck derivators?

The study of Question <ref> has a long tradition in algebra. In its initial steps, the focus was almost exclusively put on the case of the so-called Happel-Reiten-Smalø t-structures, introduced in <cit.> and treated in much greater generality in <cit.>. These are t-structures on a derived category 𝒟(𝒢) of an Abelian category 𝒢 induced by a torsion pair in 𝒢. The study of conditions for the heart of the Happel-Reiten-Smalø t-structure in 𝒟(𝒢), for a Grothendieck or module category 𝒢, to be again a Grothendieck or a module category, has received a lot of attention in recent years (see <cit.> and <cit.>). Let us remark that the first named author with C. Parra <cit.> gave a complete answer to Question <ref> in this particular case: the heart of the Happel-Reiten-Smalø t-structure 𝐭_τ, associated to a torsion pair τ = (𝒯, ℱ) in a Grothendieck Abelian category 𝒢, is again a Grothendieck Abelian category if, and only if, the torsion-free class ℱ is closed under taking directed colimits in 𝒢.

When more general t-structures are considered, the answers to Question <ref> are scarcer and they are often inspired in one or another way by tilting theory. In a sense, the classical derived Morita Theorems of Rickard <cit.> (for the bounded setting) and Keller <cit.> (for the unbounded case) can be seen as the first examples where an answer to the problem is given. Namely, if A and B are ordinary algebras and _B T_A is a two-sided tilting complex (see <cit.>), then the triangulated equivalence -⊗_B^𝕃 T : 𝒟(B) ⟶ 𝒟(A) takes the canonical t-structure (𝒟^≤0(B), 𝒟^≥0(B)) to the pair (T^⊥_>0, T^⊥_<0), which is then a t-structure in 𝒟(A), whose heart is equivalent to Mod-B. This includes the case of a classical (n-)tilting module in the sense of <cit.>. The dual of a (not necessarily classical) (n-)tilting A-module is that of a (big) (n-)cotilting A-module Q, in which case the second named author proved that (^⊥_<0 Q, ^⊥_>0 Q) is a t-structure in 𝒟(A) whose heart is a Grothendieck Abelian category (see <cit.>).
These two results have recently been extended to include all silting sets of compact objects and all pure-injective cosilting sets in a compactly generated triangulated category (see <cit.>, also for the used terminology). Results saying that cosilting t-structures have Grothendieck hearts under appropriate assumptions also include <cit.> and <cit.>, whereas conditions under which the t-structure (T^⊥_>0, T^⊥_<0) obtained from a non-classical tilting module T has a Grothendieck heart were given in <cit.>. A common feature of the results in <cit.> is that the heart is proved to be a Grothendieck Abelian category rather indirectly, using the pure-injectivity of certain cotilting or cosilting objects and ideas from model theory of modules. Last but not least, there is a family of results which provides evidence that t-structures generated by a set of compact objects should, under all reasonable circumstances, have Grothendieck hearts. Briefly summarizing, <cit.> establishes this result for any compactly generated t-structure in the derived category 𝒟(R) of a commutative ring R which is given by a left bounded filtration by supports, <cit.> gives the same result for any non-degenerate compactly generated t-structure in an algebraic compactly generated triangulated category, and finally Bondarko establishes such a result in <cit.> for practically all triangulated categories which one encounters in practice.

Our approach here is rather different from the ones above and, in some sense, much more direct. It is more in the spirit of Lurie's <cit.>, where a criterion for a t-structure to have a Grothendieck heart is given in the language of ∞-categories. Our aim is to reach the corresponding criterion for exactness of directed colimits in the heart faster and in a way hopefully more accessible to the readers concerned with the representation theory of associative algebras and related fields. To outline our strategy, consider a t-structure 𝐭 = (𝒰, Σ𝒱) on a triangulated category D with coproducts and let ℋ = 𝒰 ∩ Σ𝒱 be the heart of 𝐭. Then we have the following well-known chain of implications:

𝐭 is compactly generated (see p. <ref>) ⟹ 𝒱 is closed under coproducts ⟹ ℋ has exact coproducts.

However, 𝒱 being closed under coproducts is not enough to have exact directed colimits in ℋ. This follows from the main results of <cit.> (see also Example <ref> below). What we need instead is a stronger condition: we assume that 𝒱 is closed under directed homotopy colimits. For instance, in the case of the derived category D = 𝒟(𝒢) of a Grothendieck Abelian category 𝒢, the homotopy colimit is the total left derived functor hocolim_I = 𝐋colim_I : 𝒟(𝒢^I) ⟶ 𝒟(𝒢) of the usual colimit functor colim_I : 𝒢^I → 𝒢. The problem is, of course, that albeit there always exists a canonical comparison functor 𝒟(𝒢^I) → 𝒟(𝒢)^I, it is typically far from being an equivalence. At this point, the language of derivators naturally enters the scene, as the categories of the form 𝒟(𝒢^I) can be naturally assembled to form a derivator (for all the undefined terminology we refer to Section <ref>). To see this, one should recall that, when ℳ is a cofibrantly generated model category with 𝒲 its class of weak equivalences (e.g., the usual injective model structure of 𝒞(𝒢), with 𝒲 the class of quasi-isomorphisms), the functor category ℳ^I admits a cofibrantly generated model structure, with weak equivalences calculated level-wise, for each small category I, and the assignment I ↦ 𝔻(I) := Ho(ℳ^I) gives a well-defined derivator (see <cit.>), i.e.
a 2-functor 𝔻 : 𝐂𝐚𝐭^op ⟶ 𝐂𝐀𝐓 which satisfies certain axioms, where 𝐂𝐚𝐭 is the 2-category of small categories and 𝐂𝐀𝐓 is the 2-“category” of all categories. Furthermore, 𝔻 is strong and stable provided ℳ is a stable model category. The axioms in particular imply that this 2-functor naturally takes values in the 2-“category” of all triangulated categories, that is, 𝔻(I) is a triangulated category for each I ∈ 𝐂𝐚𝐭. A prototypical example is precisely the assignment I ↦ 𝒟(𝒢^I) for a Grothendieck Abelian category 𝒢. The main results of the paper will be proven in general for such a strong and stable derivator 𝔻. Note also that, denoting by 𝟙 the one-point category with the identity morphism only, and letting 𝒟 := 𝔻(𝟙) be the base category of 𝔻, one usually views 𝔻 as an enhancement of the triangulated category D, which is in some sense the minimalistic enhancement which allows for a well-behaved calculus of homotopy (co)limits or, more generally, homotopy Kan extensions (see <cit.>). More precisely, a homotopy colimit functor hocolim_I : 𝔻(I) → 𝔻(𝟙) is simply a left adjoint to (pt_I)^* : 𝔻(𝟙) → 𝔻(I), where pt_I is the unique functor I → 𝟙 in 𝐂𝐚𝐭. The existence of the adjoint is ensured by the axioms of a derivator and this notion of homotopy colimit is consistent with our previous definition of hocolim_I as the total left derived functor of colim_I. The advantage of derivators is that we can now give a precise meaning to what it means that 𝒱 is closed under directed homotopy colimits but, on the other hand, we now fully hit Question <ref>. We started with a t-structure 𝐭 = (𝒰, Σ𝒱) in the base category 𝔻(𝟙), and it is a natural question whether this t-structure lifts to the other triangulated categories 𝔻(I), with I ∈ 𝐂𝐚𝐭. Even when it does, what we are really concerned with is the relation between the heart in 𝔻(I) and the I-shaped diagrams in the heart of 𝔻(𝟙). Luckily, both these problems have a very natural solution that is condensed in the following theorem (for the proof see Section <ref>):

Let 𝔻 : 𝐂𝐚𝐭^op → 𝐂𝐀𝐓 be a strong and stable derivator, and 𝐭 = (𝒰, Σ𝒱) a t-structure in 𝔻(𝟙). If we let
𝒰_I := {𝒳 ∈ 𝔻(I) : 𝒳_i ∈ 𝒰, ∀i ∈ I}, 𝒱_I := {𝒴 ∈ 𝔻(I) : 𝒴_i ∈ 𝒱, ∀i ∈ I},
then 𝐭_I := (𝒰_I, Σ𝒱_I) is a t-structure in 𝔻(I). Furthermore, the diagram functor dia_I : 𝔻(I) ⟶ 𝔻(𝟙)^I induces an equivalence ℋ_I ≅ ℋ^I between the heart ℋ_I of 𝐭_I and the category ℋ^I of diagrams of shape I in the heart ℋ of 𝐭.

Now we can state our main answer to the part of Question <ref> which is concerned with the exactness of directed colimits in the heart of a t-structure. This result will be proved in Section <ref>.

Let 𝔻 : 𝐂𝐚𝐭^op → 𝐂𝐀𝐓 be a strong stable derivator and let 𝐭 = (𝒰, Σ𝒱) be a t-structure in 𝔻(𝟙). Then we have the implications:

𝐭 is compactly generated ⟹ 𝒱 is closed under directed homotopy colimits ⟹ ℋ has exact directed colimits.

What remains is to give a criterion for the heart to have a generator. As it turns out, this is, unlike the (Ab.5) condition, a problem of mostly a technical nature. For most triangulated categories (or derivators) and t-structures arising in practice, the answer is affirmative. In the next theorem we give a general criterion in the setting of what one may call “accessible stable derivators”, that is, the ones associated with a stable combinatorial model category. This will be treated in Section <ref>.

Let (ℳ; 𝒲, ℬ, ℱ) be a stable combinatorial model category, D = Ho(ℳ) the triangulated homotopy category and 𝐭 = (𝒰, Σ𝒱) a λ-accessibly embedded t-structure in D for some infinite regular cardinal λ (e.g., D a well-generated algebraic or topological triangulated category, with a t-structure generated by a small set of objects). Then the heart ℋ = 𝒰 ∩ Σ𝒱 of 𝐭 has a generator.
If, in particular, 𝐭 is homotopically smashing (equivalently, λ = ℵ_0), then ℋ is a Grothendieck Abelian category.

As an immediate consequence, we obtain the following corollary which provides an alternative to <cit.> in showing that the heart of a compactly generated t-structure is, in practice, always a Grothendieck Abelian category (see also Remark <ref>).

Let D = Ho(ℳ), where ℳ is a combinatorial stable model category. If 𝐭 = (𝒰, Σ𝒱) is a compactly generated t-structure in D, then the heart ℋ = 𝒰 ∩ Σ𝒱 is a Grothendieck Abelian category.

Acknowledgement. It is a pleasure for us to thank Fritz Hörmann and Moritz Groth for helpful discussions and suggestions. We are also grateful to Rosie Laking and Gustavo Jasso for pointing out some analogies with results of Jacob Lurie in the ∞-categorical setting, and to Mikhail Bondarko for telling us about his results in <cit.>. Finally, we are indebted to an anonymous referee whose numerous remarks and suggestions have helped to greatly improve the exposition and clarity of our results.

§ PRELIMINARIES AND NOTATION

In this first section we fix most of the notations and conventions about general category theory that will be needed in the rest of the paper. This includes basic facts about additive and Abelian categories, torsion pairs, triangulated categories, t-structures, categories with weak equivalences and model categories. All the results included in this section are known, so the proofs are omitted in favor of suitable references to the existing literature.

§.§ Conventions and basic results in category theory

Given a category 𝒞 and two objects x and y in 𝒞, we denote by 𝒞(x, y) := Hom_𝒞(x, y) the set of all morphisms x → y in 𝒞. Throughout the paper, all the subcategories will be assumed to be full, so we just say “subcategory” to mean “full subcategory”. Similarly, we generally drop the distinction between a subcategory of 𝒞 and a subclass of Ob(𝒞), as the two things univocally determine each other. We denote by 𝐀𝐛 the category of Abelian groups.

Ordinals. Any ordinal λ can be viewed as a category in the following way: the objects of λ are the ordinals α < λ and, given α, β < λ, the set λ(α, β) is a point if α ≤ β, while it is empty otherwise. Following this convention:
* 𝟙 = {0} is the category with one object and no non-identity morphisms;
* 𝟚 = {0 → 1} is the category with one non-identity morphism;
* in general, 𝐧 = {0 → 1 → … → (n-1)}, for any n ∈ ℕ_>0.

Functor categories, limits and colimits. A category I is said to be (skeletally) small when (the isomorphism classes of) its objects form a set. If 𝒞 and I are an arbitrary and a small category, respectively, a functor I → 𝒞 is said to be a diagram in 𝒞 of shape I. The category of diagrams in 𝒞 of shape I, and natural transformations between them, is denoted by 𝒞^I. A diagram X of shape I is also denoted as (X_i)_i∈I, where X_i := X(i) for each i in I. When every diagram of shape I has a limit (resp., colimit), we say that 𝒞 has all I-limits (resp., I-colimits). In this case, lim_I : 𝒞^I → 𝒞 (resp., colim_I : 𝒞^I → 𝒞) will denote the (I-)limit (resp., (I-)colimit) functor, which is the right (resp., left) adjoint to the constant diagram functor κ_I : 𝒞 → 𝒞^I. A particular case, very important for us, comes when I is a directed set, viewed as a small category in the usual way. The corresponding colimit functor is the (I-)directed colimit functor colim_I : 𝒞^I → 𝒞. The I-diagrams in 𝒞 are usually called directed systems of shape I. The category 𝒞 is said to be complete (resp., cocomplete, bicomplete) when all I-limits (resp., I-colimits, both) exist in 𝒞, for any small category I.

2-Categories of categories.
We denote by 𝐂𝐚𝐭 the 2-category of small categories and by 𝐂𝐚𝐭^op the 2-category obtained by reversing the direction of the functors in 𝐂𝐚𝐭 (but leaving the direction of natural transformations unchanged). Similarly, we denote by 𝐂𝐀𝐓 the 2-“category” of all categories. This, when taken literally, may originate some set-theoretical problems that, for our constructions, can be safely ignored: see the discussion after <cit.>.

Given two natural transformations α : F ⇒ G : 𝒞 → 𝒟 and β : G ⇒ H : 𝒞 → 𝒟, we denote by β∘α : F ⇒ H their vertical composition, that is, (β∘α)_C := β_C ∘ α_C for each C ∈ 𝒞. Furthermore, given two natural transformations α : F_1 ⇒ F_2 : 𝒞 → 𝒟 and β : G_1 ⇒ G_2 : 𝒟 → ℰ, we denote by β⊛α : G_1∘F_1 ⇒ G_2∘F_2 their horizontal composition, that is, (β⊛α)_C := β_F_2(C) ∘ G_1(α_C) = G_2(α_C) ∘ β_F_1(C), for each C ∈ 𝒞. With a slight abuse of notation, we also let β⊛F_1 := β⊛𝕀_F_1 and G_1⊛α := 𝕀_G_1⊛α.

Categories of adjoint functors and mates. Let 𝒞 and 𝒟 be two categories. Given a pair of adjoint functors L : 𝒞 ⇄ 𝒟 : R, we use the compact notation L ⊣_η,ϵ R to mean that L is left adjoint to R, with unit η : 𝕀_𝒞 ⇒ R∘L and counit ϵ : L∘R ⇒ 𝕀_𝒟. In particular, the following relations hold:
𝕀_L = (ϵ⊛L) ∘ (L⊛η) and 𝕀_R = (R⊛ϵ) ∘ (η⊛R).
Furthermore, we let LAdj(𝒞, 𝒟) (resp., RAdj(𝒞, 𝒟)) be the (full) subcategory of Fun(𝒞, 𝒟) spanned by the left (resp., right) adjoint functors.

Given two categories 𝒞 and 𝒟, there is an equivalence of categories Φ : LAdj(𝒞, 𝒟) ⟶ RAdj(𝒟, 𝒞)^op, which is constructed as follows:
* for each left adjoint L fix a right adjoint Φ(L) of L (that is, L ⊣ Φ(L));
* given two left adjoints L and L', where L ⊣_η,ϵ Φ(L) and L' ⊣_η',ϵ' Φ(L'), and a natural transformation α : L ⇒ L', one defines Φ(α) : Φ(L') ⇒ Φ(L) as follows:
Φ(α) := (Φ(L)⊛ϵ') ∘ (Φ(L)⊛α⊛Φ(L')) ∘ (η⊛Φ(L')).
Furthermore, the bijection Φ_L,L' : LAdj(𝒞, 𝒟)(L, L') → RAdj(𝒟, 𝒞)(Φ(L'), Φ(L)) respects compositions and identities. In particular, a given α : L ⇒ L' is a natural isomorphism if and only if Φ(α) is a natural isomorphism.

Given a natural transformation between left adjoints α : L ⇒ L', its image Φ(α) in RAdj(𝒟, 𝒞) via the equivalence described above is said to be the total mate of, or the natural transformation conjugated to, α.

Consider now a square in 𝐂𝐀𝐓 that commutes up to a natural transformation α: it is given by functors F : 𝒞 → 𝒞', G : 𝒟 → 𝒟', L : 𝒞 → 𝒟, L' : 𝒞' → 𝒟', together with α : L'∘F ⇒ G∘L. If there are adjunctions L ⊣_η,ϵ R : 𝒟 → 𝒞 and L' ⊣_η',ϵ' R' : 𝒟' → 𝒞', then we can paste the square with the two adjunctions, which yields
α_* := (R'G⊛ϵ) ∘ (R'⊛α⊛R) ∘ (η'⊛FR) : F∘R ⇒ R'∘G.
Dually, given adjunctions F_! ⊣_η^F,ϵ^F F : 𝒞 → 𝒞' and G_! ⊣_η^G,ϵ^G G : 𝒟 → 𝒟', we can consider the pasting
α_! := (ϵ^G⊛LF_!) ∘ (G_!⊛α⊛F_!) ∘ (G_!L'⊛η^F) : G_!∘L' ⇒ L∘F_!.
In fact, there is a close relation between α_! and α_*:

Let 𝒞, 𝒞', 𝒟 and 𝒟' be categories, consider the adjunctions
L ⊣_η,ϵ R : 𝒟 → 𝒞, L' ⊣_η',ϵ' R' : 𝒟' → 𝒞', F_! ⊣_η^F,ϵ^F F : 𝒞 → 𝒞', G_! ⊣_η^G,ϵ^G G : 𝒟 → 𝒟',
and a natural transformation α : L'∘F ⇒ G∘L. Then α_* : F∘R ⇒ R'∘G is the natural transformation conjugated to α_! : G_!∘L' ⇒ L∘F_!. In particular, α_* is a natural isomorphism if, and only if, α_! is a natural isomorphism. Furthermore, the operations (-)_* and (-)_! are each other's inverse, in the sense that (α_*)_! = α = (α_!)_*.

§.§ Additive categories, Abelian categories and torsion pairs

Additive categories. Recall that a category is said to be additive if it is an 𝐀𝐛-enriched category with a 0-object and finite products and coproducts. An important property of additive categories is that, in this context, finite products and coproducts coincide.
More precisely, given an additive category 𝒞 and two objects A_1 and A_2, any coproduct (A, (ϵ_i : A_i → A)_i=1,2) (resp., any product (A, (π_i : A → A_i)_i=1,2)) can be completed to a biproduct, that is, a triple (A, (ϵ_i : A_i → A)_i=1,2, (π_i : A → A_i)_i=1,2) such that the following identities hold:
π_1ϵ_1 = 𝕀_A_1, π_2ϵ_2 = 𝕀_A_2, ϵ_1π_1 + ϵ_2π_2 = 𝕀_A, π_2ϵ_1 = 0 = π_1ϵ_2.
This allows one to interpret (column-finite) matrices as morphisms between (infinite) coproducts.

Let 𝒞 be an additive category and consider the following sequence in 𝒞:
A →^α B →^β C.
We say that the above sequence is split-exact if there exist α' : B → A and β' : C → B such that the triple (B, (α, β'), (α', β)) is a biproduct of A and C.

Note that a sequence A →^α B →^β C in an additive category 𝒞 is split-exact if, and only if, either one of the following equivalent conditions holds:
* it is a kernel-cokernel sequence (i.e., α is the kernel of β, and β is the cokernel of α) such that α is a section (resp., β is a retraction);
* there are morphisms α' : B → A and β' : C → B such that α'α = 𝕀_A, ββ' = 𝕀_C and 𝕀_B = αα' + β'β (i.e., the conditions βα = 0 = α'β' are redundant).
Indeed, notice for example that βα = βαα'α = β(𝕀_B - β'β)α = (𝕀_C - ββ')βα = 0.

When α : A → A' and β : B' → B are isomorphisms, the following are trivial examples of split-exact sequences:
A →^α A' →^0 0, 0 →^0 B' →^β B.

Let 𝒞 be an additive category.
* Consider the following sequence in 𝒞:
A →^α A' ⊔ B' →^β B, with α := \begin{pmatrix} α_1 \\ α_2 \end{pmatrix} and β := \begin{pmatrix} β_1 & β_2 \end{pmatrix}.
This sequence is split-exact if β∘α = 0 and both α_1 : A → A' and β_2 : B' → B are isomorphisms as, in this case, it is enough to choose α' := (α_1^(-1), 0) and β' := (0, β_2^(-1))^T. Similarly, the sequence is split-exact if β∘α = 0 and both α_2 and β_1 are isomorphisms.
* If X ∈ 𝒞 is an object such that the countable coproduct X^(ℕ) of copies of X exists in 𝒞, then the following sequence is split-exact:
X^(ℕ) →^α X^(ℕ) →^β X,
where
α = \begin{pmatrix} 1 & & & \\ -1 & 1 & & \\ & -1 & 1 & \\ & & ⋱ & ⋱ \end{pmatrix}, β = \begin{pmatrix} 1 & 1 & 1 & ⋯ \end{pmatrix},
and one may take
α' = \begin{pmatrix} 0 & -1 & -1 & -1 & ⋯ \\ & 0 & -1 & -1 & ⋯ \\ & & 0 & -1 & ⋯ \\ & & & ⋱ & ⋱ \end{pmatrix}, β' = \begin{pmatrix} 1 \\ 0 \\ 0 \\ ⋮ \end{pmatrix}.

Subcategories of additive categories. Given an additive category 𝒞 and a (always full) subcategory 𝒮 ⊆ 𝒞, we shall denote by add_𝒞(𝒮) (resp., Add_𝒞(𝒮)), or simply add(𝒮) (resp., Add(𝒮)) if no confusion is possible, the subcategory of 𝒞 spanned by the direct summands of finite (resp., small) coproducts of objects in 𝒮. We use the notation 𝒮 = lim→ 𝒮 to mean that 𝒮 is closed under taking directed colimits, that is, given F : I → 𝒞 for I a directed set, such that F(i) ∈ 𝒮 for all i ∈ I, whenever the colimit colim_I F exists in the ambient category, it is an object of 𝒮. A family 𝒮 of objects is said to be a set of generators when the functor ∐_S∈𝒮 𝒞(S, -) : 𝒞 ⟶ 𝐀𝐛 is faithful. An object G is a generator of 𝒞 when {G} is a set of generators.

(Ab.5) and Grothendieck (Abelian) categories. Let 𝒜 be an Abelian category. Recall from <cit.> that 𝒜 is called (Ab.5) when it is (Ab.3) (= cocomplete) and the directed colimit functor colim_I : 𝒜^I → 𝒜 is exact, for any directed set I. An (Ab.5) Abelian category having a set of generators (equivalently, a generator) is said to be a Grothendieck Abelian category. Such a category always has enough injectives, and any of its objects has an injective envelope (see <cit.>). Moreover, it is always a complete (and cocomplete) category (see <cit.>).

Finitely presented objects. When 𝒞 is a cocomplete additive category, an object X of 𝒞 is called finitely presented when 𝒞(X, -) : 𝒞 → 𝐀𝐛 preserves directed colimits, that is, for any directed set I and any diagram (Y_i)_i∈I in 𝒞^I, the following canonical map is an isomorphism:
colim_I 𝒞(X, Y_i) ⟶ 𝒞(X, colim_I Y_i),
where the first colimit is taken in 𝐀𝐛 and the second in 𝒞.
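For instance (a standard illustration, ours, not needed in the sequel): in the module category Mod-R over a ring R, this notion recovers the classical one,
\[
X \in \text{Mod-}R \text{ is finitely presented} \iff X \cong \operatorname{coker}\big(R^m \to R^n\big) \text{ for some } m, n \in \mathbb{N},
\]
and {R} is a set of finitely presented generators, so Mod-R is locally finitely presented in the sense recalled next.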
When 𝒢 is a Grothendieck Abelian category with a set of finitely presented generators (which, in this setting, is equivalent to saying that each object of 𝒢 is a directed colimit of finitely presented objects), we say that 𝒢 is locally finitely presented.

Torsion pairs. A torsion pair in an Abelian category 𝒜 is a pair τ = (𝒯, ℱ) of subcategories satisfying the following two conditions:
(TP.1) 𝒜(T, F) = 0, for all T ∈ 𝒯 and F ∈ ℱ;
(TP.2) for any object X of 𝒜 there is a short exact sequence ε_X : 0 → T_X → X → F_X → 0, where T_X ∈ 𝒯 and F_X ∈ ℱ.
In such case, the short exact sequence ε_X from (TP.2) is uniquely determined, up to a unique isomorphism, and the assignment X ↦ T_X (resp., X ↦ F_X) underlies a functor 𝒜 → 𝒯 (resp., 𝒜 → ℱ) which is right (resp., left) adjoint to the inclusion functor 𝒯 → 𝒜 (resp., ℱ → 𝒜). We say that τ is of finite type provided ℱ = lim→ ℱ.

§.§ Triangulated categories and t-structures

Triangulated categories. We refer to <cit.> for the precise definition of triangulated category. In particular, given a triangulated category 𝒟, we always denote by Σ : 𝒟 → 𝒟 the suspension functor, and we denote (distinguished) triangles either by X → Y → Z →^+ or by X → Y → Z → ΣX. Unlike the terminology used in the abstract setting of additive categories, in the context of triangulated categories a weaker version of the notion of “set of generators” is commonly used. Namely, a set 𝒮 ⊆ 𝒟 is called a set of generators of 𝒟 if an object X of 𝒟 is zero whenever 𝒟(Σ^k S, X) = 0, for all S ∈ 𝒮 and k ∈ ℤ. In case 𝒟 has coproducts, we say that an object X is compact when the functor 𝒟(X, -) : 𝒟 → 𝐀𝐛 preserves coproducts. We say that 𝒟 is compactly generated when it has a set of compact generators. Given a set 𝒳 of objects in 𝒟 and a subset I ⊆ ℤ, we let
𝒳^⊥_I := {Y ∈ 𝒟 : 𝒟(X, Σ^i Y) = 0, for all X ∈ 𝒳 and i ∈ I},
^⊥_I 𝒳 := {Z ∈ 𝒟 : 𝒟(Z, Σ^i X) = 0, for all X ∈ 𝒳 and i ∈ I}.
If I = {i} for some i ∈ ℤ, then we let 𝒳^⊥_i := 𝒳^⊥_I and ^⊥_i 𝒳 := ^⊥_I 𝒳. If i = 0, we even let 𝒳^⊥ := 𝒳^⊥_0 and ^⊥𝒳 := ^⊥_0 𝒳. In particular, 𝒮 is a set of generators if, and only if, 𝒮^⊥_ℤ = 0.

Cohomological functors. Given a triangulated category 𝒟 and an Abelian category 𝒜, an additive functor H^0 : 𝒟 → 𝒜 is said to be a cohomological functor if, for any given triangle X → Y → Z → ΣX, the sequence H^0(X) → H^0(Y) → H^0(Z) is exact in 𝒜. In particular, one obtains a long exact sequence as follows:
… → H^(n-1)(Z) → H^n(X) → H^n(Y) → H^n(Z) → H^(n+1)(X) → …,
where H^n := H^0 ∘ Σ^n, for any n ∈ ℤ.

t-Structures. A t-structure in 𝒟 is a pair 𝐭 = (𝒰, 𝒲) of subcategories satisfying the following axioms:
(t-S.1) 𝒟(U, Σ^(-1)W) = 0, for all U ∈ 𝒰 and W ∈ 𝒲;
(t-S.2) Σ𝒰 ⊆ 𝒰;
(t-S.3) for each X ∈ 𝒟, there are U_X ∈ 𝒰, V_X ∈ Σ^(-1)𝒲 and a triangle U_X → X → V_X → ΣU_X in 𝒟.
One can see that, in such case, both classes are closed under taking direct summands in 𝒟, that 𝒲 = Σ(𝒰^⊥) and 𝒰 = ^⊥(Σ^(-1)𝒲) = ^⊥(𝒰^⊥). For this reason, we write a t-structure as 𝐭 = (𝒰, Σ(𝒰^⊥)) or 𝐭 = (𝒰, Σ𝒱), meaning that 𝒱 := 𝒰^⊥. We will call 𝒰 and 𝒰^⊥ the aisle and the co-aisle of the t-structure, respectively. The triangle of the above axiom (t-S.3) is uniquely determined by X, up to a unique isomorphism, and defines functors τ_𝒰 : 𝒟 → 𝒰 and τ^(𝒰^⊥) : 𝒟 → 𝒰^⊥ which are right and left adjoints to the respective inclusions. We call them the left and right truncation functors with respect to the given t-structure. Furthermore, the above triangle will be referred to as the truncation triangle of X with respect to 𝐭.

We say that a t-structure (𝒰, Σ𝒱) is generated by a set 𝒮, when Σ𝒱 = 𝒮^⊥_<0 (equivalently, 𝒱 = 𝒮^⊥_≤0). When 𝒟 has coproducts, we say that the t-structure is compactly generated when it is generated by a set 𝒮 consisting of compact objects in 𝒟; in this case, we say that 𝒮 is a set of compact generators of the aisle 𝒰 or of the t-structure.

Hearts.
Given a t-structure 𝐭 = (𝒰, Σ𝒱) in a triangulated category, the subcategory ℋ := 𝒰 ∩ Σ𝒱 ⊆ 𝒟 is called the heart of the t-structure and it is an Abelian category, where the short exact sequences “are” the triangles of 𝒟 with the three terms in ℋ. Moreover, with an obvious abuse of notation, the assignments X ↦ τ_𝒰 ∘ τ^(Σ(𝒰^⊥))(X) and X ↦ τ^(Σ(𝒰^⊥)) ∘ τ_𝒰(X) define two naturally isomorphic cohomological functors H^0 : 𝒟 → ℋ (see <cit.>).

Let D be a triangulated category together with a t-structure 𝐭 = (𝒰, Σ𝒱) and heart ℋ := 𝒰 ∩ Σ𝒱. Given a torsion pair τ = (𝒯, ℱ) in ℋ we can define a new t-structure 𝐭_τ = (𝒰_τ, Σ𝒱_τ) on D, called the Happel-Reiten-Smalø tilt of 𝐭 with respect to τ (see <cit.>), where
𝒰_τ := Σ𝒰 * 𝒯, and 𝒱_τ := ℱ * 𝒱,
with the convention that, given two classes 𝒳, 𝒴 ⊆ D, Z ∈ 𝒳 * 𝒴 if and only if there exists a triangle X → Z → Y → ΣX in 𝒟, with X ∈ 𝒳 and Y ∈ 𝒴.

§.§ Categories with weak equivalences

Categories with weak equivalences. Let 𝒞 be a category and let 𝒲 be a collection of morphisms containing all the isomorphisms in 𝒞. The pair (𝒞, 𝒲) is said to be a category with weak equivalences (or a relative category) if, given two composable morphisms ϕ and ψ, whenever two elements of {ϕ, ψ, ψϕ} belong to 𝒲 so does the third. The elements of 𝒲 are called weak equivalences. Given a category with weak equivalences (𝒞, 𝒲), its universal localization is a pair (𝒞[𝒲^(-1)], F) consisting of a category 𝒞[𝒲^(-1)] and a functor F : 𝒞 → 𝒞[𝒲^(-1)] such that F(ϕ) is an isomorphism for all ϕ ∈ 𝒲, and it is universal with respect to these properties, that is, if G : 𝒞 → 𝒟 is a functor such that G(ϕ) is an isomorphism for all ϕ ∈ 𝒲, then there exists a unique functor G' : 𝒞[𝒲^(-1)] → 𝒟 such that G'F = G (see <cit.>).

Derived functors. Let (𝒞, 𝒲), (𝒞', 𝒲') be categories with weak equivalences and suppose that their universal localizations exist. A functor 𝐋G : 𝒞[𝒲^(-1)] → 𝒞'[𝒲'^(-1)], together with a natural transformation α : 𝐋G ∘ F ⇒ F' ∘ G, is called the total left derived functor of G : 𝒞 → 𝒞' if the pair (𝐋G, α) is terminal among all pairs (H, β) with H : 𝒞[𝒲^(-1)] → 𝒞'[𝒲'^(-1)] and β : H ∘ F ⇒ F' ∘ G. That is, given any (H, β), there is a unique natural transformation γ : H ⇒ 𝐋G such that β = α ∘ γF. The notion of total right derived functor 𝐑G of G is defined dually.

Model categories. A model structure on a bicomplete category 𝒞 is a triple (𝒲, ℬ, ℱ) of classes of morphisms, closed under retracts, called respectively the weak equivalences, cofibrations, and fibrations, such that (𝒞, 𝒲) is a category with weak equivalences and satisfying a series of axioms, for which we refer to <cit.>. A model category is then a bicomplete category equipped with a model structure, i.e. a quadruple (𝒞; 𝒲, ℬ, ℱ). The mere existence of a model structure for a category with weak equivalences allows one to give an explicit construction of the universal localization 𝒞[𝒲^(-1)], which is traditionally called the homotopy category of 𝒞 in this context, and denoted by Ho(𝒞). Even better, model structures allow one to construct and compute derived functors as well. To this end, an adjunction (F, G) : 𝒞 ⇄ 𝒞' between two model categories 𝒞, 𝒞' with model structures (𝒲, ℬ, ℱ) and (𝒲', ℬ', ℱ'), respectively, is called a Quillen adjunction if it satisfies one of the following equivalent conditions (see <cit.>):
* the left adjoint F : 𝒞 → 𝒞' preserves cofibrations and trivial cofibrations;
* the right adjoint G : 𝒞' → 𝒞 preserves fibrations and trivial fibrations.
Given a Quillen adjunction, the total derived functors (𝐋F, 𝐑G) exist and form an adjunction between Ho(𝒞) and Ho(𝒞'). Moreover, 𝐋F(X) for an object X of 𝒞 can be computed by applying F to a cofibrant replacement of X, and dually for 𝐑G.

In the context of algebraic examples, model structures on Abelian (even Grothendieck) categories play a prominent role. In particular, many of our examples will arise from the so-called Abelian model structures (see <cit.>). The following example allows one to encode the machinery of classical homological algebra in the framework of model categories.

Given a Grothendieck Abelian category 𝒢, we will denote by 𝒞(𝒢), 𝒦(𝒢) and 𝒟(𝒢) the category of (unbounded) cochain complexes of objects of 𝒢, the homotopy category of 𝒞(𝒢) and the derived category of 𝒢, respectively (see <cit.>). Recall that 𝒞(𝒢) is a bicomplete category. With the class 𝒲 of quasi-isomorphisms in 𝒞(𝒢), the pair (𝒞(𝒢), 𝒲) is a category with weak equivalences. Furthermore, taking ℱ to be the class of all the epimorphisms with dg-injective kernels and letting ℬ be the class of monomorphisms, 𝒞(𝒢) with (𝒲, ℬ, ℱ) is a model category (see for example <cit.>, <cit.> or <cit.> for details and a proof). The homotopy category in this case is 𝒟(𝒢).

§ PRELIMINARIES ON DERIVATORS

In this section we fix some notational conventions and we recall the basic definitions about prederivators, derivators, morphisms and 2-morphisms between them. Furthermore, we collect some useful facts about pointed, additive, strong and stable derivators. All these results are probably known to experts but, in several cases, we have not been able to find suitable references in the existing literature. In those cases we have included a short proof.

§.§ (Pre)Derivators

Prederivators. A pre-derivator is a strict 2-functor 𝔻 : 𝐂𝐚𝐭^op → 𝐂𝐀𝐓. All along this paper, we will use the following notational conventions:
* the letter 𝔻 will denote a (pre)derivator;
* for any natural transformation α : u ⇒ v : J → I in 𝐂𝐚𝐭, we will use the notation α^* := 𝔻(α) : u^* ⇒ v^* : 𝔻(I) → 𝔻(J). Furthermore, we denote respectively by u_! and u_* the left and the right adjoint to u^* (whenever they exist), and call them respectively the left and right homotopy Kan extension of u;
* the letters U, V, W, X, Y, Z will be used either for objects in the base 𝔻(𝟙) or for (incoherent) diagrams in 𝔻(𝟙), that is, functors I → 𝔻(𝟙), for some small category I;
* the letters 𝒰, 𝒱, 𝒲, 𝒳, 𝒴, 𝒵 will be used for objects in some image 𝔻(I) of the derivator, for I a category (possibly) different from 𝟙. Such objects will be usually referred to as coherent diagrams of shape I;
* given I ∈ 𝐂𝐚𝐭, consider the unique functor pt_I : I → 𝟙. We usually denote by hocolim_I := (pt_I)_! : 𝔻(I) → 𝔻(𝟙) and holim_I := (pt_I)_* : 𝔻(I) → 𝔻(𝟙) respectively the left and right homotopy Kan extensions of pt_I; these functors are called respectively homotopy colimit and homotopy limit.

For a given object i ∈ I, we also denote by i the inclusion i : 𝟙 → I such that 0 ↦ i. We obtain an evaluation functor i^* : 𝔻(I) → 𝔻(𝟙). For an object 𝒳 ∈ 𝔻(I), we let 𝒳_i := i^*𝒳. Similarly, for a morphism α : i → j in I, one can interpret α as a natural transformation from i : 𝟙 → I to j : 𝟙 → I. In this way, to any morphism α in I, we can associate α^* : i^* ⇒ j^*. For an object 𝒳 ∈ 𝔻(I), we let 𝒳_α := α^*_𝒳 : 𝒳_i → 𝒳_j. For any I in 𝐂𝐚𝐭, we denote by
dia_I : 𝔻(I) ⟶ 𝔻(𝟙)^I
the diagram functor, such that, given 𝒳 ∈ 𝔻(I), dia_I(𝒳) : I → 𝔻(𝟙) is defined by dia_I(𝒳)(i →^α j) = (𝒳_i →^(𝒳_α) 𝒳_j). We will refer to dia_I(𝒳) as the underlying (incoherent) diagram of the coherent diagram 𝒳.

Let 𝔻 : 𝐂𝐚𝐭^op → 𝐂𝐀𝐓 be a prederivator. Given I ∈ 𝐂𝐚𝐭, consider the unique functor pt_I : I → 𝟙, let X ∈ 𝔻(𝟙) and consider 𝒳 := pt_I^* X ∈ 𝔻(I). Then the underlying diagram dia_I(𝒳) ∈ 𝔻(𝟙)^I is constant, that is, 𝒳_i = X for all i ∈ I and, for all α : i → j in I, the map 𝒳_α : 𝒳_i → 𝒳_j is the identity of X. The objects isomorphic to one of the form pt_I^* X ∈ 𝔻(I) for some X ∈ 𝔻(𝟙) are called constant (coherent) diagrams.

The 2-category of prederivators. Prederivators can be organized into a 2-category, as is explained in <cit.>.
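Before spelling this 2-category out, here is a toy instance of the conventions above (our illustration, using only the definitions): for I = 𝟚 = {0 → 1} with non-identity morphism α, and 𝒳 ∈ 𝔻(𝟚),
\[
dia_{\mathbf{2}}(\mathcal{X}) = \big(\mathcal{X}_0 \xrightarrow{\;\mathcal{X}_\alpha\;} \mathcal{X}_1\big) \in \mathbb{D}(\mathbb{1})^{\mathbf{2}},
\]
and 𝒳 is a constant coherent diagram exactly when 𝒳 ≅ pt_𝟚^* X for some X ∈ 𝔻(𝟙), in which case the remark above says that 𝒳_α is an identity morphism.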
The 2-category of prederivators. Prederivators can be organized into a 2-category, as is explained in <cit.>. Given prederivators 𝔻 and 𝔻', a morphism of prederivators F:𝔻→𝔻' consists of functors F_I:𝔻(I)→𝔻'(I), one for each I∈Cat, and natural equivalences γ^F_u:u^*∘F_I≅F_J∘u^*, one for each functor u:J→I, subject to coherence relations (as in <cit.>, we will be sloppy and use the same symbol for γ^F_u and its inverse). Given morphisms of prederivators F,G:𝔻→𝔻', a 2-morphism (also called a natural transformation) α:F⇒G consists of a family of natural transformations α_I:F_I⇒G_I, one for each I∈Cat, which are compatible in a standard way with the natural equivalences γ^F_u and γ^G_u as above, for all functors u in Cat. More details can be found in our references.

Any ordinary category 𝒞 gives rise to a prederivator 𝕐_𝒞:Cat^op→CAT by the rule 𝕐_𝒞(I)=𝒞^I. Such prederivators are called represented, <cit.>, or Yoneda prederivators. The construction shows that 𝒞 embeds into the 2-category of prederivators (up to usual set-theoretical issues, see <cit.> for details).

Derivators. A prederivator 𝔻 is a derivator if it satisfies the following four axioms:

*(Der.1) if ∐_i∈I J_i is a disjoint union of small categories, then the canonical functor 𝔻(∐_I J_i)→∏_I 𝔻(J_i) is an equivalence of categories;

*(Der.2) for each I∈Cat and each morphism f:𝒳→𝒴 in 𝔻(I), f is an isomorphism if, and only if, i^*(f):i^*(𝒳)→i^*(𝒴) is an isomorphism in 𝔻(𝟙) for each i∈I;

*(Der.3) for each functor u:I→J in Cat, the functor u^* has both a left adjoint u_! and a right adjoint u_* (i.e. homotopy Kan extensions are required to exist).

Before stating the last axiom (Der.4), let us introduce the following notation. Suppose we are given functors u:J→I, v:I'→I, u':J'→I' and w:J'→J in Cat, together with a natural transformation α:v∘u'⇒u∘w, that is, a square of small categories which commutes up to α. Let 𝔻:Cat^op→CAT be a prederivator that satisfies (Der.3). Applying 𝔻 to the above square, we get a natural transformation α^*:(u')^*∘v^*⇒w^*∘u^* between functors 𝔻(I)→𝔻(J'). In this setting, we define α_!:=(α^*)_!:w_!∘(u')^*⇒u^*∘v_! and α_*:=(α^*)_*:v^*∘u_*⇒u'_*∘w^*, where (-)_! and (-)_* are defined right before Lemma <ref>. Hence, we deduce by that lemma that α_! is an isomorphism if and only if α_* is and, furthermore, (α_!)_*=α^*=(α_*)_!.

In order to properly state (Der.4), we need to start with a special case of the above square. Indeed, let u:J→I be a functor in Cat and let i∈I. We define the slice category u/i (resp., i/u) whose objects are pairs (j,a:u(j)→i) (resp., (j,a:i→u(j))), where j∈J and a is a morphism in I. Furthermore, a morphism f:(j,a)→(j',a') in u/i (resp., in i/u) is a morphism f:j→j' in J such that a=a'∘u(f) (resp., u(f)∘a=a'). Then, "forgetting the second component of objects" gives well-defined functors pr_i:u/i⟶J and pr^i:i/u⟶J. We then get two squares in Cat that commute up to natural transformations γ:u∘pr_i⇒i∘pt_u/i and δ:i∘pt_i/u⇒u∘pr^i, where γ_(j,a):=a:u(j)→i for each (j,a)∈u/i and δ_(j,b):=b:i→u(j) for each (j,b)∈i/u. We finally have all the ingredients to properly state the axiom (Der.4):

*(Der.4) homotopy Kan extensions can be computed pointwise, that is, given u:J→I in Cat and i∈I, the natural transformations γ_!:hocolim_u/i∘pr_i^*⇒i^*∘u_! and δ_*:i^*∘u_*⇒holim_i/u∘(pr^i)^* are invertible.

Finally, let us note that 1- and 2-cells in the 2-category of derivators are defined exactly as for prederivators. In other words, derivators form a full sub-2-category of the 2-category of prederivators.

Let 𝔻 be a prederivator that satisfies (Der.3), u:J→I in Cat and i∈I.
Consider the natural transformation γ:u∘pr_i⇒i∘pt_u/i described in the above discussion. Now recall that, by (<ref>), we have that (γ_!)_*=γ^*. Expanding this equality we get:

γ^*=(γ_!)_*=(pt_u/i^*i^*⊛ϵ_u)∘(pt_u/i^*⊛γ_!⊛u^*)∘(η_u/i⊛pr_i^*u^*),

where ϵ_u:u_!u^*⇒𝕀_𝔻(I) and η_u/i:𝕀_𝔻(u/i)⇒pt_u/i^*(pt_u/i)_! are the counit and unit, respectively, of the corresponding adjunctions. A similar observation can be made regarding the natural transformation δ.

The Yoneda prederivator 𝕐_𝒞 (Example <ref>) is a derivator if and only if 𝒞 is a complete and cocomplete category.

Given a derivator 𝔻 and a small category A, we can define a prederivator 𝔻^A:Cat^op→CAT by 𝔻^A(I):=𝔻(A×I) (and in the obvious way on functors and natural transformations). Then 𝔻^A is in fact a derivator by <cit.> and it is called the A-shift of 𝔻. Moreover, given a functor u:I→J in Cat, one obtains a morphism of derivators u^*:𝔻^J→𝔻^I (see <cit.>).

Let 𝔻:Cat^op→CAT be a derivator, I∈Cat and j∈I. Then, using (Der.1) and (Der.4), the triangle formed by j_!:𝔻(𝟙)→𝔻(I), dia_I:𝔻(I)→𝔻(𝟙)^I and -⊗j:𝔻(𝟙)→𝔻(𝟙)^I commutes up to a canonical isomorphism dia_I∘j_!≅-⊗j, where -⊗j is the left adjoint to the obvious "evaluation at j" functor (-)_j:𝔻(𝟙)^I→𝔻(𝟙). Furthermore, given X∈𝔻(𝟙), we have that (X⊗j)(k)≅∐_I(j,k)X, for all k∈I.

The following computations will be useful later on: Let 𝔻:Cat^op→CAT be a derivator, I a small category, J⊆I a (full) subcategory and u:J→I the inclusion functor. Then, the following assertions hold true:

*letting u_!⊣u^*:𝔻(J)⇄𝔻(I) be the induced adjunction, with unit η_u and counit ϵ_u, the following natural transformation is invertible: u^*⊛ϵ_u=(η_u⊛u^*)^-1:u^*u_!u^*⟹̃u^*;

*if k∈J⊆I, so that k=u∘k:𝟙→I, the following natural transformation is invertible: k^*u^*⊛ϵ_u=k^*⊛ϵ_u:k^*u_!u^*⟹̃k^*.

(1). By <cit.>, as u is fully faithful, also u_! is fully faithful and, equivalently, η_u is a natural isomorphism. Then, by the triangle unit-counit identities (<ref>), we obtain that u^*⊛ϵ_u=(η_u⊛u^*)^-1:u^*u_!u^*⟹̃u^* is a natural isomorphism. (2) follows from (1) and the fact that 𝔻 is a strict 2-functor.

An important special case of a full subcategory is a cosieve: it is a full subcategory J⊆I such that, whenever j∈J and j→k is a morphism in I, then k∈J. In that case, we also know what k^*u_!𝒳 looks like for k∈I∖J. Namely, k^*u_!𝒳 is, by <cit.>, the initial object of 𝔻(𝟙) for every 𝒳. Thus, in the case of pointed derivators (see Subsection <ref> below), u_! is called the left extension by zero functor.

Preservation of homotopy Kan extensions. Given a morphism of derivators F=(F_I,γ^F_u):𝔻⟶𝔻' and u:J→I, we have the following natural transformations: (γ_u^F)_!:u_!F_J⟹F_Iu_! and (γ_u^F)_*:F_Iu_*⟹u_*F_J. If either of these is a natural isomorphism, we say that F preserves homotopy left or right Kan extensions along u, respectively. Recall from <cit.> that F is left exact if it preserves homotopy pullbacks and final objects, and right exact if it preserves homotopy pushouts and initial objects.

The following statement is well-known in ordinary category theory. Here we extend it to arbitrary morphisms of derivators: A morphism of derivators which preserves directed homotopy colimits and finite coproducts also preserves arbitrary coproducts.

Let G:𝔻→𝔻' be a morphism of derivators that preserves directed homotopy colimits and finite coproducts, and let I be a set, viewed as a discrete category. We consider the directed set P:=𝒫^<ω(I) of the finite parts of I, ordered by inclusion, and consider it as a posetal category. There is an obvious inclusion functor u:I→P, such that i↦{i}.
Furthermore, pt_P∘u=pt_I:I→𝟙. By the compatibility of mates with pasting (see <cit.>), to prove that G commutes with coproducts indexed by I, that is, with homotopy left Kan extensions along pt_I (use (Der.1) for this equivalence), it is enough to verify that G commutes, separately, with homotopy left Kan extensions along u and along pt_P; the latter is trivial, as P is directed. It remains to prove that (γ_u^G)_!:u_!∘G_I⟹G_P∘u_! is invertible. By (Der.2), (γ_u^G)_! is an isomorphism if, and only if, F^*((γ_u^G)_!) is an isomorphism for each F∈P. Fix then an F∈P and consider the following natural isomorphisms

F^*∘u_!∘G_I ≅ hocolim_u/F∘pr_F^*∘G_I ≅ hocolim_u/F∘G_u/F∘pr_F^*,

where the first isomorphism comes from the axiom (Der.4), while the second is induced by the natural isomorphism γ^G_pr_F. Similarly, we have the following two isomorphisms, the first one induced by γ^G_F and the second one by (Der.4):

F^*∘G_P∘u_! = G_𝟙∘F^*∘u_! ≅ G_𝟙∘hocolim_u/F∘pr_F^*.

Hence, to prove that F^*((γ_u^G)_!) is an isomorphism, it is enough to prove that (γ^G_pt_u/F)_!:hocolim_u/F∘G_u/F⟹G_𝟙∘hocolim_u/F is invertible. On the other hand, u/F≅F is a finite discrete category, so that hocolim_u/F is a finite coproduct, and G commutes with finite coproducts by hypothesis.

Homotopical epimorphism. Following <cit.>, we call a functor v:J→I between small categories a homotopical epimorphism if, for any derivator 𝔻, the induced functor v^*:𝔻(I)→𝔻(J) is fully faithful. As this holds for any derivator, in particular it holds for all the shifts 𝔻^K (see Example <ref>) of a given derivator 𝔻, showing that v:J→I is a homotopical epimorphism if, and only if, for any derivator 𝔻, it induces a fully faithful morphism of derivators v^*:𝔻^I→𝔻^J.

If v:J→I is a homotopical epimorphism between small categories, then it is also a lax epimorphism in the sense of <cit.>, that is, for any category 𝒞, the functor (-)∘v:𝒞^I→𝒞^J is fully faithful. Indeed, by the equivalence "(1)⇔(2)" of <cit.>, it is enough to check that (-)∘v:𝒞^I→𝒞^J is fully faithful, and this last condition follows by the definition of homotopical epimorphism applied to the derivator 𝕐_𝒞 (see Example <ref>).

The following is a useful criterion to identify homotopical epimorphisms: Let u:A→I be an essentially surjective functor in Cat, let 𝔻 be a derivator, and let 𝔼⊆𝔻^A be a sub-prederivator of 𝔻^A (in the sense that 𝔼(J) is a subcategory of 𝔻^A(J) for each J∈Cat) that satisfies the following two conditions:

*Im(u^*:𝔻^I→𝔻^A)⊆𝔼;

*for any J∈Cat and 𝒳∈𝔼(J), the component of the counit ϵ_𝒳:u^*u_*𝒳→𝒳 of the adjunction u^*⊣u_* is an isomorphism.

Then, u^*:𝔻^I→𝔻^A is fully faithful and Im(u^*)=𝔼. In particular, 𝔼 is a derivator.

§.§ Pointed and additive derivators

We refer to <cit.> for a detailed discussion, as well as for the precise definitions of pointed derivators (i.e., derivators 𝔻 for which 𝔻(𝟙) has a zero object) and of additive derivators (i.e., derivators 𝔻 for which 𝔻(𝟙) is additive). In fact, for a pointed (resp., additive) derivator, the condition imposed on 𝔻(𝟙) implies that the categories 𝔻(I) (for each I in Cat) are automatically pointed (resp., additive). If 𝔻 and 𝔻' are pointed (resp., additive) derivators, a morphism F:𝔻→𝔻' is called pointed (resp., additive) if F_I is a pointed (resp., additive) functor, for each I in Cat. Note that, for each u:J→I in Cat, the three morphisms u^*:𝔻^I→𝔻^J and u_!,u_*:𝔻^J→𝔻^I are always pointed (resp., additive).

Let 𝔻:Cat^op→CAT be an additive derivator and I∈Cat.
A sequence

𝒳 →^ϕ 𝒴 →^ψ 𝒵

in 𝔻(I) is said to be pointwise split-exact if, for each i∈I, the sequence 𝒳_i→𝒴_i→𝒵_i, obtained by applying i^* to (<ref>), is split-exact in the additive category 𝔻(𝟙).

Let 𝔻:Cat^op→CAT be an additive derivator. For each I∈Cat, the functor dia_I:𝔻(I)→𝔻(𝟙)^I takes pointwise split-exact sequences to kernel-cokernel sequences.

Let 𝒳→^ϕ𝒴→^ψ𝒵 be a pointwise split-exact sequence in 𝔻(I). It follows that 𝒳_i=(dia_I𝒳)_i→^ϕ_i 𝒴_i=(dia_I𝒴)_i→^ψ_i 𝒵_i=(dia_I𝒵)_i is a split-exact sequence, whence a kernel-cokernel one, in 𝔻(𝟙), for all i∈I. Now use the fact that a morphism in 𝔻(𝟙)^I has a co/kernel if, and only if, so do all of its components and that, in this case, the co/kernel is computed componentwise.

§.§ Strong and stable derivators

We refer to <cit.> for a detailed discussion, as well as for the precise definitions of stable derivators (they are the pointed derivators in which a coherent commutative square is cartesian if, and only if, it is cocartesian) and of strong derivators (the ones in which the partial diagram functors 𝔻(𝟚×I)→𝔻(I)^𝟚 are full and essentially surjective for each I∈Cat). The key fact, which can be found in <cit.>, is that, given a strong and stable derivator 𝔻, each 𝔻(I) is in fact a triangulated category. Furthermore, if 𝔻 and 𝔻' are stable derivators, left and right exactness of a morphism F:𝔻→𝔻' are equivalent, and left (=right) exact morphisms are simply called exact in this context. If F:𝔻→𝔻' is an exact morphism of strong and stable derivators, all the functors F_I:𝔻(I)→𝔻'(I) are naturally triangulated by <cit.>. In particular, all the morphisms of the form u^* and the Kan extensions u_!, u_*, for some u:J→I in Cat, are naturally triangulated functors.

Let 𝔻:Cat^op→CAT be a strong and stable derivator, I∈Cat, and let

𝒳 →^ϕ 𝒴 →^ψ 𝒵

be a pointwise split-exact sequence in 𝔻(I). Then, this sequence can be completed to a distinguished triangle in 𝔻(I).

Consider the morphism ϕ:𝒳→𝒴 and complete it to a triangle in 𝔻(I):

𝒳 →^ϕ 𝒴 →^ϕ' 𝒞 →^ϕ'' Σ𝒳.

Since ψϕ=0 by hypothesis, there is a morphism f:𝒞→𝒵 such that fϕ'=ψ; let us verify that f is an isomorphism. By (Der.2), it is enough to verify that f_i:=i^*(f) is an isomorphism for each i in I. But this is clearly true, since ϕ_i is a split monomorphism (by hypothesis) and both ϕ'_i and ψ_i are cokernels of ϕ_i (the first by <cit.> and the second by hypothesis). Hence, by the universality of cokernels, f_i is the unique morphism such that f_iϕ'_i=ψ_i and it is an isomorphism.

In the next example, we mention some classes of strong and stable derivators that will appear frequently in the rest of the paper: Let (𝒞,𝒲,ℬ,ℱ) be a model category. For any small category I, let 𝒲_I be the class of morphisms in 𝒞^I which belong pointwise to 𝒲. By <cit.>, the universal localization 𝒞^I[𝒲_I^-1] can always be constructed and, furthermore, the assignment I↦𝒞^I[𝒲_I^-1] underlies a derivator 𝔻_(𝒞,𝒲):Cat^op→CAT. Furthermore, 𝔻_(𝒞,𝒲) is always strong, and it is pointed (resp., stable) if 𝒞 has the same property in the sense of model categories. For such derivators, homotopy co/limits and, more generally, homotopy Kan extensions are just the total derived functors of the usual co/limit and Kan extension functors. Given a Grothendieck Abelian category 𝒜, we refer to the strong and stable derivator arising as above from the injective model structure on Ch(𝒜) as the canonical derivator enhancing the derived category 𝒟(𝒜).

§ CATEGORIES OF FINITE AND OF COUNTABLE LENGTH

In this section we introduce the concept of category of finite length and that of category of countable length.
Then, given such a category I, we study canonical presentations for objects in 𝔻(I) (first in the finite and then in the countable length case), where 𝔻 is allowed to be an arbitrary additive derivator. Finally, we verify that every small category is the homotopically surjective image of a suitable category of countable length. In this way the results about standard presentations for coherent diagrams over categories of countable length can be applied to arbitrary shapes.

§.§ Categories of finite length

A small category I is said to be of finite length if there are n∈ℕ_>0 and a functor d:I→𝐧^op such that, for any pair of objects i,j∈I, if there is a non-identity morphism in I(i,j), then d(i)>d(j). If I is a category of finite length, the smallest n∈ℕ_>0 for which there exists a functor d:I→𝐧^op as above is called the length of I, in symbols ℓ(I)=n.

Note that, for a category of finite length I, we have that I(i,i)={𝕀_i} for any i∈I. Furthermore, given i≠j∈I, at most one of I(i,j) and I(j,i) is not empty. Moreover, the length of I is exactly the length of a maximal path in I. Given a category I of finite length, we say that an object i in I is minimal if there is no non-identity morphism starting at i.

Throughout this subsection, we fix the following notation: We let 𝔻:Cat^op→CAT be an additive derivator, I a category of finite length and u:J→I the inclusion of the (full) subcategory J of I of all the non-minimal objects. We also denote by I∖J⊆I the subcategory of minimal objects. We denote by u_!⊣u^*, with unit η_u and counit ϵ_u, the induced adjunction. Furthermore, for each i∈I∖J we have the adjunctions i_!⊣i^* (with unit η_i and counit ϵ_i) and hocolim_u/i=(pt_u/i)_!⊣pt_u/i^* (with unit η_u/i and counit ϵ_u/i). Finally, we denote by γ_!:hocolim_u/i∘pr_i^*⇒i^*∘u_! the canonical natural transformation, which is invertible by (Der.4).

The following lemma is a trivial fact about categories of finite length that will be extremely important when trying to prove things by induction: For each i∈I∖J, we have that ℓ(I)>ℓ(J)≥ℓ(u/i).

The fact that ℓ(I)>ℓ(J) follows since any maximal path in I ends in a minimal object which, by definition, does not belong to J. Furthermore, a maximal path in u/i is of the form

(u(j_1)→^ϕ_1 i)→^ψ_1 (u(j_2)→^ϕ_2 i)→^ψ_2 … →^ψ_k-1 (u(j_k)→^ϕ_k i),

where ψ_s:j_s→j_s+1 and ϕ_s+1ψ_s=ϕ_s, for s=1,…,k-1. Then j_1→^ψ_1 j_2→^ψ_2 … →^ψ_k-1 j_k is a path in J, so that ℓ(u/i)≤ℓ(J).

We can now introduce the standard presentation for coherent diagrams of finite length: Given 𝒳 in 𝔻(I), there is a pointwise split-exact sequence

∐_I∖J i_!i^*u_!u^*𝒳 ⟶ (∐_I∖J i_!𝒳_i)⊕u_!u^*𝒳 ⟶ 𝒳,

where the first map has, on the summand indexed by i∈I∖J, the component i_!i^*(ϵ_u):i_!i^*u_!u^*𝒳→i_!𝒳_i into the corresponding summand of the middle term and the component -ϵ_i:i_!i^*u_!u^*𝒳→u_!u^*𝒳 into u_!u^*𝒳, while the second map has components ϵ_i:i_!𝒳_i→𝒳 and ϵ_u:u_!u^*𝒳→𝒳.

We have to verify that, for each k∈I, when we apply the evaluation functor k^* to the sequence in the statement, we get a split-exact sequence in 𝔻(𝟙). We distinguish two cases, based on whether k is minimal or not. Indeed, if k∈J, then k^*i_!=0 for all i∈I∖J, since then {i}⊆I is a cosieve (cf. Remark <ref>), and k^*ϵ_u is an isomorphism by Lemma <ref>, so we get the split-exact sequence

0 ⟶ k^*u_!u^*𝒳 →^k^*ϵ_u k^*𝒳.

Suppose now that k∈I∖J; using the natural isomorphism k^*ϵ_k:k^*k_!k^*⟹̃k^* (see Lemma <ref>, recalling that the functor k:𝟙→I is fully faithful), we get the split-exact sequence

k^*k_!k^*u_!u^*𝒳 ⟶ k^*k_!k^*𝒳⊕k^*u_!u^*𝒳 ⟶ k^*𝒳,

with first map of components (k^*k_!k^*ϵ_u, -k^*ϵ_k) and second map (k^*ϵ_k, k^*ϵ_u), as this is an occurrence of the second part of Example <ref>(1). The following two technical propositions will be essential in the following section.
Note that, even if we state them in the setting of this subsection, they hold for any small category I, any full subcategory J and any i∈I∖J.

Given 𝒳,𝒴∈𝔻(I) and i∈I∖J, the following square commutes, where the left column and the composite right column are natural isomorphisms:

*top row: (-)∘i_!i^*(ϵ_u) : 𝔻(I)(i_!i^*𝒳,𝒴) ⟶ 𝔻(I)(i_!i^*u_!u^*𝒳,𝒴);

*left column: the adjunction isomorphism i^*(-)∘η_i : 𝔻(I)(i_!i^*𝒳,𝒴) ⟶̃ 𝔻(𝟙)(𝒳_i,𝒴_i);

*right column: the composite of the natural isomorphisms i^*(-)∘η_i : 𝔻(I)(i_!i^*u_!u^*𝒳,𝒴) ⟶̃ 𝔻(𝟙)(i^*u_!u^*𝒳,i^*𝒴), (-)∘γ_! : 𝔻(𝟙)(i^*u_!u^*𝒳,i^*𝒴) ⟶̃ 𝔻(𝟙)(hocolim_u/i pr_i^*u^*𝒳,i^*𝒴) and pt_u/i^*(-)∘η_u/i : 𝔻(𝟙)(hocolim_u/i pr_i^*u^*𝒳,i^*𝒴) ⟶̃ 𝔻(u/i)(pr_i^*u^*𝒳,pt_u/i^*𝒴_i);

*bottom row: pt_u/i^*(-)∘γ^* : 𝔻(𝟙)(𝒳_i,𝒴_i) ⟶ 𝔻(u/i)(pr_i^*u^*𝒳,pt_u/i^*𝒴_i).

Furthermore, for each f:𝒳_i→𝒴_i, the corresponding map pt_u/i^*(f)∘γ^* : pr_i^*u^*(𝒳)→pt_u/i^*𝒴_i acts as follows: for each object (j,a:u(j)→i) in u/i,

(j,a)^*(pt_u/i^*(f)∘γ^*) = f∘𝒳_a : 𝒳_j→𝒴_i.

The left column and the first morphism in the right column are induced by the adjunction i_!⊣i^*, so they are clearly isomorphisms. The second and the third morphisms in the right column are induced, respectively, by the natural transformation γ_!, which is invertible by (Der.4), and by the adjunction hocolim_u/i⊣pt_u/i^*, so they are invertible too. Finally, to see that the square commutes, consider the following equalities:

pt_u/i^*(i^*(-)∘η_i)∘γ^* = pt_u/i^*(i^*(-)∘η_i∘i^*(ϵ_u)∘γ_!)∘η_u/i = pt_u/i^*(i^*(-)∘i^*i_!i^*(ϵ_u)∘η_i∘γ_!)∘η_u/i,

where the first equality holds since γ^*=(γ_!)_* (see Remark <ref>) and the second one by the naturality of η_i:𝕀_𝔻(𝟙)⇒i^*i_!.

Given 𝒳,𝒴∈𝔻(I) and i∈I∖J, the following diagram commutes, where the left column and the composite right column are natural isomorphisms:

*top row: (-)∘ϵ_i : 𝔻(I)(u_!u^*(𝒳),𝒴) ⟶ 𝔻(I)(i_!i^*u_!u^*𝒳,𝒴), together with the diagonal i^*(-) : 𝔻(I)(u_!u^*(𝒳),𝒴) ⟶ 𝔻(𝟙)(i^*u_!u^*𝒳,i^*𝒴);

*left column: the adjunction isomorphism u^*(-)∘η_u : 𝔻(I)(u_!u^*(𝒳),𝒴) ⟶̃ 𝔻(J)(u^*(𝒳),u^*(𝒴));

*right column: the same three natural isomorphisms as in Proposition <ref>;

*bottom row: γ^*∘pr_i^*(-) : 𝔻(J)(u^*(𝒳),u^*(𝒴)) ⟶ 𝔻(u/i)(pr_i^*u^*(𝒳),pt_u/i^*𝒴_i).

Furthermore, for each f:u^*𝒳→u^*𝒴, the corresponding map γ^*∘pr_i^*(f) : pr_i^*u^*(𝒳)→pt_u/i^*𝒴_i acts as follows: for each object (j,a:u(j)→i) in u/i,

(j,a)^*(γ^*∘pr_i^*(f)) = 𝒴_a∘f_j : 𝒳_j→𝒴_i.

The three morphisms in the right column are the same as the ones in Proposition <ref>, so they are isomorphisms. Furthermore, the upper triangle commutes by the usual triangle unit-counit identities relative to the adjunction i_!⊣i^*. On the other hand, the morphism in the left column is the natural isomorphism induced by the adjunction u_!⊣u^*, whose inverse is the map ϵ_u∘u_!(-) : 𝔻(J)(u^*(𝒳),u^*(𝒴))→𝔻(I)(u_!u^*(𝒳),𝒴). Hence, the diagram in the statement commutes if, and only if, pt_u/i^*(i^*(ϵ_u∘u_!(-))∘γ_!)∘η_u/i = γ^*∘pr_i^*(-). Finally, this follows by the series of equalities:

pt_u/i^*(i^*(ϵ_u∘u_!(-))∘γ_!)∘η_u/i
= pt_u/i^*(i^*(ϵ_u)∘i^*u_!(-)∘γ_!)∘η_u/i
= pt_u/i^*(i^*(ϵ_u)∘γ_!∘(pt_u/i)_!pr_i^*(-))∘η_u/i, by naturality of γ_!;
= pt_u/i^*(i^*(ϵ_u))∘pt_u/i^*(γ_!)∘pt_u/i^*(pt_u/i)_!pr_i^*(-)∘η_u/i
= pt_u/i^*(i^*(ϵ_u))∘pt_u/i^*(γ_!)∘η_u/i∘pr_i^*(-), by naturality of η_u/i;
= γ^*∘pr_i^*(-), since (γ_!)_*=γ^*.

§.§ Categories of countable length

A small category I is said to be of countable length if there is a functor d:I→ℕ^op such that, for any pair of objects i,j∈I, if there is a non-identity morphism in I(i,j), then d(i)>d(j) in ℕ. As for the finite length case, if I has countable length, then I(i,i)={𝕀_i} for any i∈I. Furthermore, given i≠j∈I, at most one of I(i,j) and I(j,i) is not empty.
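Before fixing notation, we record a small illustration; it is ours, and only meant for orientation. Consider I=ℕ^op, the poset of natural numbers viewed as a category with a unique morphism m→n precisely when m≥n, and take d:ℕ^op→ℕ^op to be the identity assignment: a non-identity morphism m→n means m>n, so d satisfies the condition of Definition <ref> and I has countable length. On the other hand, I is not of finite length, since it contains paths of arbitrary finite length, and these cannot be accommodated by a functor to 𝐧^op that is strictly decreasing along non-identity morphisms. More generally, every category of finite length has countable length, by composing d:I→𝐧^op with the inclusion 𝐧^op↪ℕ^op. In the notation fixed below, for I=ℕ^op the subcategory I_≤n is the full subposet on {0,…,n-1}, which is a category of length n.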
Throughout this subsection, we fix the following notation: We let 𝔻:Cat^op→CAT be an additive derivator and I a category of countable length, with a fixed functor d:I→ℕ^op as in Definition <ref>. For each n∈ℕ, we let I_≤n⊆I be the (full) subcategory on the objects {i∈I : d(i)≤n-1}. Furthermore, we let ι_m,n:I_≤n→I_≤m and ι_n:I_≤n→I be the canonical cosieve inclusions, for all m≥n in ℕ. Finally, we consider the adjunctions (ι_n)_!⊣ι_n^* (with unit η_n and counit ϵ_n) and (ι_m,n)_!⊣ι_m,n^* (with unit η_m,n and counit ϵ_m,n), and we define

∂^n := (ϵ_n⊛(ι_n+1)_!ι_n+1^*)∘((ι_n)_!ι_n+1,n^*⊛η_n+1⊛ι_n+1^*) : (ι_n)_!ι_n^*⇒(ι_n+1)_!ι_n+1^*.

Let us remark that, in the above notation, the restriction d_n:I_≤n→𝐧^op of the functor d:I→ℕ^op shows that each I_≤n is a category of finite length. Furthermore, for each m≥n in ℕ, we have ι_m∘ι_m,n=ι_n. In particular, the equality ι_n+1,n^*∘ι_n+1^*=ι_n^* and Lemma <ref> provide us with a natural isomorphism ψ_n:(ι_n)_!⟹̃(ι_n+1)_!(ι_n+1,n)_!, given by

ψ_n = (ϵ_n⊛(ι_n+1)_!(ι_n+1,n)_!)∘((ι_n)_!ι_n+1,n^*⊛η_n+1⊛(ι_n+1,n)_!)∘((ι_n)_!⊛η_n+1,n).

Let now 𝒳∈𝔻(I) and n∈ℕ. Unravelling the definition, the component ∂^n_𝒳 is the composite

(ι_n)_!ι_n^*𝒳 = (ι_n)_!ι_n+1,n^*ι_n+1^*𝒳 ⟶ (ι_n)_!ι_n^*(ι_n+1)_!ι_n+1^*𝒳 ⟶ (ι_n+1)_!ι_n+1^*𝒳,

where the first map is obtained by applying (ι_n)_!ι_n+1,n^* to the component of η_n+1 at ι_n+1^*𝒳, and the second map is the component of ϵ_n at (ι_n+1)_!ι_n+1^*𝒳.

It is also instructive to recall Lemma <ref> and Remark <ref> in the context of the above definitions. For example, given 𝒳∈𝔻(I), the component ((ι_n)_!ι_n^*𝒳)_k is isomorphic to 𝒳_k if k∈I_≤n, and ((ι_n)_!ι_n^*𝒳)_k=0 otherwise. The components of the counit (ι_n)_!ι_n^*𝒳→𝒳 are, respectively, isomorphisms or maps from the zero object of 𝔻(𝟙). The next lemma shows that the map ∂^n is of a similar nature (in fact, a direct but tedious computation also reveals that ∂^n coincides, up to the identification given by ψ_n from (<ref>), with (ι_n+1)_!⊛ϵ_n+1,n⊛ι_n+1^* : (ι_n+1)_!(ι_n+1,n)_!ι_n^*⟹(ι_n+1)_!ι_n+1^*).

In the above notation, ϵ_n+1∘∂^n=ϵ_n, for each n∈ℕ.

Let n∈ℕ and note that, by the naturality of ϵ_n, we have:

ϵ_n+1∘(ϵ_n⊛(ι_n+1)_!ι_n+1^*) = ϵ_n∘((ι_n)_!ι_n^*⊛ϵ_n+1).

In other words, for each 𝒳∈𝔻(I), starting from (ι_n)_!ι_n^*(ι_n+1)_!ι_n+1^*𝒳, applying first (ι_n)_!ι_n^*(ϵ_n+1) and then ϵ_n gives the same morphism to 𝒳 as applying first ϵ_n and then ϵ_n+1. Therefore, we have that

ϵ_n+1∘∂^n = ϵ_n+1∘(ϵ_n⊛(ι_n+1)_!ι_n+1^*)∘((ι_n)_!ι_n+1,n^*⊛η_n+1⊛ι_n+1^*) = ϵ_n∘((ι_n)_!ι_n^*⊛ϵ_n+1)∘((ι_n)_!ι_n+1,n^*⊛η_n+1⊛ι_n+1^*) = ϵ_n∘((ι_n)_!ι_n+1,n^*⊛((ι_n+1^*⊛ϵ_n+1)∘(η_n+1⊛ι_n+1^*))) = ϵ_n,

where we have applied the triangular equality (ι_n+1^*⊛ϵ_n+1)∘(η_n+1⊛ι_n+1^*)=𝕀_ι_n+1^* relative to the adjunction (ι_n+1)_!⊣ι_n+1^*.

We are now ready to introduce the standard presentation of a coherent diagram of countable length: For any 𝒳∈𝔻(I), there is a pointwise split-exact sequence

∐_ℕ(ι_n)_!ι_n^*𝒳 →^α ∐_ℕ(ι_n)_!ι_n^*𝒳 →^β 𝒳,

where β has components ϵ_n:(ι_n)_!ι_n^*𝒳→𝒳 and where α, restricted to the n-th summand, has component 𝕀 into the n-th summand and component -∂^n into the (n+1)-st summand.

It is easily seen that, by Lemma <ref>, β∘α=0. Let now k∈I_≤n∖I_≤n-1⊆I (for some integer n≥1) and let us show that, after applying k^* to the sequence in the statement, we get a split-exact sequence in 𝔻(𝟙). By <cit.> we get that k^*(ι_m)_!ι_m^*=0 whenever m<n, while k^*(ϵ_m) is invertible for all m≥n, by Lemma <ref>, and k^*(ϵ_m+1)∘k^*(∂^m)=k^*(ϵ_m), by Lemma <ref>.
In particular, we obtain the following commutative diagram in (), where the three columns are isomorphisms@C=100pt∐_m≥ nk^*(ι_m)_!ι_m^*𝒳[r]^ k^*(α)[d]^-[origin=c]90∼_(k^*(ϵ_m))_m ∐_m≥ nk^*(ι_m)_!ι_m^*𝒳[r]^ k^*(β)[d]_-[origin=c]90∼^(k^*(ϵ_m))_m 𝒳_k@=[d]𝒳_k^()[r]|-[cccc 1 -1 10-1 10⋱ ⋱]𝒳_k^()[r]_[cccc 1 1 1 ⋯]𝒳_kshowing that our sequence is (isomorphic to) the one in Example <ref>. The following technical proposition will be essential in the following section: Given 𝒳,𝒴∈(I) and n∈, the following diagram commutes @C=100pt@R=30pt(I)((ι_n)_!ι_n^*(𝒳),𝒴)@<-[r]^-(-)∘∂^n[d]^-[origin=c]90∼_-ι_n^*(-)∘η_n (I)((ι_n+1)_!ι_n+1^*(𝒳),𝒴)[d]_-[origin=c]90∼^-ι_n+1^*(-)∘η_n+1 (I_≤ n)(ι_n^*(𝒳),ι_n^*(𝒴))@<-[r]^ι_n+1,n^*(-) (I_≤ n+1)(ι_n+1^*(𝒳),ι_n+1^*(𝒴))where the columns are natural isomorphisms.The two columns are isomorphisms because they are the maps given by the adjunctions (ι_n)_!⊣ι_n^* and (ι_n+1)_!⊣ι_n+1^*, respectively. In particular, the inverse of the map in the leftmost column is ϵ_n∘(ι_n)_!(-) and, therefore, the commutativity of the original square is equivalent to the commutativity of the following diagram:@C=100pt@R=30pt(I)((ι_n)_!ι_n^*(𝒳),𝒴)@<-[r]^-(-)∘∂^n@<-[d]^-[origin=c]90∼_-ϵ_n∘(ι_n)_!(-) (I)((ι_n+1)_!ι_n+1^*𝒳,𝒴)[d]_-[origin=c]90∼^-ι_n+1^*(-)∘η_n+1 (I_≤ n)(ι_n^*(𝒳),ι_n^*(𝒴))@<-[r]^ι_n+1,n^*(-) (I_≤ n+1)(ι_n+1^*(𝒳),ι_n+1^*(𝒴)).That is, we have to prove that ϵ_n∘(ι_n)_!(ι_n+1,n^*(ι_n+1^*(-)∘η_n+1))=(-)∘∂^n. But, in fact,ϵ_n∘(ι_n)_!(ι_n+1,n^*(ι_n+1^*(-)∘η_n+1)) =ϵ_n∘(ι_n)_!ι_n^*(-)∘(ι_n)_!ι_n+1,n^*(η_n+1)=(-)∘ϵ_n∘(ι_n)_!ι_n+1,n^*(η_n+1)=(-)∘∂^n,where the first equality holds since ι_n+1∘ι_n+1,n=ι_n, the second one is true by the naturality of ϵ_n (analogously to the commutativity of the diagram (<ref>) but with an arbitrary map instead of ϵ_n+1), and the last one follows just by definition of ∂^n.§.§ Homotopical epimorphic images of categories of countable length LetIbe a small category. In this section we associate withIa new category of countable lengthΔ(I)and a homotopical epimorphismuΔ(I) → I. The idea of using the categoryΔ(I)in this way was suggested to us by Fritz Hörmann. A similar idea appears in his paper <cit.>, where he proves much more in this context—he uses the categoriesΔ(I)to extend the domain of definition of fibred (multi)derivators from directed categories of countable length to arbitrary shapes. Let us start with the following definition: Given a small category I, the category Δ(I) is defined as follows:*the objects of Δ(I) are the functors 𝐧→ I, for n∈_>0;*given two objects a𝐧→ I and b𝐦→ I, we defineΔ(I)(a,b):={φ𝐦→𝐧 strictly increasing and s.t. a∘φ=b}.There is a canonical functoruΔ(I)⟶ Isuch that (a𝐧→ I)↦ a(0).In what follows, our aim is to prove that the above functoruΔ(I)→ Iis a homotopical epimorphism. Giveni∈ I, define thefiber ofuatito be the categoryu^-1(i), given by the following pullback diagram in:u^-1(i)@.>[r]^ι_i@[dr]|P.B.@.>[d]__u^-1(i) Δ(I)[d]^u [r]_i IWe can describeu^-1(i)as the subcategory of the slice categoryi/uof those objects(a,φ i→ u(a))such thatu(a)=iandφ=𝕀_i. The following lemma is taken from the discussion in <cit.>. In the above notation, the following statements hold true:* Δ(I) has countable length;*the inclusion u^-1(i)→ i/u has a right adjoint;* u^-1(i) has a terminal object.(1). We define a functor dΔ(I)→^ as follows: d(𝐧→ I):=n-1 and, given a morphism (𝐧→ I)φ⟶ (𝐦→ I), we have that n≥ m, so we can map φ to the unique map (n-1)→ (m-1) in ^. It is now easy to verify that this functor satisfies the conditions of Definition <ref>. (2). 
Let x=(a_0→ a_1→…→ a_n, φ i→ a_0)∈ i/u and suppose that φ𝕀_x (i.e., that x∉u^-1(i)). Let c(x) be the following object of u^-1(i):c(x)=(iφ⟶a_0→ a_1→…→ a_n, 𝕀_i i→ i).Of course the morphism 𝐧+1→𝐧+2 such that k↦ k+1 gives a well-defined morphism c(x)→ x in i/u. It is easy to verify that this morphism is a coreflection of x onto u^-1(i). (3). One checks directly that (i⟶I, 𝕀_i i→ i) is a terminal object in u^-1(i). Before introducing the following definition, let us recall that an object𝒳∈(J)is said to be constant if, and only if,𝒳≅_J^*𝒳_jfor some (i.e., all)j∈ J. Note also that, ifJhas a terminal objectt, then there is a unique natural transformationα𝕀_J⇒ t∘_Jand𝒳is constant if, and only if,α^*_𝒳𝒳→_J^*𝒳_tis an isomorphism. By (Der.2) this is equivalent to ask thatj^*(α^*_𝒳)is an isomorphism for eachj∈ J. Furthermore,j^*(α^*_𝒳)=𝒳_a𝒳_j→𝒳_t, wherea j→ tis the unique morphism inJ(j,t). As a consequence, wheneverJhas a terminal object, a coherent diagram𝒳∈(J)is constant if, and only if,𝒳_a𝒳_j→𝒳_kis an isomorphism for each morphisma j→ kinJ. This observation is important because it reduces the property of being constant for the coherent diagram𝒳, to a condition that we can check on the incoherent diagram_J(𝒳)(that is, that all its transitions maps are invertible). Letbe a derivator and 𝒳∈(Δ(I)). We say that 𝒳 isconstant on fibers if, for any i∈ I, ι_i^*𝒳 is a constant diagram in (u^-1(i)) (where ι_i u^-1(i)→Δ(I) is the functor introduced in (<ref>)). We are now ready to state and prove the main result of this subsection: In the above notation, uΔ(I)→ I is a homotopical epimorphism. Furthermore, given a derivator , the essential image of u^*^I→^Δ(I) is the sub-derivator 𝔼⊆^Δ(I), where 𝔼(J) is spanned by those 𝒳∈^Δ(I)(J) that are constant on fibers, for each J∈. Fix a derivatorand let us verify that the sub-prederivator 𝔼⊆^Δ (I) defined in the statement satisfies the conditions of Lemma <ref>. Indeed, the first condition of the lemma can be verified as follows: let J∈ and 𝒳∈^I(J), let us prove that u^*𝒳∈𝔼(J)(⊆^Δ (I)(J)). Indeed, choose i∈ I and a∈ u^-1(i).Applying our derivator to the pullback diagram in (<ref>), we get the following commutative square@C=40pt(Δ (I))[d]_ι^*_i (I)[l]_u^*[d]^u(a)^* (u^-1(i)) ()[l]^(.35)^*_u^-1(i)showing that ι_i^*(u^*𝒳)= ^*_u^-1(i)u(a)^*𝒳= ^*_u^-1(i)(u^*𝒳)_a, as desired. As for the second condition, choose J∈ and 𝒴∈𝔼(J), and let us study the component of the counit ϵ_𝒴 u^*u_*𝒴⟶𝒴. By (Der.2), isomorphisms in (Δ (I)) can be detected pointwise, so it is enough to show that, for each a∈Δ (I),the following mapis an isomorphisma^*ϵ_𝒴 a^*u^*u_*𝒴⟶𝒴_a. Let a∈Δ (I) and i:=u(a); by (Der.4), there is an isomorphism a^*u^*u_*𝒴=i^*u_*𝒴≅⟶_i/u_i^*𝒴, where _i i/u→Δ (I) is the obvious projection. Hence, it suffices to prove that the canonical map _i/u_i^*𝒴→𝒴_a is an isomorphism (see also <cit.>). Now let α_i u^-1(i)→ i/u be the obvious inclusion; by Lemma <ref>, α_i is a left adjoint and so, by the dual of <cit.>, the following canonical map is an isomorphism_i/u_i^*𝒴≅⟶_u^-1(i)α_i^*_i^*𝒴=_u^-1(i)ι_i^*𝒴≅_u^-1(i)_u^-1(i)^*𝒴_awhere the last isomorphism holds since we have started with 𝒴∈𝔼(J). To conclude recall that, by Lemma <ref>, there is a terminal object t_i→ u^-1(i), that is, t_i is right adjoint to _u^-1(i). Hence, _u^-1(i)^*≅ (t_i)_*, showing that _u^-1(i)_u^-1(i)^*𝒴_a≅_u^-1(i)(t_i)_*𝒴_a≅ (_u^-1(i)∘ t_i)_*𝒴_a≅𝒴_a. § LIFTING INCOHERENT DIAGRAMS ALONG DIAGRAM FUNCTORS Let^→be a strong and stable derivator. 
In this section we study the following problem: Given an (incoherent) diagram X I→(), under which conditions is it possible to lift it to a coherent diagram 𝒳 of shape I? That is, can we find an object 𝒳∈(I) such that _I(𝒳)≅ X?The property ofbeing strong implies that we can always lift diagrams of shape. The main result of this section is the following theorem that gives sufficient conditions for a diagram of arbitrary shape to lift to a coherent diagram. These sufficient conditions consist in assuming that there are no “negative extensions" in()between the components of our diagramX I→(). Similar conditions are given to identify pairs of coherent diagrams𝒳, 𝒴∈(I)for which_I(-)(I)(𝒳, 𝒴)→()^I(_I𝒳, _I𝒴)is bijective.Given a strong and stable derivator ^→, the following statements hold true for any small category I:*given 𝒳, 𝒴∈(I), the canonical map (I)(𝒳,𝒴)→()^I(_I𝒳,_I𝒴) is an isomorphism provided ()(Σ^n 𝒳_i,𝒴_j)=0 for all i,j∈ I and n>0;*given X∈()^I, there is an object 𝒳∈(I) such that _I(𝒳)≅ X, provided ()(Σ^n X_i,X_j)=0 for all i,j∈ I and n>0. The proof of the above theorem is quite involved and it will occupy the rest of this section, which is divided in three subsections to reflect the main steps of the argument. §.§ Lifting diagrams of finite length In this subsection we are going to verify the statement of Theorem <ref> for diagrams of shapeI, withIa category of finite length. These results are very close to some of the main results in <cit.>. We offer here a different and self-contained argument.Let ^→ be a strong and stable derivator, I a small category of finite length, and𝒳, 𝒴∈(I). The canonical map _I(-)(I)(𝒳,𝒴)⟶()^I(_I𝒳,_I𝒴), is an isomorphism provided ()(Σ^n 𝒳_i,𝒴_j)=0 for all i,j∈ I and n>0. We proceed by induction on ℓ(I). If ℓ(I)=1, then I is a disjoint union of copies of and, by (Der.1), there is nothing to prove. For ℓ(I)>1, consider the setting of Notation <ref> and let 𝒳, 𝒴∈(I). By Lemmas <ref> and <ref>, there is a triangle in (I):∐_I∖ Ji_!i^*u_!u^*𝒳[r]^-α ∐_I∖ Ji_!i^*𝒳⊔ u_!u^*𝒳[r]^-β 𝒳[r] Σ(∐_I∖ Ji_!i^*u_!u^*𝒳).where the maps α and β are described explicitly in Lemma <ref>. We are going now to apply the functor (-,𝒴):=(I)(-,𝒴) to the above triangle and study the resulting long exact sequence in . We start noting that, by Propositions <ref> and <ref>, for each i∈ I, there is the following commutative diagram:@C=52pt@R=4pt (i_!i^*𝒳⊔ u_!u^*𝒳,𝒴)@=[d](i_!i^*𝒳,𝒴)× (u_!u^*𝒳,𝒴)[rr]^-[ccc(-)∘ i_!i^*ϵ_u -(-)∘ϵ_i ] [dddddd]^-[origin=c]90∼_[cc i^*(-)∘η_i 0 0 u^*(-)∘η_u ](i_!i^*u_!u^*𝒳,𝒴)[dddddd]^(*)_-[origin=c]90∼()(i^*𝒳,i^*𝒴)×(J)(u^*𝒳,u^*𝒴)[rr]_-[ccc^*_u/i(-)∘γ^* -γ^*∘^*_i(-) ](u/i)(_i^*u^*(𝒳),_u/i^*𝒴_i)where both columns are isomorphisms and the map (*) is the composition of the three vertical maps on the right-hand-side of the diagrams in both Propositions <ref> and <ref>. We also have the following series of isomorphisms: (Σ(∐_I∖ Ji_!i^*u_!u^*𝒳),𝒴) ≅∏_I∖ J()(Σ i^*u_!u^*𝒳,𝒴_i)≅∏_I∖ J()(Σ_u/i_i^*u^*𝒳,𝒴_i)≅∏_I∖ J(u/i)(Σ_i^*u^*𝒳,_u/i^*𝒴_i),showing that (Σ(∐_I∖ Ji_!i^*u_!u^*𝒳),𝒴)=0 by inductive hypothesis. 
Therefore, after all these observations, we have obtained the following commutative diagram in@C=10pt@R=14pt 0[d](𝒳,𝒴)[d] ∏_I∖ J() (i^*𝒳,i^*𝒴)×(J)(u^*𝒳,u^*𝒴)[dddd]| -[ccc|c⋱0 ⋮_u/i^*(-)∘γ^* -γ^*∘_i^*(-) 0⋱ ⋮][rr]^-_-∼∏_I∖ J ()(i^*𝒳,i^*𝒴)×()^J(_J(u^*𝒳),_J(u^*𝒴))[dddd]^Φ∏_I∖ J(u/i)(_i^*u^*(𝒳),_u/i^*𝒴_i)[rr]^-_-∼∏_I∖ J()^u/i(_u/i(_i^*u^*(𝒳)),_u/i(_u/i^*𝒴_i))where the left column is exact, showing that (𝒳,𝒴) is the kernel of the following map in the diagram, while the rows consist in an application of the corresponding diagram functors and, therefore, they are isomorphisms by inductive hypothesis (as we are working with categories of shorter length than ℓ(I)). To conclude, it is enough to verify that (Φ)=()^I(_I𝒳,_I𝒴). Hence, take an element f:=(f_k)_k∈ I∈∏_I∖ J()(i^*𝒳,i^*𝒴)×()^J(_J(u^*𝒳),_J(u^*𝒴))where f_ J:=(f_j)_j∈ J satisfies the required compatibilities to be a morphism of diagrams _J(u^*𝒳)→_J(u^*𝒴). By Propositions <ref> and <ref>, this f is sent toΦ(f)=((f_i∘𝒳_a-𝒴_a∘ f_j)_(j,a)∈ u/i)_i∈ I∖ J.That is, Φ((f_k)_k∈ I)=0 if, and only if, for each i∈ I∖ J and each (j,a j→ i)∈ u/i,we have that f_i∘𝒳_a-𝒴_a∘ f_j=0, that is, each of the following diagrams commutes:@R=25pt@C=50pt𝒳_j[r]^f_j[d]_𝒳_a 𝒴_j[d]^𝒴_a 𝒳_i[r]_f_i 𝒴_i.Equivalently, (f_k)_k∈ I represents a morphism in ()^I from _I𝒳 to _I𝒴.Let ^→ be a strong and stable derivator, I a small category of finite length, andX∈()^I. Then, there is an object 𝒳∈(I) such that _I(𝒳)≅ X, provided ()(Σ^n X(i),X(j))=0 for all i,j∈ I and n>0.We proceed by induction on ℓ(I). If ℓ(I)=1, then I is a disjoint union of copies of and, by (Der.1), there is nothing to prove. If ℓ(I)>1, consider the setting of Notation <ref>. By inductive hypothesis we havean object 𝒳_J∈(J) such that there exists an isomorphismξ:=(ξ_j)_j∈ J_J(𝒳_J)[r]^-∼X_ Jin (1)^J. For each i∈ I, let 𝒳_u/i:=_i^*𝒳_J∈(u/i) and notethat _u/i(𝒳_u/i)=_J(𝒳_J)∘_i[rrr]^-∼_-(ξ__i(a))_a∈ u/iX_ J∘_i.We also need to consider _u/i^*X(i), for which _u/i(_u/i^*X(i))=κ_u/i(X(i)) is constant, and C_i:=_u/i𝒳_u/i∈(), for which we have the usual isomorphismC_i=_u/i𝒳_u/i[rr]^-∼_-γ_!i^*u_!𝒳_J.Consider now the following morphism in ()^u/i:(X_a)_a∈ u/i X_ J∘_i[rr]κ_u/i(X(i))where, given a=(j,a j→ i)∈ u/i, the morphism X_a X(j)→ X(i) is the obvious connecting map in X∈()^I. By Proposition <ref>, the diagram functor _u/i induces the isomorphism (u/i)(𝒳_u/i, _u/i^*X(i))≅()^u/i(_u/i(𝒳_u/i), _u/i(_u/i^*X(i))), hence there is a unique isomorphismφ̅_i𝒳_u/i⟶_u/i^*X(i)such that _u/i(φ̅_i)=(X_a∘ξ__i(a))_a∈ u/i. Furthermore,the adjunction (_u/i)_!⊣_u/i^* and the axiom (Der.4) induce the following isomorphisms:@C=22pt@R=2pt()(i^*u_!𝒳_J,X(i))[rr]^-∼_-(-)∘γ_!()(C_i, X(i))[rr]^-∼_-_u/i^*(-)∘η_u/i(u/i)(𝒳_u/i,_u/i^*X(i)) φ_i@|->[rrrr]φ̅_iso we can define φ_i i^*u_!𝒳_J→ X(i) as the unique map such that _u/i^*(φ_i∘γ_!)∘η_u/i=φ̅_i. Now define 𝒳 as the cone in (I) of the following map α:@C=32pt@R=9pt∐_I∖ Ji_!i^*u_!𝒳_J[rr]^-α:=[ccc⋱0i_!φ_i0⋱ ⋯ -ϵ_i ⋯]∐_I∖ Ji_!X(i)⊔u_!𝒳_J [r]^-β 𝒳[r] Σ(∐_I∖ Ji_!i^*u_!𝒳_J).Note also that the above triangle is pointwise split-exact: it is enough to verify that k^*(α) is a split monomorphism in () for all k∈ I, but this is trivially true for k∈ J, as in this case k^*(∐_I∖ Ji_!i^*u_!𝒳_J)=0, while for k=i∈ I∖ J, so the morphism i^*(α) becomes:@C=32pt@R=9pt i^*i_!i^*u_!𝒳_J[rr]^- i^*(α)=[ccc i^*i_!φ_i-i^*ϵ_i ] i^*i_!X(i)⊔i^*u_!𝒳_J,which is a split monomorphism because -i^*ϵ_i i^*i_!i^*u_!𝒳_J→ i^*u_!𝒳_J is an isomorphism. As a consequence, one sees that _I(𝒳) is the cokernel of the map _I(α). 
Finally, define the following map of (incoherent) diagramsφ:=(φ_i)_i∈ I_I(u_!𝒳_J)⟶ Xin ()^I, where φ_i is the map defined in (<ref>),and consider the following commutative diagram@C=28pt@R=30pt∐_I∖ J_I(i_!i^*u_!𝒳_J)[d]^-[origin=c]90∼_χ[rr]^-_I(α) ∐_I∖ J_I(i_!X(i))⊔_I( u_!𝒳_J)[d]^-[origin=c]90∼_χ[rr]^-_I(β)_I(𝒳) ∐_I∖ J(i^*u_!𝒳_J)⊗ i[rr]|-[ccc⋱0φ_i⊗ i0⋱ ⋯ -e_i ⋯]∐_I∖ JX(i)⊗ i⊔_I( u_!𝒳_J) [rr]_-[[ ⋯ e_i ⋯ φ ]] Xwhere the vertical maps are induced by the natural isomorphisms χ_I∘ i_!⟹̃-⊗ i (with i varying in I∖ J) and e_i is the counit of the adjunction (-⊗ i)⊣ (-)_ i (see Example <ref>). One now checks easily that the second row is point-wise split, so X is a cokernel of the first map, which is isomorphic to _I(α). Therefore, _I(𝒳)≅ X, as desired.§.§ Lifting diagrams of countable lengthLet ^→ be a strong and stable derivator, I a small category of countable length and 𝒳,𝒴∈(I). Then, the canonical map _I(-)(I)(𝒳,𝒴)⟶()^I(_I(𝒳),_I(𝒴)) is an isomorphism provided ()(Σ^n 𝒳_i,𝒴_j)=0 for all i,j∈ I and n>0.Fix Notation <ref> and consider the following triangle given by the Lemmas <ref> and <ref>, where α and β are the explicit matrices given in the lemma:@C=20pt∐_(ι_n)_!ι_n^*(𝒳)[rr]^α∐_(ι_n)_!ι_n^*(𝒳)[rr]^-β𝒳[rr]Σ(∐_(ι_n)_!ι_n^*(𝒳)). Apply (-,𝒴):=(I)(-,𝒴) to the above triangle to get the following long exact sequence:@C=8.5pt⋯[r] 0[r] (𝒳,𝒴)[r] (∐_ (ι_n)_!ι_n^*(𝒳),𝒴)[rrr]^(-)∘α (∐_(ι_n)_!ι_n^*(𝒳),𝒴)[r] ⋯where we can write the 0 on the left by the following series of isomorphisms: (Σ(∐_(ι_n)_!ι_n^*(𝒳)),𝒴) ≅∏_((ι_n)_!Σι_n^*(𝒳),𝒴)≅∏_(I_≤ n)(Σι_n^*(𝒳),ι_n^*(𝒴))≅∏_()^I_≤ n(_I_≤ n(Σι_n^*(𝒳)),_I_≤ n(ι_n^*(𝒴)))=0.where the first isomorphism holds because Σ is an equivalence (and, as such, it commutes with coproducts) and (ι_n)_! is triangulated, the second isomorphism follows by the adjunction (ι_n)_!⊣ι_n^*, the third one is a consequence of Proposition <ref> (as I_≤ n is of finite length) and the last equality to 0 follows by our orthogonality hypotheses. A consequence of thelong exact sequence above is that (𝒳,𝒴) is the kernel of the map (-)∘α and, therefore, we will have concluded if we could prove that also ()^I(_I(𝒳),_I(𝒴)) is a kernel for this map. We start considering the following commutative diagram:@C=48pt@R=30pt(I)((ι_n)_!ι_n^*(𝒳),𝒴)@<-[r]^-(-)∘∂^n[d]^-[origin=c]90∼_-ι_n^*(-)∘η_n (I)((ι_n+1)_!ι_n+1^*(𝒳),𝒴)[d]_-[origin=c]90∼^-ι_n+1^*(-)∘η_n+1 (I_≤ n)(ι_n^*(𝒳),ι_n^*(𝒴))@<-[r]^ι_n+1,n^*(-)[d]^-[origin=c]90∼_-_I_≤ n(-) (I_≤ n+1)(ι_n+1^*(𝒳),ι_n+1^*(𝒴))[d]_-[origin=c]90∼^-_I_≤ n+1(-) ()^I_≤ n(_I(𝒳)_ I_≤ n,_I(𝒴)_ I_≤ n)@<-[r]^(-)_ I_≤ n ()^I_≤ n+1(_I(𝒳)_ I_≤ n+1,_I(𝒴)_ I_≤ n+1)where the upper square commutes by Proposition <ref>, and the lower vertical maps are isomorphisms because of Proposition <ref> (and the fact that I_≤ n has finite length, for all ).Hence, identifying (I_≤ n)(∐_(ι_n)_!ι_n^*(𝒳),𝒴) with ∏_(I_≤ n)((ι_n)_!ι_n^*(𝒳),𝒴), and also identifying each (I_≤ n)((ι_n)_!ι_n^*(𝒳),𝒴) with ()^I_≤ n(_I(𝒳)_ I_≤ n,_I(𝒴)_ I_≤ n) via the isomorphism _I_≤ n(ι_n^*(-)∘η_n) (as in the columns of the above diagram), we can describe the action of (-)∘α as follows:∏_n()^I_≤ n(_I(𝒳)_I_≤ n,_I(𝒴)_I_≤ n) →∏_n()^I_≤ n(_I(𝒳)_I_≤ n,_I(𝒴)_I_≤ n) (ϕ_n)_n∈ ↦ (ϕ_n-(ϕ_n+1)_I_≤ n)_n∈.Thus, a sequence ϕ:=(ϕ_n)_n∈∈∏_n()^I_≤ n(_I(𝒳)_I_≤ n,_I(𝒴)_I_≤ n) is in the kernel of (-)∘α if, and only if, (ϕ_n+1)_I_≤ n=ϕ_n for all n∈, that is if, and only if, ϕ represents (the sequence of successive truncations of) a morphism _I(𝒳)→_I(𝒴).Let ^→ be a strong and stable derivator, I a small category of countable length and X∈()^I. 
Then, there is an object 𝒳∈(I) such that _I(𝒳)≅ X, provided ()(Σ^n X_i,X_j)=0 for all i,j∈ I and n>0.For each n∈, the inclusions ι_n I_≤ n→ I and ι_n+1,n I_≤ n→ I_≤ n+1 are cosieves and, therefore, the associated restrictions (-)_ I_≤ n()^I⟶()^I_≤ nand(-)_ I_≤ n+1()^I⟶()^I_≤ n+1have left adjoints that act as extension by zero functors (recall Remark <ref>). We denote these adjoints by (-)⊗ι_n()^I_≤ n⟶()^Iand(-)⊗ι_n+1,n()^I_≤ n⟶()^I_≤ n+1,respectively. We denote the counits of the above adjunctions by e^n=(e^n_i)_I(-)_ I_≤ n⊗ι_n⇒𝕀_()^Iand d^n=(d^n_i)_I_≤ n+1(-)_ I_≤ n⊗ι_n+1,n⇒𝕀_()^I_≤ n+1where e^n_i and d^n_i are identities if i∈ I_≤ n and 0 otherwise.Consider now the diagram X∈()^I of the statement, let X^n:=(X_ I_≤ n)⊗ι_n, for all n∈, and consider the counit d^n (X_ I_≤ n)⊗ι_n+1,n=(X_ I_≤ n+1)_ I_≤ n⊗ι_n+1,n⟶ X_ I_≤ n+1.Extending this counit we get a morphism d^n⊗ι_n+1 X^n→ X^n+1 (whose components relative to I_≤ n areidentities and the remaining components are trivial). Then there is a pointwise split-exact sequence in ()^I of the form @C=35pt∐_X^n[rrr]^(.47)α̅:= [cccc 1 -d^0⊗ι_1 10-d^1⊗ι_2 10⋱ ⋱]∐_X^n[rrr]^-β̅:= [cccc e^0e^1e^2⋯]X. By Proposition <ref>, for any n∈_>0 we can find an object 𝒳^n∈(I_≤ n) such that _I_≤ n(𝒳^n)≅ X_ I_≤ n. Furthermore, by Proposition <ref>, for each n∈ there is a unique morphism δ^n(ι_n+1,n)_!𝒳^n→𝒳^n+1 such that _I_≤ n+1(δ^n)=d^n. Consider also a natural isomorphism ξ^n (ι_n)_!⇒ (ι_n+1)_!∘ (ι_n+1,n)_!(which exists because these functors are both left adjoints to ι_n^*=ι_n+1,n^*∘ι_n^*) and construct an object 𝒳∈(I) as the cone of the map α in the following triangle:@C=14pt∐_(ι_n)_!(𝒳^n)[rrrrrrrr]^(.46)α:= [ccccc 1 -(ι_1)_!δ^0∘ξ^0 10-(ι_2)_!δ^1∘ξ^110⋱⋱] ∐_(ι_n)_!(𝒳^n)[r] 𝒳[r] (∐_(ι_n)_!(𝒳^n)).Just by construction, _I(α) is isomorphic to α̅, so the above triangle is pointwise split-exact and _I(𝒳) is the cokernel of_I(α). Since X is the cokernel of α̅, we deduce that _I(𝒳)≅ X.§.§ Lifting in general We finally have all the ingredients to complete the proof of the main result of this section: As in Subsection <ref>, consider the category of countable length Δ(I) and the homotopical epimorphism uΔ(I)→ I. By the results of Subsection <ref>, we know that the statements (1) and (2) hold in (Δ(I)) and we should try to restrict them along the fully faithful functor u^*(I)→(Δ(I)). (1) Let 𝒳, 𝒴∈(I) and consider the following commutative diagram:(I)(𝒳,𝒴)[d]|≅[rr]()^I(_I𝒳,_I𝒴)[d]|(*) (Δ(I))(u^*𝒳,u^*𝒴)[rr]|(.4)(**)()^Δ(I)((_I𝒳)∘ u,(_I𝒴)∘ u)where the leftmost vertical map is an isomorphism since u^* is fully faithful, (**) is an isomorphism by Proposition <ref> and the fact that Δ(I) has countable length, and (*) is an isomorphism by Lemma <ref>.(2) Given X as in the statement, by Proposition <ref> there is 𝒳_Δ(I) in (Δ(I)) such that _Δ(I)(𝒳_Δ(I))≅ X∘ u. Now one should just prove that 𝒳_Δ(I) does belong in the essential image of u^*, but this is true since being constant on fibers is a property that just depends on the underlying (incoherent) diagrams (see the discussion before Definition <ref>). 
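Before moving on to t-structures, let us record a standard special case of Theorem <ref>; the following formulation is ours and is only meant as an illustration. Let 𝒜 be a Grothendieck Abelian category and let 𝔻 be the canonical derivator enhancing 𝒟(𝒜) (see Example <ref>). If X:I→𝒜⊆𝒟(𝒜) is any diagram of objects of 𝒜, viewed as complexes concentrated in degree 0, then

𝒟(𝒜)(Σ^nX_i,X_j) ≅ Ext^-n_𝒜(X_i,X_j) = 0 for all n>0,

so the vanishing hypotheses of the theorem hold automatically. Hence every incoherent diagram of objects of 𝒜 lifts to a coherent diagram of shape I, and morphisms between such coherent diagrams are computed as morphisms of the underlying incoherent diagrams. This is precisely the phenomenon that the next section generalizes to the heart of an arbitrary t-structure.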
§ T-STRUCTURES ON STRONG AND STABLE DERIVATORS Let us start by fixing a strong and stable derivator^→.By definition, at-structure inis just at-structure in the base of, so let us also fix such at-structure=(,Σ)on()whose heart we denote by$̋.Furthermore, for each small category I, we let_I:={𝒳∈(I):𝒳_i∈, ∀ i∈ I}_I:={𝒳∈(I):𝒳_i∈, ∀ i∈ I}.The goal of this section is to prove Theorem <ref>, i.e., we will verify that each _I:=(_I,Σ_I) is a t-structure in (I) and, moreover, that the heart _̋I of _I is equivalent to the functor category ^̋I. Our proof will rely heavily on the results of Section <ref>; also, the scheme of the proof will be the same: we will verify our statement for categories of finite length, then for categories of countable length and, finally, in full generality. We start with the following definition: A subcategory 𝒞 of 𝔻() is said to be closed under taking homotopy colimits (resp.,directed homotopy colimits) with respect to 𝔻, when for any small category (resp., directed set) Iand any object 𝒳∈𝔻(I), one has that _I 𝒳∈𝒞 whenever 𝒳_i∈𝒞 for all i∈ I. Closedness under (inverse) homotopy limits is defined dually. In fact, when the classin the above definition is either an aisle or a co-aisle, then some closure properties are automatic:In the above setting, the aisle 𝒰 is closed under taking homotopy colimits and the co-aisle 𝒱 is closed under taking homotopy limits. Using the terminology of <cit.>, it is clear that 𝒰 is closed under coproducts and it follows from <cit.> or <cit.> that it is closed under homotopy pushouts. Then,by <cit.>, we conclude that 𝒰 is closed under homotopy colimits. The argument for the co-aisle is formally dual. Now we can prove the lifting property of t-structures for categories of finite length. In the above setting, _I=(_I,Σ_I) is a t-structure in (I), for any small category of finite length I.We proceed by induction on ℓ(I). If ℓ(I)=1, then I is a disjoint union of copies of and, by (Der.1), there is nothing to prove. Hence, let I be a small category of finite length ℓ(I)>1, fix Notation <ref>, and let us verify that _I is a t-structure. Note that, by Theorem <ref>, the axiom (t-S.1) holds, while the axiom (t-S.2) holdsby construction. Hence, we have just to verify (t-S.3). Indeed, take 𝒴∈(I) and let us construct a suitable truncation triangle with respect to _I. By inductive hypothesis, we can consider the following truncation triangles in (J) and (), respectively, @C=28pt𝒰_u^*𝒴[r]^-φ_uu^*𝒴[r]^-ψ_u 𝒱_u^*𝒴[r] Σ𝒰_u^*𝒴U_𝒴_i[r]^-φ_i 𝒴_i[r]^-ψ_iV_𝒴_i[r] Σ U_𝒴_iwith i ranging in I∖ J. Furthermore, since φ_i U_𝒴_i→𝒴_i is a coreflection of 𝒴_i onto , and i^*u_!𝒰_u^*𝒴≅_u/i_i^*𝒰_u^*𝒴∈ (by Proposition <ref>), there exists a unique morphism e_i i^*u_!𝒰_u^*𝒴→ U_𝒴_i that makes the following square commute:@C=70pt i^*u_!𝒰_u^*𝒴@.>[r]^-∃! 
e_i[d]_-i^*u_!φ_u U_𝒴_i[d]^-φ_ii^*u_!u^*𝒴[r]_-i^*ϵ_u 𝒴_i.Consider now the following commutative square in (I):@C=50pt@R=75pt∐_I∖ Ji_!i^*u_!𝒰_u^*𝒴[rr]|-α_𝒰:=[ccc⋱0i_!e_i0⋱ ⋯ -ϵ_i ⋯] [d]|-Φ^(1):=[ccc⋱0i_!i^*u_!φ_u0⋱] ∐_I∖ Ji_!U_𝒴_i⊔ u_!𝒰_u^*𝒴[d]|-[ccc|c⋱0 ⋮i_!φ_i0 0⋱ ⋮ ⋯ 0 ⋯ u_!φ_u ]=:Φ^(2) ∐_I∖ Ji_!i^*u_!u^*𝒴[rr]|-α:=[ccc⋱0i_!i^*ϵ_u0⋱ ⋯ -ϵ_i ⋯] ∐_I∖ Ji_!𝒴_i⊔ u_!u^*𝒴where the cone of α is just 𝒴, by Lemma <ref>.By the 3× 3 Lemma in triangulated categories we can complete the above square to a commutative diagram in (I), where all the rows and columns are distinguished triangles@R=15pt∐_I∖ Ji_!i^*u_!𝒰_u^*𝒴[rr]^-α_𝒰[dd]_-Φ^(1)∐_I∖ Ji_!U_𝒴_i⊔ u_!𝒰_u^*𝒴[dd]^Φ^(2)[rr]^-β_𝒰𝒰[dd][r]^(.65)+ ∐_I∖ Ji_!i^*u_!u^*𝒴[rr]^-α[dd]_-Ψ^(1)∐_I∖ Ji_!𝒴_i⊔ u_!u^*𝒴[rr]^-β[dd]^-Ψ^(2)𝒴[dd][r]^(.65)+ 𝒱^(1)[d]^(.65)+[rr]^-α_𝒱𝒱^(2)[d]^(.65)+[rr]^-β_𝒱𝒱[d]^(.65)+[r]^(.65)+and β=[[ ⋯ ϵ_i ⋯ ϵ_u ]]. We will have concluded if we can prove that 𝒰∈_I and 𝒱∈_I, that is, k^*𝒰∈ and k^*𝒱∈ for any k∈ I. We start from the case when k∈ J, and we apply k^*to the above 3× 3 diagram, obtaining the following commutative diagram in (), where all the rows and columns are distinguished triangles:@C=30pt@R=10pt 0[rr][dd]0⊔ k^*u_!𝒰_u^*𝒴[dd][rr]^-∼𝒰_k[dd][r]^(.65)+0[rr][dd]0⊔ k^*u_!u^*𝒴[rr]^-∼[dd]𝒴_k[dd][r]^(.65)+(𝒱^(1))_k[d]^(.65)+[rr](𝒱^(2))_k[d]^(.65)+[rr]^-∼𝒱_k[d]^(.65)+[r]^(.65)+Since k∈ J, the category u/k has a terminal object and, therefore, using (Der.4) and <cit.>, one shows that there is a natural isomorphism k^*u_!≅ k^*. Using this and the fact that k^* sends triangles to triangles, we can observe that the central column is a truncation triangle of 𝒴_k≅ k^*u_!u^*𝒴 with respect to the t-structure (𝒰, Σ𝒱) on (). Since also k^*𝒱^(1)=0 (see the first column), we have 𝒰_k≅ k^*u_!𝒰_u^*𝒴≅ U_𝒴_k∈ and 𝒱_k≅𝒱^(2)_k≅ V_𝒴_k∈.On the other hand, if k∈ I∖ J, applying k^* to the above diagram we get the following commutative diagram in (), where rows and columns are distinguished triangles:@R=10pt k^*k_!k^*u_!𝒰_u^*𝒴[rr][dd]k^*k_!U_𝒴_k⊔ k^*u_!𝒰_u^*𝒴[dd][rr]𝒰_k[dd][r]^(.65)+k^*k_!k^*u_!u^*𝒴[rr][dd]k^*k_!𝒴_k⊔ k^*u_!u^*𝒴[rr][dd]𝒴_k[dd][r]^(.65)+(𝒱^(1))_k[d]^(.65)+[rr](𝒱^(2))_k[d]^(.65)+[rr]𝒱_k[d]^(.65)+[r]^(.65)+The components k^*k_!k^*u_!𝒰_u^*𝒴→ k^*u_!𝒰_u^*𝒴 and k^*k_!k^*u_!u^*𝒴→ k^*u_!u^*𝒴 are isomorphisms by Lemma <ref> (since k→ I is fully faithful), so thefirst maps in each of the first two rows is a split monomorphism. Hence, the first two rows are split triangles and, therefore, the map 𝒰_k→𝒴_k is isomorphic to k^*k_!U_𝒴_k→ k^*k_!𝒴_k. Furthermore, by Example <ref> (and the fact that |I(k,k)|=1), there is a natural isomorphism k^*k_!⇒𝕀_(), therefore k^*k_!U_𝒴_k≅ U_𝒴_k∈ and k^*k_!V_𝒴_k≅ V_𝒴_k∈, showing that k^*k_!U_𝒴_k→ k^*k_!𝒴_k (and, therefore, also 𝒰_k→𝒴_k) is a coreflection onto . Hence, 𝒱_k∈ as desired.In the same setting as above, _I=(_I,Σ_I) is a t-structure in (I), for any small category of countable length I.As in the proof of Lemma <ref>, we have just to verify (t-S.3). Indeed, take 𝒳∈(I) and let us construct a suitable truncation triangle with respect to _I. Fix Notation <ref>, then, by Lemma <ref>, for any n∈_>0, there is a triangle𝒰^n[r]^φ_n ι_n^*𝒳[r] 𝒱^n[r] Σ𝒰^nwith 𝒰^n∈_I_≤ n and 𝒱^n∈_I_≤ n Applying (ι_n)_! we obtain a triangle in (I):@C=45pt (ι_n)_!𝒰^n[r]^-(ι_n)_!(φ_n)(ι_n)_!ι_n^*𝒳[r](ι_n)_!𝒱^n[r] Σ (ι_n)_!𝒰^n.Since (ι_n)_! 
is an extension by 0, we have that (ι_n)_!𝒰^n∈_I and (ι_n)_!𝒱^n∈_I, sothe triangle in (<ref>) is a truncation triangle with respect to _I and, as a consequence, (ι_n)_!(φ_n) is a coreflection of (ι_n)_!ι_n^*𝒳 onto _I. Consider the following solid diagram:@C=50pt@R=35pt (ι_1)_!𝒰^1[d]|(ι_1)_!(φ_1)@.>[r]^∃! ∂^1_𝒰 (ι_2)_!𝒰^2[d]|(ι_2)_!(φ_2)@.>[r]^∃! ∂^2_𝒰 …@.>[r] (ι_n)_!𝒰^n[d]|(ι_n)_!(φ_n)@.>[r]^∃! ∂^n_𝒰 …(ι_1)_!ι_1^*𝒳[r]^∂^1 (ι_2)_!ι_2^*𝒳[r]^∂^2 …[r] (ι_n)_!ι_n^*𝒳[r]^∂^n …Given n∈_>0, since (ι_n+1)_!(φ_n+1) is a coreflection onto _I and (ι_n)_!𝒰^n∈_I, there is a unique map ∂^n_𝒰 (ι_n)_!𝒰^n→ (ι_n+1)_!𝒰^n+1 such that (ι_n+1)_!(φ_n+1)∘∂^n_𝒰=∂^n∘ (ι_n)_!(φ_n). Take now k∈ I and note that, by Lemma <ref>, k^*(ϵ_n) is invertible for all n≥ d(k). Furthermore, k^*(ϵ_n+1)∘ k^*( ∂^n)=k^*(ϵ_n) by Lemma <ref> and, therefore, k^*(∂^n) is an isomorphism for all n≥ d(k). Similarly, k^*(∂^n_𝒰) is an isomorphism for all n≥ d(k), since k^*(∂^n_𝒰) is a coreflection ontoof the isomorphism k^*(∂^n). Taking the cones of the vertical maps in the above diagram, we obtain a sequence like the following one:@C=40pt (ι_1)_!𝒱^1[r]^∂_𝒱^1 (ι_2)_!𝒱^2[r]^∂_𝒱^2 …[r] (ι_n)_!𝒱^3[r]^∂_𝒱^n ….Given k∈ I, also k^*(∂_𝒱^n) is an isomorphism for all n≥ d(k). To see this just consider the morphism of triangles (k^*(∂_𝒰^n),k^*(∂^n),k^*(∂_𝒱^n)): as the first two components are invertible forn≥ d(k), then so has to be the third. We can now conclude by taking the Milnor colimits of these three sequences (i.e. the rightmost terms in triangles as in Lemma <ref>) to get the following triangle:𝒰→𝒳→𝒱→Σ𝒰with 𝒰∈_I, as 𝒰_k≅ (𝒰^d(k))_k∈, and 𝒱∈_I, as 𝒱_k≅ (𝒱^d(k))_k∈, for all k∈ I, by Example <ref>(2). Now we can prove Theorem <ref> as stated in the Introduction. That is, _I=(_I,Σ_I) is a t-structure in (I) for any small category I, and the functor _I(I)→()^I induces an equivalence of categories _̋I≅^̋I. As in the proof of Lemmas <ref> and <ref>, we have just to verify (t-S.3). Indeed, take 𝒳∈(I) and let us construct a suitable truncation triangle with respect to _I.As in Subsection <ref>, we consider the homotopical epimorphism uΔ(I)→ I. By Lemma<ref>,_Δ(I) is aon (Δ(I)), so we can take the following truncation triangle in (Δ(I)):𝒰_u^*𝒳→ u^*𝒳→𝒱_u^*𝒳→Σ𝒰_u^*𝒳.We have to verify that 𝒰_u^*𝒳 and 𝒱_u^*𝒳 belong to the essential image of u^*. For that, we need to prove that 𝒰_u^*𝒳 and 𝒱_u^*𝒳 are constant on fibers. Indeed, let i∈ I and consider the following diagram in (u^-1(i)):ι_i^*𝒰_u^*𝒳[r] ι_i^*u^*𝒳[r]@=[d]ι_i^*𝒱_u^*𝒳[r]Σι_i^*𝒰_u^*𝒳. _u^-1(i)^*𝒰_𝒳_i[r] _u^-1(i)^*𝒳_i[r] _u^-1(i)^*𝒱_𝒳_i[r] _u^-1(i)^*Σ𝒰_𝒳_iBoth the first and the second line are truncation triangles for ι_i^*u^*𝒳= _u^-1(i)^*𝒳_i in (u^-1(i)) with respect to the t-structure _u^-1(i). By the uniqueness oftruncations, ι_i^*𝒰_u^*𝒳≅_u^-1(i)^*𝒰_𝒳_i and ι_i^*𝒱_u^*𝒳≅_u^-1(i)^*𝒱_𝒳_i, so 𝒰_u^*𝒳 and 𝒱_u^*𝒳 are constant on fibers. Thus, the triangle (<ref>) lies in the essential image of u^* and, therefore, we can lift it back along u^* to obtain the desired truncation triangle of 𝒳 in (I).Finally, note that the functor _̋I→^̋I induced by _I is fully faithful and essentially surjective by Theorem <ref>. § FINITENESS CONDITIONS FOR OBJECTS IN THE BASE Let us start by fixing a strong and stable derivator ^→. In the first part of the section we recall what a closed monoidal derivator 𝕍 is, and what it means forto have a closed action by such a 𝕍. We then introduce the class of homotopically finitely presented objects in (), relative to a specific closed action on . 
In the second part of the section we introduce an intrinsic version of homotopic finite presentability for objects in (), where by “intrinsic” we mean that this notion does not depend on the choice of a specific action, as it just uses data fromitself. We conclude the section by proving that a given object in () is compact (in the usual sense of triangulated categories with coproducts) if, and only if, it is intrinsically homotopically finitely presented if, and only if, it is homotopically finitely presented with respect to any closed action onby a closed monoidal derivator 𝕍, whose tensor unit is a compact generator. §.§ Closed actions by a closed monoidal derivator Following <cit.>, amonoidal structure on a prederivator 𝕍 is given by*a tensor product ⊗𝕍×𝕍→𝕍 and*a tensor unit 𝕊𝕐_→𝕍 (here 𝕐_ stands for the terminal derivator, i.e. 𝕐_(I)= for each I∈, cf. Examples <ref> and <ref>),together with the usual unitality, associativity and symmetry constraints in the 2-category of prederivators. One can restrict this definition to derivators and, after some tedious work, arrive at the definition of aclosed monoidal derivator𝕍, see <cit.>. We will further say that 𝕍 is monogenic (cf. <cit.>) when it is further a strong and stable derivator, and the image of the unique object by the functor 𝕊:𝕐_1( )=→𝕍(), that we still denote by 𝕊, compactly generates 𝕍( ).Given a closed monoidal derivator 𝕍 we have, for each I,J∈, the functor ofinternal hom-objects 𝕍(I)^×𝕍(J)⟶𝕍(I^× J)(𝒳,𝒴) ⟼𝒳𝒴. The situation further generalizes as follows: if 𝕍 is a closed monoidal derivator, we say thathas a closed 𝕍-action if there is a morphism ⊗𝕍×→ together with associativity and unitality constraints in the 2-category of derivators which is at the same time aadjoint in the sense of <cit.>. In that case, <cit.> provides us again with aninternal hom-functor for each I,J∈, (I)^×(J) ⟶𝕍(I^× J).Moreover, if we specialize to I=, these bifunctors, for any fixedC∈(), naturally assemble to give a morphism of derivators C-⟶𝕍.Crucial for our purposes is the fact that, in this last situation, for all V∈𝕍() and X, Y∈(), we have an adjunction isomorphism,which is functorial in V, X and Y: ()(V⊗ X,Y)≅𝕍()(V,X Y).Our first example of this situationis aimed at algebraically minded readers: Letbe a Grothendieck Abelian category. Then, the derived category () is enriched over () (we have 𝐑_()^×() →()). Similarly, the canonical derivator _ enhancing () (see Example <ref>) is enriched over _ and from this one gets an internal hom bifunctor --=𝐑__(I)^×_(J) ⟶_(I^× J),Note that _ is strong and stable, and it is also closed monoidal with respect to usual derived tensor product. Furthermore, theunit object 𝕊∈_()= () is precisely ℤ, viewed as a stalk complex at zero, so that _ is monogenic. For I= andC∈_()=(), we have the obvious morphism of derivators C -=𝐑_(C,-)_⟶_.We next remind the reader that the situationapplies to any strong and stable derivator: Letbe a closed monoidal model category of spectra and _ be the corresponding strong stable derivator of spectra. More concretely, there are various mutually Quillen equivalent stable closed monoidal model categories of spectra which we can choose for , see <cit.>, and we pass to the corresponding homotopy strong stable derivator (see Example <ref>). 
This will be a closed monoidal derivator by <cit.>. Moreover, its unit object 𝕊 is the sphere spectrum (see <cit.>), which is compact in _()=Ho() (see our Appendix A). For the sake of completeness, we also remark that the Bousfield-Friedlander stable model category of spectra studied in Appendix A is not suitable to derive the monoidal structure in _: although it is also Quillen equivalent to the others, it does not have the required structure of a monoidal model category.

Now, given any strong and stable derivator, the discussion in <cit.> shows that there is a canonical closed action ⊗_×→. This in particular implies that for each C ∈(), we have a morphism of derivators C -= F(C,-)⟶_,called the function spectrum, which is right adjoint (internal to the 2-category of derivators) to -⊗ C_→.

§.§ Homotopically finite presentability relative to a closed action

All through this subsection, we assume that 𝔻 is a strong and stable derivator, 𝕍 a closed monoidal, strong and stable derivator, and we fix a closed action ⊗𝕍×⟶. Given C∈(), we have the corresponding internal hom-functor C -→𝕍. By construction, C - is a morphism of derivators and so it comes equipped with a natural isomorphism γ_u^C:=γ_u^C - u^*∘ (C -) ⟹̃ (C -)∘ u^*, for each u J→ I in . As in Subsection <ref>, one can then construct the following natural transformation: (γ_u^C)_! u_!∘ (C -) ⟹ (C -)∘ u_!.In particular, when I= and u:=_J J→, we get a canonical natural transformation (γ__J^C)_!_J∘ (C -)⟹ (C-)∘_J. Note that the homotopy colimit on the left hand side is taken in 𝕍, while that on the right hand side is taken in 𝔻. This leads to the following crucial definition: Given a small category I, an object C ∈𝔻() is said to be I-homotopically finitely presented in 𝔻, relative to 𝕍, when the last given natural transformation is an isomorphism, i.e., when the canonical map_I (C𝒳)⟶ C(_I 𝒳)is an isomorphism for all 𝒳∈𝔻(I). We say that C is homotopically finitely presented in 𝔻, relative to 𝕍, when it is I-homotopically finitely presented, for any directed set I.

In the algebraic case, we have the following very intuitive interpretation. In the situation of Example <ref>, for each C∈_()=(), the morphism C - is given by 𝐑_(C,-)_(I)= (^I)→ (^I)=_(I). Saying that C is I-homotopically finitely presented in _, relative to _, amounts to saying that the canonical morphism _I𝐑_(C,𝒳)⟶𝐑_(C, _I𝒳) is an isomorphism, for all 𝒳∈_(I)= (^I).

In the next lemma we show that, for certain shapes I, the I-homotopical finite presentability comes for free. We recall that I∈ is strictly homotopy finite if its nerve contains only finitely many non-degenerate simplices, <cit.>. In our terminology, this is equivalent to I being of finite length and having finitely many morphisms. A small category is homotopy finite if it is equivalent to one that is strictly homotopy finite. Every object C∈() is I-homotopically finitely presented in , relative to 𝕍, if the small category I is homotopy finite.The morphism C-→𝕍 is exact (=left exact, as we are in the stable setting) since it is right adjoint to -⊗ C (see <cit.>).
This is enough to conclude since, by <cit.>, every exact morphism between stable derivators preserves homotopy finite colimits.

We can now give a handy characterization of homotopical finite presentability: For an object C∈(), the following assertions are equivalent:*the morphism of derivators C -:→𝕍 preserves all homotopy colimits;*the object C is homotopically finitely presented in , relative to 𝕍;*the morphism of derivators C -:→𝕍 preserves coproducts.The implication “(1)⇒(2)” is clear, and “(2)⇒(3)” follows by Lemma <ref>. Finally, the implication “(3)⇒(1)” follows since C -:→𝕍 is an exact morphism of derivators, and so it preserves homotopy pushouts. By <cit.>, a morphism of derivators that preserves coproducts and homotopy pushouts preserves all homotopy colimits.

§.§ Intrinsic definition of homotopical finite presentability

In this subsection we present an alternative definition of homotopically finitely presented object in (), which depends only on 𝔻, and not on any action by a closed monoidal derivator. Indeed, given a small category I, consider the following adjunction: _Iη_Iϵ_I_I^*(I)⟶(). Recall also that, by Example <ref>, _I∘_I^*=κ_I()→()^I is the constant diagram functor, and so we get the following natural transformation _I(η_I)_I⟹κ_I∘_I(I)⟶()^I. In turn, this yields the following canonical morphism of Abelian groups: μ_C,𝒳_I()(C,𝒳_i)⟶()(C,_I𝒳), for any C∈() and 𝒳∈(I), which is natural in both variables. An object C∈() is intrinsically homotopically finitely presented in 𝔻 when, for every directed set I, the morphism μ_C,𝒳_I()(C,𝒳_i)⟶()(C,_I𝒳) is an isomorphism, for all 𝒳∈(I).

The following result will be crucial in the proof of Theorem <ref>. If an object C∈() is intrinsically homotopically finitely presented in 𝔻, then it is also compact in (). Let I be a set and adopt the notation of the proof of Lemma <ref>: P:=𝒫^<ω(I) is the directed set of finite subsets of I, and u I→ P the obvious inclusion. Given F∈ P, there is an equivalence F≅ u/F and, by (Der.4), we get the following isomorphism_F𝒳_ F≅_u/F _F^* 𝒳⟶̃(u_!𝒳)_Ffor each 𝒳∈(I). We can conclude by the following series of isomorphisms:∐_I()(C,X_i) ≅_P ∐_F ()(C,𝒳_i)≅_P ()(C,(u_!𝒳)_F)≅()(C,_P(u_!𝒳)) as C is intrinsically homo. f.p. and P is directed;≅()(C,(_I)_!𝒳) as _I=_P∘ u;≅()(C,∐_IX_i) as I is discrete.

§.§ Compactness versus homotopical finite presentability

In this final subsection we show that the two versions (extrinsic and intrinsic) of homotopical finite presentability coincide provided that, in the extrinsic case, the ⊗-unit of the given closed action is a compact generator. Furthermore, both versions of homotopical finite presentability are equivalent to compactness, in the usual sense of triangulated categories with coproducts. This generalizes an analogous result obtained in <cit.> for the special case of =_R, the canonical derivator enhancing ( R) for a ring R (cf. Example <ref>), and 𝕍=_. Let 𝕍 be a closed monoidal derivator which is monogenic, and 𝔻 a stable derivator with a closed 𝕍-action. The following assertions are equivalent for C∈():* C is intrinsically homotopically finitely presented in ;* C is compact in (), i.e., the functor ()(C,-)()→ preserves coproducts;*the morphism of derivators C -→𝕍 preserves coproducts;* C is homotopically finitely presented in , relative to 𝕍;*the morphism of derivators C -→𝕍 preserves all homotopy colimits. The equivalence of assertions (3–5) is Proposition <ref> and the implication “(1)⇒(2)” is Lemma <ref>. (2)⇔(3). By (Der.1), assertion (3) holds if the following functor preserves coproducts:C -()⟶𝕍().
Let then I be a set, 𝒳=(X_i)_I∈(I)≅()^I, and consider the canonical morphism λ =(γ__I^C)_! ∐_I(C X_i)⟶ C (∐_I X_i) in 𝕍(1), as in Subsection <ref>. Furthermore, consider the canonical morphism μ=μ_Σ^nC,𝒳∐_I()(Σ^n C,X_i)⟶()(Σ^n C,∐_IX_i)in , as in Subsection <ref>. Then, assertion (3) holds if, and only if, λ is an isomorphism for all 𝒳∈(I), and assertion (2) holds if, and only if, μ is an isomorphism, for all 𝒳∈(I) and all n∈ℤ. In turn, since the unit 𝕊 is a compact generator of 𝕍(), we know that λ is an isomorphism if, and only if, the map λ_*𝕍()(Σ^n𝕊,∐_I(C X_i))⟶𝕍()(Σ^n𝕊,C (∐_IX_i)) is an isomorphism, for all n∈ℤ. Now, to see that μ is an isomorphism if, and only if, λ_* is an isomorphism, we consider the following commutative diagram, where all vertical arrows are isomorphisms:@R=15pt 𝕍(1)(Σ^n𝕊,∐_ I(C X_i)) [r]^-λ_∗ 𝕍(1)(Σ^n𝕊,C (∐_IX_i)) ∐_I𝕍(1)(Σ^n𝕊,C X_i) [u]^-≅ 𝔻()(Σ^n𝕊⊗ C,∐_IX_i) [u]_-≅∐_I𝔻()(Σ^n𝕊⊗ C,X_i) [u]^-≅ 𝔻()(Σ^nC,∐_IX_i) [u]_-≅∐_I𝔻()(Σ^nC,X_i) [u]^-≅[ur]_-μFor the invertibility of the vertical arrows, use the natural isomorphism (𝕊⊗ -)≅𝕀_, the adjunction (-⊗ C)⊢ (C -) and the compactness of 𝕊 in 𝕍(). Therefore, λ_* is an isomorphism if, and only if, so is μ.

(2)⇒(1). As we now know that assertions (2–5) are all equivalent, for any given closed symmetric monoidal derivator 𝕍 which is monogenic and acts on 𝔻, we can assume in this implication that 𝕍=_ and that (C -)=F(C,-)→_ is the function spectrum (see Example <ref>). In particular, C is homotopically finitely presented, relative to _. Let then I be any directed set and 𝒳∈(I). As in the proof of the implication “(2)⇔(3)”, consider the canonical map λ_I(C𝒳)→ C_I𝒳, and the induced maps λ_*𝕍()(Σ^n𝕊,_I(C𝒳))⟶𝕍()(Σ^n𝕊, C_I𝒳) for each n∈ℤ. We are going to verify that λ_* is an isomorphism if, and only if, the following canonical map is an isomorphism for all n∈: μ=μ_Σ^n C,𝒳_I𝒟(Σ^n C,𝒳_i)⟶𝒟(Σ^n C,_I𝒳). In fact, arguing as above, we get a commutative diagram whose columns are isomorphisms:@R=15pt 𝕍(1)(Σ^n𝕊,_I(C𝒳)) [r]^-λ_∗ 𝕍(1)(Σ^n𝕊,C_I𝒳) _I𝕍(1)(Σ^n𝕊,C𝒳_i) [u]^-≅ 𝔻()(Σ^n𝕊⊗ C,_I 𝒳) [u]_-≅_I𝔻()(Σ^n𝕊⊗ C,𝒳_i) [u]^-≅ 𝔻()(Σ^nC,_I𝒳) [u]_-≅_I𝔻()(Σ^nC,𝒳_i) [u]^-≅[ur]_-μwhere the upper left vertical arrow is an isomorphism because the sphere spectrum 𝕊 is intrinsically homotopically finitely presented in _ (see Appendix A).

As an immediate consequence (see Example <ref>), we get: Let ^→ be a strong and stable derivator. The following assertions are equivalent for an object C∈():* C is intrinsically homotopically finitely presented in ;* C is a compact object of ();* C is homotopically finitely presented in , relative to _.

§ FINITENESS CONDITIONS AND DIRECTED COLIMITS IN THE HEART

Let us start by fixing a strong and stable derivator ^→. In this section we discuss some finiteness conditions on a t-structure 𝐭 = (,Σ) in (). The simplest condition which we can impose is that 𝒱 is closed under coproducts (this in fact makes sense in any triangulated category). However, it will turn out later in Example <ref> that this is not sufficient for the heart to be (Ab.5) Abelian. A stronger condition which does imply exactness of directed colimits in the heart is that 𝒱 is closed under directed homotopy colimits. This will establish Theorem <ref>.
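Before turning to the formal definitions, we recall, for the reader's convenience, the standard condition that they target (this recollection is ours and is not spelled out in the text above): an Abelian category 𝒜 with directed colimits satisfies (Ab.5) if, for every directed set I, the colimit functor

_I𝒜^I⟶𝒜

is exact or, equivalently, if directed colimits of short exact sequences in 𝒜 are again short exact.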
§.§ Homotopically smashing t-structures

Let us start by introducing the finiteness conditions which we are going to study: A t-structure =(,Σ) in () is said to be:* compactly generated when 𝒱 is of the form 𝒱=𝒮^⊥_≤ 0, where 𝒮⊆() is a set of compact objects (see Subsection <ref>);* homotopically smashing (with respect to 𝔻) when 𝒱 is closed under taking directed homotopy colimits;* smashing when 𝒱 is closed under taking coproducts.

The three notions relate as follows. Let 𝐭=(,Σ) be a t-structure in () and consider the following conditions:*compactly generated;*homotopically smashing;*smashing.Then, the implications “(1)⇒(2)⇒(3)” hold and none of them can be reversed in general.The implication “(1)⇒(2)” is a direct consequence of the definition of intrinsically homotopically finitely presented object and of Corollary <ref>. As for the implication “(2)⇒(3)”, let us assume that 𝐭 is homotopically smashing and let 𝒳=(X_i)_i∈ I be a family of objects of 𝒰^⊥. We have seen in the proof of Lemma <ref> that, looking at I as a discrete category, we can view 𝒳 as an object of 𝔻(I) and then there is a canonical isomorphism ∐_i∈ IX_i≅_P u_!𝒳, with the same notation as in that proof. Then we just need to check that (u_!𝒳)_F∈𝒰^⊥, for all finite subsets F⊆ I. But, in fact, (u_!𝒳)_F≅∐_i∈ FX_i, which is an object of 𝒰^⊥ since co-aisles are closed under finite coproducts. We refer to Examples <ref> and <ref> in the next section for explicit counterexamples showing that the implications in the statement cannot be reversed in general.

§.§ Directed colimits in the heart

Here we prove Theorem <ref>, as stated in the Introduction, i.e. that the heart of a homotopically smashing t-structure is an (Ab.5) Abelian category. We start with the following consequence of Theorem <ref>: Let 𝐭=(,Σ) be a t-structure in (). Given I∈ and 𝒳∈_̋I, _I _I 𝒳≅τ^Σ(_I 𝒳),where the colimit on the left hand side is taken in the heart =̋∩Σ. By Theorem <ref>, _I induces an equivalence F_I_̋I→^̋I, and we fix a quasi-inverse F_I^-1^̋I→_̋I. Now, _I is defined as the left adjoint to κ_I→̋^̋I so, composing the two adjunctions _I⊣κ_I and F_I⊣ F_I^-1, we obtain that _I∘ F_I is left adjoint to F_I^-1∘κ_I. Furthermore, F_I^-1∘κ_I≅_I^*_. On the other hand, (τ^Σ V)_ is left adjoint to the inclusion →̋ and _I is left adjoint to _I^*()→(I). Composing the restrictions to the corresponding subcategories of the two adjunctions, we see that τ^Σ V∘(_I)__̋I is a left adjoint to (_I^*)_→̋_̋I. Thus, we deduce the desired natural isomorphism: τ^Σ∘(_I)__̋I≅_I∘ (_I)__̋I.

The above lemma has the following immediate consequence: If 𝐭=(,Σ) is a homotopically smashing t-structure in (), I is a directed set and 𝒳∈_̋I, then _I _I 𝒳≅_I 𝒳.

We can now proceed with the proof of Theorem <ref>: In view of Proposition <ref>, it remains to prove the (Ab.5) condition for the heart of a homotopically smashing t-structure 𝐭=(,Σ). That is, given three diagrams X, Y, and Z I→$̋ for some directed set I, together with natural transformations f X⇒ Y and g Y⇒ Z, such that 0⟶ X_if_i⟶ Y_ig_i⟶Z_i⟶ 0is a short exact sequence in $̋ for any i∈ I, then0⟶_ IX_i⟶_IY_i⟶_IZ_i⟶ 0is short exact. By Theorem <ref>, the short exact sequence 0→ X→ Y→ Z→ 0 in ^̋I can be identified with a short exact sequence in _̋I⊆(I). Remember that a sequence in the heart of a t-structure is short exact if and only if it represents a triangle of the ambient category whose first three terms happen to lie in the heart. Hence, there is a map Z→Σ X such thatX⟶ Y⟶ Z⟶Σ Xis a triangle in (I).
Taking homotopy colimits we get a triangle in (): _I X⟶_I Y⟶_I Z⟶Σ_I X.As 𝐭 is homotopically smashing, _I X, _I Y and _I Z belong to $̋, so the following sequence in $̋ is short exact:0⟶_I X⟶_I Y⟶_I Z⟶ 0.One concludes by Corollary <ref> since _I X≅_I X_i, and similarly for Y and Z.

§ TILTED T-STRUCTURES AND EXAMPLES

This section is devoted to providing examples of smashing t-structures which are not homotopically smashing and of homotopically smashing t-structures which are not compactly generated. This will complete the proof of Proposition <ref>. The strategy is to start with the canonical t-structure of a suitable Grothendieck Abelian category, tilt it using a torsion pair (see Example <ref>), and relate the properties of the resulting tilted t-structure to the properties of the torsion pair. Throughout the section, ^→ will be a fixed, strong and stable derivator.

§.§ Homotopically smashing tilts of t-structures

Our first result characterizes the situation when the Happel-Reiten-Smalø tilt _τ of a homotopically smashing t-structure is homotopically smashing again. Let 𝐭=(,Σ) be a homotopically smashing t-structure in (), and let _τ := (_τ,Σ_τ) be the tilt of this t-structure with respect to a torsion pair τ=(𝒯,ℱ) in the heart =̋∩Σ of 𝐭. Then _τ is a smashing t-structure. Moreover, the following conditions are equivalent:* _τ is homotopically smashing;* ℱ is closed under taking directed colimits in $̋. Since the heart $̋ of 𝐭 is an (Ab.5) Abelian category by Theorem <ref>, the canonical map∐_I X_i →∏_I X_iis a monomorphism for any set I and (X_i)_I⊆$̋. Indeed, the latter map is a directed colimit of the split inclusions ∐_J X_i ≅∏_J X_i →∏_I X_i, where J runs over all finite subsets of I. In particular, ℱ is closed under coproducts both in $̋ and in () and, since 𝒱 is closed under coproducts in () by Proposition <ref>, so is 𝒱_τ. Thus, _τ is smashing.

(1)⇒(2). Let I be a directed set and let F=(F_i)_i∈ I be a direct system in $̋, that is, an object in ^̋I. By Theorem <ref>, ^̋I≅_̋I⊆(I), so we can identify F with an object in _̋I and, as such, there is an isomorphism _I F≅_I F_i, where the colimit on the right-hand side is taken in $̋ (see Corollary <ref>). Now, if F_i∈ℱ for each i∈ I, the fact that _τ is homotopically smashing tells us that _I F∈_τ and so _I F∈_τ∩=̋ℱ.

(2)⇒(1). Let I be a directed set and let 𝒴∈(I) be such that 𝒴_i∈_τ for any i∈ I. Consider the truncation triangle of 𝒴 with respect to the lifted t-structure _I in (I):𝒰⟶𝒴⟶𝒱⟶Σ𝒰,where 𝒰_i∈𝒰 and 𝒱_i∈𝒱, for any i∈ I. For any i∈ I we get a triangle in ():Σ^-1𝒱_i⟶𝒰_i⟶𝒴_i⟶𝒱_i.Since Σ^-1𝒱_i∈Σ^-1⊆Σ and 𝒴_i∈_τ⊆Σ, we get that 𝒰_i∈∩Σ=$̋. On the other hand, Σ^-1𝒱_i∈Σ^-1⊆Σ^-1_τ⊆_τ, and so 𝒰_i∈_τ. These two observations together give us that 𝒰_i∈∩̋_τ=ℱ. Taking now the homotopy colimit of the triangle in (<ref>), we get the following triangle in (): _I 𝒰⟶_I 𝒴⟶_I 𝒱⟶Σ_I 𝒰.As we know that 𝒰_i∈ℱ⊆$̋ for any i∈ I, we have _I 𝒰≅_I𝒰_i and, by our assumptions, this last directed colimit belongs to ℱ. We can now conclude by noting that _I 𝒴∈ℱ*=_τ.

Let _^→ be the canonical derivator enhancing the derived category () of a Grothendieck Abelian category (Example <ref>), and let 𝐭=(,Σ) be the canonical t-structure in (). If τ=(𝒯,ℱ) is a torsion pair in the heart that is not of finite type (that is, ℱ is not closed under directed colimits), then the Happel-Reiten-Smalø tilt _τ of the canonical t-structure with respect to τ is smashing but not homotopically smashing. Explicitly, we can take the category of Abelian groups and for 𝒯 the class of all divisible Abelian groups. In this case, the heart _̋τ of _τ is not an (Ab.5) Abelian category.
Indeed, given any prime number p, we have in _̋τ a chain of monomorphisms Σℤ_p ↪Σℤ_p^2↪Σℤ_p^3↪…whose directed colimit is τ^Σ_τ(Σℤ_p^∞) = 0 by Lemma <ref>.

§.§ Compactly generated tilts of t-structures

Now we focus on what conditions a torsion pair must satisfy in order for the corresponding tilted t-structure to be compactly generated. We start with a lemma which says that compact objects in the aisle induce finitely presented objects in the heart (cp. <cit.>). Let 𝐭=(,Σ) be a homotopically smashing t-structure in () and C∈𝒰. If C is compact in (), then H_^0(C) is finitely presented in $̋.
Consider the torsion pair τ =(𝒯_I,ℱ_I), where 𝒯_I:={T∈ R:TI=T} and ℱ_I:={F∈ R:FI=0}.Then, the tilted t-structure _τ in ( R) is homotopically smashing but not compactly generated. Indeed, the torsion-free class ℱ_I is closed under directed colimits in R, so _τ is homotopically smashing by Proposition <ref>. On the other hand, due to the Nakayama Lemma, 𝒯_I does not contain any non-zero finitely generated module. Then _τ cannot be compactly generated as a t-structure because of Proposition <ref>.Example <ref> gives a negative answer to <cit.>.

§ ON THE EXISTENCE OF A SET OF GENERATORS

We conclude with the discussion of when the heart of a t-structure in the base () of a strong and stable derivator is actually a Grothendieck Abelian category. Unlike the exactness of directed colimits, the existence of a generator is, to a large extent, a purely technical condition which is usually satisfied in examples “from nature”. In the world of ∞-categories, a condition for the heart of a t-structure to be Grothendieck Abelian was given by Lurie in <cit.>. We give a similar (and short) discussion also in our setting, restricting to derivators of the form =_(,), where (,, B, F) is a combinatorial stable model category (see Example <ref>). These derivators are called “derivators of small presentation” and they can be characterized internally to the 2-category of derivators, see <cit.>. Recall that a model category (,, B, F) is combinatorial if *the underlying category is locally presentable (see <cit.>);*the model structure (, B, F) is cofibrantly generated. These model categories were introduced by J. Smith. Let us recall here the following properties whose proof essentially follows from <cit.> or <cit.>: Let (,, B, F) be a combinatorial model category and 𝒮⊆ a set of objects. Then, there exists an infinite regular cardinal λ such that*the functors (S,-) commute with λ-directed colimits for all S∈𝒮;*there are co/fibrant replacement functors which preserve λ-directed colimits;* λ-directed colimits of weak equivalences are again weak equivalences;*given a λ-directed set I and a diagram X∈^I, there is an isomorphism in () L_IX≅ F(_I X), where F→() is the canonical functor.

(1) follows since the underlying category is locally presentable, while (2,3) are <cit.>. (4). Given a λ-directed set I, since _I induces a functor _I →, the universal property of F_I^I→(^I)=^I[_I^-1] yields a unique functor completing the following solid diagram to a commutative square:@C=50pt^I[d]__I[r]^-F_I (^I)@.>[d] [r]_-F ()Of course, such a functor automatically satisfies the universal property for being the total left derived functor of _I, hence we deduce the isomorphism in the statement.

As remarked in Example <ref>, for derivators arising from model categories, the functor _I(^I)→() is the total left derived functor of _I^I→. Using this identification, one easily deduces the following corollary: Let (,, B, F) be a combinatorial model category, = _(,) the induced strong derivator as in Example <ref> and λ an infinite regular cardinal satisfying (4) in Lemma <ref>. If I is a λ-directed set and X∈^I, then there is an isomorphism in () _IX≅ F(_I X), where F→() is the canonical functor.

We can prove the existence of a generator for the following large class of t-structures which includes all homotopically smashing ones (they constitute a special case for λ=ℵ_0): Let (,, B, F) be a stable and combinatorial model category, and 𝐭=(,Σ) a t-structure in ().
We say that 𝐭 is λ-accessibly embedded in (), with λ an infinite regular cardinal, if 𝒱 is closed under λ-directed homotopy colimits in _(,).

Let (,, B, F) be a stable and combinatorial model category, λ an infinite regular cardinal satisfying (3) and (4) in Lemma <ref>, and 𝐭=(,Σ) a λ-accessibly embedded t-structure with heart =̋∩Σ in (). Then, the following composition functor preserves λ-directed colimits: F⟶()H^0⟶.̋ Given a λ-directed set I, consider the following diagram:@C=50pt^I[r]^-F_I[d]__I (^I)[r]^-H^0_I[d]|_I _̋I[d]^_I [r]_-F ()[r]_-H^0 .̋By Theorem <ref>, _̋I≅^̋I and, identifying these two categories, (_I)__̋I is conjugated to _I^̋I→$̋ (see the proof of Corollary <ref>). This observation tells us that it is enough to show that the external square in the above diagram commutes. We verify instead that the smaller squares do commute. In fact, the commutativity of the square on the left hand side is given by Corollary <ref>, while the commutativity of the square on the right hand side follows from the fact that both 𝒰 (by Proposition <ref>) and 𝒱 (by assumption) are closed under λ-directed homotopy colimits.

Before we prove, as a main result of the section, Theorem <ref> from the introduction, we give a discussion of its assumptions: A triangulated category is called algebraic if it is the stable category of a Frobenius exact category <cit.>. Compactly generated algebraic triangulated categories are then triangle equivalent to the derived categories of small dg categories, and such derived categories are the homotopy categories of combinatorial model categories of dg modules. We refer to <cit.> for a more detailed discussion. More generally, one can consider algebraic triangulated categories which are well generated in the sense of <cit.>. These are, by <cit.>, simply localizations of compactly generated algebraic triangulated categories D with respect to localizing subcategories L generated by a small set S of objects. Such a localization is also the homotopy category of a combinatorial model category. As an upshot, each well generated algebraic triangulated category is the homotopy category of a combinatorial model category. An analogous result for well generated topological triangulated categories (i.e., homotopy categories of spectral model categories) can be found in <cit.>.

Let (,, B, F) be a stable combinatorial model category and 𝐭=(,Σ) a t-structure in () generated by a small set 𝒮 of objects. Then there exists an infinite regular cardinal λ such that 𝐭 is λ-accessibly embedded. Up to replacing 𝒮 by another set of the same cardinality, we can assume that any object in 𝒮 is cofibrant. Let now λ be an infinite regular cardinal with the properties (1–4) described in Lemma <ref>, and fix a fibrant replacement functor R that commutes with λ-directed colimits. We have to prove that 𝒱=𝒮^⊥ is closed under taking λ-directed homotopy colimits. Indeed, let I be a λ-directed set and X∈^I such that X_i∈𝒱 for all i∈ I. For any C∈𝒮 and i∈ I, there is a commutative diagram as follows: (C, RX_i)@->>[d]_(*)[rr](C, _IRX_i)@->>[d]^(**)0=()(C, RX_i)[rr]()(C, _IRX_i)where (*) is surjective since C is cofibrant and RX_i is fibrant, while (**) is surjective since _IRX_i≅_IRX_i≅ R(_IX_i) by Lemma <ref> and the fact that R commutes with λ-directed colimits.
Using the universal property of _I in , we obtain the following commutative diagram: _I(C, RX_i)@->>[d][rr]^≅(C, _IRX_i)@->>[d]0=_I()(C, RX_i)[rr]()(C, _IRX_i)where the top row is invertible by Lemma <ref>, so that ()(C, _IRX_i)=0, for all C∈𝒮, and so _IX_i≅_IRX_i∈𝒮^⊥=𝒱.

We remind the reader that we need to prove the following. If (,, B, F) is a stable combinatorial model category and 𝐭=(,Σ) is a λ-accessibly embedded t-structure in (), then the heart =̋∩Σ has a generator. We may also, without loss of generality, assume that λ satisfies conditions (3) and (4) from Lemma <ref>. Since the category is locally presentable, there exists a set 𝒬_0 of objects such that every object C is a λ-directed colimit C = _I_C (Q_i) of a λ-direct system (Q_i)_i∈ I_C in 𝒬_0. Consider the following set of objects in $̋:𝒬:={H^0(Q):Q∈𝒬_0}.Now notice that H^0(F(C))≅ H^0(F(_I_CQ_i))≅_I_CH^0(F(Q_i)), where the second isomorphism follows by Proposition <ref>. Since any object in $̋ is of the form H^0(F(C)) for some C, we have just verified that 𝒬 is a set of generators for $̋.

If the t-structure in the above proof is compactly generated, then it is homotopically smashing by Proposition <ref>. Since the heart is (Ab.5) by Theorem <ref> and it has a generator by Theorem <ref>, it is a Grothendieck Abelian category. Corollary <ref> is a straightforward consequence of Theorems <ref> and <ref>. On the other hand, under the same set of hypotheses, one even has that every object of $̋ is a directed colimit of objects in H^0_(𝒯), where 𝒮 is a set of compact generators for 𝐭 and 𝒯 is the smallest subcategory of () containing 𝒮 which is closed under extensions, suspension and direct summands. Indeed, 𝒯 is precisely the class of compact objects of () which are contained in the aisle 𝒰. We refer to <cit.> or <cit.> for this fact. The conclusion then follows from <cit.>, where it was shown, while this paper was under review, that the heart $̋ is locally finitely presentable and H^0_(𝒯) is precisely the class of finitely presented objects (recall Proposition <ref>).

§ APPENDIX A: DIRECTED HOMOTOPY COLIMITS OF SPECTRA

The main point of this appendix is to establish that the stable homotopy groups of spectra (in the topological sense) commute with (homotopy) directed colimits. In the terminology of Subsection <ref>, one can equivalently say that the sphere spectrum is intrinsically homotopically finitely presented in _. This can be viewed as a topological analogue of the fact that the cohomology groups of complexes of Abelian groups commute with directed colimits. Although this result seems to be well-known to experts in homotopy theory (see, e.g., <cit.> or <cit.>), we are lacking an adequate reference.

Model structures on categories of diagrams. When discussing homotopy colimits in detail, we are confronted with the following question: given a model category with a class of weak equivalences and a small category I, is there a suitable model structure on the diagram category ^I with pointwise weak equivalences? Although the existence of such model structures is not clear in full generality and for many purposes one can work around this problem with other techniques (cf. <cit.>), they nevertheless exist in many situations and make the discussion easier there.
Most notably, one usually considers two “extremal” model structures, provided that they exist: Given a model category (, , B, F) and a small category I, a model structure on ^I is called *the projective model structure if the weak equivalences and fibrations are defined pointwise (i.e., a morphism f X → Y in ^I is a weak equivalence or fibration if f_i X_i → Y_i is a weak equivalence or fibration, respectively, for each i∈ I); *the injective model structure if the weak equivalences and cofibrations are defined pointwise.

Note that both the projective and the injective model structures, when they exist, are unique. It is also a standard fact that the class of injective fibrations is included in the class of projective fibrations and dually for cofibrations. To see this, note that for each i∈ I, the evaluation functor (-)_i^I →, X ↦ X(i) has a left adjoint -⊗ i→^I given by (X⊗ i)(k)≅∐_I(i,k)X, for all k∈ I (see Example <ref>). One readily checks that -⊗ i preserves cofibrations and trivial cofibrations if ^I is equipped with the injective model structure. Thus, (-⊗ i, (-)_i) is a Quillen adjunction and (-)_i sends injective fibrations to fibrations for each i∈ I.

Another fact which we will need is the following observation regarding the structure of fibrant and cofibrant objects. Let (,, B, F) be a model category and (I,≤) a partially ordered set (viewed as a small category). If a projective model structure exists, X∈^I is a projectively cofibrant object and i≤ j are elements of I, then X(i) → X(j) is a cofibration in . Dually, if an injective model structure exists and X is an injectively fibrant object, then X(i) → X(j) is a fibration in . We only prove the part regarding the injective model structure; the other part is dual. In view of <cit.>, we need to prove that, given any commutative square U [d]_-c[r]^-uX(i) [d] V [r]_-v@.>[ur]||h X(j) in , where c U → V is a trivial cofibration, the dotted arrow can be filled in so that both triangles commute. To construct h, we start with the fact that X is injectively fibrant, i.e., (V',X) →(U',X) is surjective for any pointwise trivial cofibration c' U' → V' in ^I. We apply this property to a specially crafted c', based on the morphism c above. We let V' = V ⊗ i, i.e., V'(k) = V if k ≥ i and is the initial object 0∈ otherwise. Let also: U'(k) = V if k ≥ j; U if k ≥ i and k ≱j; 0 otherwise. The morphisms U'(k) → U'(ℓ) are copies of c if k≥ i, k≱j and ℓ≥ j, and the identity morphisms or the morphisms from the initial object, otherwise. There is an obvious morphism c' U' → V' whose components are just identity morphisms and copies of c. Furthermore, the commutative square (<ref>) allows us to define a morphism u' U' → X such that * u'(k) is the composition of v with X(j)→ X(k) if k≥ j, * u'(k) is the composition of u with X(i)→ X(k) if k≥ i, but k≱j and * u'(k) is the morphism from the initial object otherwise. As mentioned above, our assumption dictates that u' U' → X factors through c' U' → V' via a map h' V' → X. The restrictions of the morphisms c' and h' to the components i,j∈ I yield the following commutative diagram in , @C=50pt U [d]_-c[r]^-cV @=[d] [r]^-h'(i)X(i) [d] V @=[r] V [r]_-h'(j)X(j), where the compositions in the rows are u and v respectively. It follows that v=h'(j) and h = h'(i) fits into (<ref>).

Our main motivation for considering the projective model structure is that, if it exists, the constant diagram functor κ_I→^I preserves fibrations and trivial fibrations, so that (_I, κ_I) is a Quillen adjunction.
In other words, the total left derived functor 𝐋_I(^I) →() exists and it can be computed using projectively cofibrant resolutions of diagrams. We will denote 𝐋_I by _I and call it the homotopy colimit functor. Of course, formally dual statements hold for limits and the injective model structure. Finally, we touch the problem of the existence of projective and injective model structures in the case of combinatorial model structures (see Section <ref>): Let (,, B, F) be a combinatorial model category and I a small category. Then the diagram category ^I admits both the projective and the injective model structures.

Model categories of simplicial sets and spectra. Let Δ be the category with all finite ordinals 1, 2, 3, …, where 𝐧={0→ 1→…→ (n-1)}, as objects and order-preserving maps as morphisms. Here we keep our convention for ordinals from set-theory rather than from topology (there one often denotes by [n] the ordinal 𝐧+1 as it indexes the n-dimensional simplices). The category of simplicial sets is defined as the category of functors Δ^op→. As customary, we denote by Δ[n] := Δ(-,(𝐧+1)) the representable functors.

A simplicial set can be viewed as a combinatorial model for a topological space. More precisely, there is a geometric realization functor |-|, sending X to the topological space | X|. This is a left adjoint functor which preserves finite limits up to a weak homotopy equivalence. More in detail, recall that a continuous map of topological spaces f X → Y is a weak homotopy equivalence if π_n(f)π_n(X,x) →π_n(Y,f(x)) is a bijection for all n≥ 0 and all base points x∈ X (here, π_0(X,x) is just the set of all path components of X, while π_n(X,x) is a group for n≥ 1 and an Abelian group for n≥ 2). Now if I∈ is a finite category and X∈^I, the canonical map |lim_i∈ I X_i|→lim_i∈ I| X_i| is a bijection of sets, but the topologies may not agree. Nevertheless, it is always a weak homotopy equivalence; see <cit.> for details.

The topological spaces of the form | X| have very nice properties as they naturally have a structure of CW-complexes <cit.>. In fact, if X⊆ Y is an inclusion of simplicial sets, then | X| is naturally a CW-subcomplex of | Y| in the sense of <cit.>. This follows, e.g., directly from the proof of <cit.>. Given X ∈, x∈ X(1) and n≥ 0, one defines π_n(X,x):=π_n(| X|,| x|) with the base point corresponding to x. It is also possible to define π_n(X,x) combinatorially directly on the simplicial set X, see <cit.>. The category comes equipped with a standard model structure such that cofibrations are inclusions of simplicial sets, fibrations are the so-called Kan fibrations, and f X → Y is a weak equivalence if π_n(f)π_n(X,x) →π_n(Y,f(x)) is a bijection for all n≥ 0 and all base points x∈ X(1) (or, equivalently, by <cit.>, if | f| is a homotopy equivalence of topological spaces).

Since homotopy groups need a choice of a base point, it will be helpful to work with the category _* of pointed simplicial sets, which is simply the slice category Δ[0]/. In pedestrian terms, its objects are pairs (X,x), where X∈ and x∈ X(1), and the morphisms are base point-preserving morphisms of simplicial sets. This category also carries a model structure where a morphism f (X,x) → (Y,y) is a weak equivalence, cofibration or fibration if the underlying map X→ Y is such (see <cit.>). If I is a directed set and (X_i,x_i)_i∈ I∈_*^I, then there is a canonical homomorphism _Iπ_n(X_i,x_i) →π_n(_I(X_i,x_i)) for each n≥ 0.
More explicitly, we first replace (X_i,x_i)_i∈ I by a pointwise weakly equivalent projectively cofibrant diagram (X'_i,x'_i), then apply π_n to the adjunction unit η (X'_i,x'_i) →κ_I(_I (X'_i,x'_i)) = κ_I(_I (X_i,x_i)), and finally take the colimit map of the resulting cocone of sets or groups. In the above setting, the map _Iπ_n(X_i,x_i) →π_n(_I(X_i,x_i)) is a bijection. The morphisms X'_i → X'_j in the projectively cofibrant diagram (X'_i,x'_i)∈_*^I are inclusions by Lemma <ref>, so we can assume that X'_i are simplicial subsets of _I X'_i and that _I X'_i is the directed union of the X'_i. As any compact subset of |_I X'_i | is contained in | X'_i|, for some i∈ I, by <cit.>, we can conclude by <cit.>.

Since our main objects of interest are stable Grothendieck derivators and any stable derivator is enriched over spectra by <cit.>, our main goal is the analogue of Lemma <ref> for spectra. To this end, let us quickly recall one model structure for the category of spectra from <cit.>. First note that the category carries a Cartesian (closed) symmetric monoidal structure (,×,Δ[0]). Now the forgetful functor _*→ has a left adjoint which sends X∈ to the disjoint union X_+ := X ∪Δ[0], and this adjunction allows us to define a unique monoidal structure (,∧,Δ[0]_+) such that ∧ preserves colimits in each component and X ↦ X_+ is a monoidal functor. The functor ∧ is called the smash product.

The key object for the definition of a spectrum is a combinatorial model for the topological circle. We can define 𝕊^1∈ as the coequalizer of Δ[0] ⇉Δ[1], where the two maps come from the two morphisms 1→2 in Δ. As this construction gives a canonical map Δ[0]→𝕊^1, we can view 𝕊^1 as an object of _*. A spectrum is a sequence X = (X^n,σ^n) indexed by the natural numbers such that X^n∈_* and σ^n𝕊^1∧ X^n → X^n+1 is a map of pointed simplicial sets for each n∈. Maps of spectra f (X^n,σ^n) → (Y^n,σ'^n) are defined in the obvious way as collections of maps f^n X^n→ Y^n such that f^n+1σ^n = σ'^n (𝕊^1∧ f^n) for all n. The category of spectra will be denoted by .

For a spectrum X = (X^n,σ^n) and m∈, we can define the m-th stable homotopy group π^s_m(X) := _n≫ 0π_m+n(X^n) (see <cit.> for details). In fact, π_m^s is a functor → and one can define the class of weak equivalences as those morphisms f X → Y for which π_m^s(f) is an isomorphism for all m∈. This class is part of the Bousfield-Friedlander model structure (,, B, F), <cit.>. A morphism f X → Y is a cofibration in this model structure if f^0 X^0 → Y^0 and the pushout morphisms X^n+1⊔_𝕊^1∧ X^n (𝕊^1∧ Y^n) ⟶ Y^n+1 are cofibrations (i.e., inclusions) in _* for all n≥ 0. It follows by induction on n (e.g., using <cit.>) that all f^n X^n→ Y^n are cofibrations in _*. Fibrant spectra X are those for which each X^n is fibrant in _* (i.e., X^n is a Kan complex) and the maps of topological spaces | X^n|→| X^n+1|^|𝕊^1| adjoint to |σ^n||𝕊^1|∧| X^n|→| X^n+1| are weak homotopy equivalences (i.e., they induce bijections for all π_n(-,x) with x ∈| X^n| and n≥ 0). This model structure is known to be combinatorial (to see this, either combine <cit.> with <cit.>, or directly apply <cit.>).

Now we can prove the desired result (cp. <cit.>): Let I be a directed set and X ∈^I. Then the canonical map _I π_m^s(X(i)) →π_m^s(_I X) is an isomorphism for each m∈. The projective model structure on ^I exists by Proposition <ref>. If X' is a projectively cofibrant replacement of X, the maps X'(i)^n → X'(j)^n of pointed simplicial sets are inclusions (by Lemma <ref>) for each i≤ j in I and each n ≥ 0.
Using Lemma <ref>, π_m^s(_I X') = _n≫ 0π_m+n(_I X'(i)^n) ≅_I_n≫ 0π_m+n(X'(i)^n) = _Iπ_m^s(X'(i)).We conclude by combining this with _I X ≅_I X'.

The category of spectra is a stable model category, so _ is a strong, stable derivator and () = _() is a triangulated category. There is a privileged object in () which plays the role of ℤ in (): the sphere spectrum 𝕊 = (𝕊^n, 𝕀_𝕊^n+1), where for each n≥0 we define 𝕊^n := 𝕊^1∧…∧𝕊^1 (n times; we formally put 𝕊^0 = Δ[0]_+, the monoidal unit for the smash product). The sphere spectrum 𝕊 is intrinsically homotopically finitely presented in _. Since there is a natural equivalence ()(𝕊,-) ≅π_0^s of functors ()→, the result immediately follows by Proposition <ref>.

Manuel Saorín – Departamento de Matemáticas, Universidad de Murcia, Aptdo. 4021, 30100 Espinardo, Murcia, SPAIN. Jan Šťovíček – Department of Algebra, Faculty of Mathematics and Physics, Charles University in Prague, Sokolovská 83, 18675 Praha 8, CZECH REPUBLIC. Simone Virili – Departamento de Matemáticas, Universidad de Murcia, Aptdo. 4021, 30100 Espinardo, Murcia, SPAIN.
Topological invariants for Floquet-Bloch systems with chiral, time-reversal, or particle-hole symmetry

Holger Fehske
Institut für Physik, Ernst-Moritz-Arndt-Universität Greifswald, 17487 Greifswald, Germany
(correspondence: [email protected])

December 30, 2023
=====================================================================================================

We introduce ℤ_2-valued bulk invariants for symmetry-protected topological phases in 2+1 dimensional driven quantum systems. These invariants adapt the W_3-invariant, expressed as a sum over degeneracy points of the propagator, to the respective symmetry class of the Floquet-Bloch Hamiltonian. The bulk-boundary correspondence that holds for each invariant relates a non-zero value of the bulk invariant to the existence of symmetry-protected topological boundary states. To demonstrate this correspondence we apply our invariants to a chiral Harper, time-reversal Kane-Mele, and particle-hole symmetric graphene model with periodic driving, where they successfully predict the appearance of boundary states that exist despite the trivial topological character of the Floquet bands. Especially for particle-hole symmetry, the combination of the W_3- and the ℤ_2-invariants allows us to distinguish between weak and strong topological phases.

§ INTRODUCTION

Topological states of matter <cit.> have become the subject of intensive research activities over the past decade. More recently, unconventional topological phases in periodically driven systems <cit.> have moved into focus. Driving allows for non-trivial topological phases even if each individual Floquet band is topologically trivial. These phases cannot be characterized by static invariants, such as the Chern numbers of the Floquet bands, but only through invariants that depend on the entire dynamical evolution of the system <cit.>. Irradiated solid state systems <cit.> and photonic crystals <cit.>, where the third spatial dimension represents the time axis, are promising candidates for the realization of these new topological phases.

The relevant topological invariant of driven 2+1 dimensional systems is the W_3-invariant of unitary maps <cit.>, which is evaluated for the Floquet-Bloch propagator U(k⃗,t) that solves the Schrödinger equation i ∂_t U(k⃗,t) = H(k⃗,t) U(k⃗,t) with a periodic Hamiltonian H(k⃗, t+T) = H(k⃗, t). The bulk-boundary correspondence for the W_3-invariant guarantees that the value of W_3(ϵ) equals the number of chiral boundary states in the gap around the quasienergy ϵ.

The situation changes again for driven systems with additional symmetries. Symmetry-protected boundary states appear in pairs of opposite chirality, such that the W_3-invariant can no longer characterize the non-trivial topological phases <cit.>. Two questions arise immediately: Can the phases be characterized by new invariants? Can these invariants be computed for complicated Hamiltonians and Floquet-Bloch propagators? In this paper we try to answer both questions affirmatively by deriving and evaluating ℤ_2-valued bulk invariants for Floquet-Bloch systems with chiral, time-reversal, or particle-hole symmetry. In each case, the symmetry is given by a relation of the form H(k⃗, t) = ± S H(k̂⃗̂, ± t) S^-1 for the time-dependent Bloch Hamiltonian H(k⃗,t), with an (anti-)unitary operator S and an involution k⃗↦k̂⃗̂ on the Brillouin zone ℬ. The symmetry relation implies a zero W_3-invariant in certain gaps, because the degeneracy points of U(k⃗, t) that contribute to W_3(ϵ) occur in symmetric pairs and cancel. Conceptually, the new symmetry-adapted invariants count only one partner of each pair of degeneracy points. Since the result depends on which partner is counted, the new invariants are ℤ_2-valued. Symmetry-protected topological boundary states appear in gaps where the symmetry relation enforces W_3(ϵ)=0, but the ℤ_2-invariants are non-zero.

Topological invariants for Floquet-Bloch systems with and without additional symmetries have been introduced before <cit.>, and our constructions resemble some of them <cit.> in various aspects. However, most constructions in the literature differ for each symmetry. One of our goals is to show that the construction of invariants in terms of degeneracy points of U(k⃗, t) applies to each symmetry equally, with only the obvious minimal modifications. In this way the constructions described here constitute a unified approach to topological invariants in Floquet-Bloch systems with symmetries. Our presentation begins in Sec. <ref> with the discussion of an expression for the W_3-invariant that is particularly well suited for the following constructions, before the different invariants for chiral, time-reversal, and particle-hole symmetry are introduced in Sec. <ref>. Sec. <ref> summarizes our conclusions, and the appendices (App. <ref>–App. <ref>) give details on the derivations in the main text.

§ W_3-INVARIANT AND DEGENERACY POINTS

The starting point for the construction of the ℤ_2-invariants is the expression W_3(ϵ)=∑_ν=1^n ∑_i=1^dp N^ν(ϵ, d⃗_i) C^ν(d⃗_i)of the W_3-invariant as a sum over all degeneracy points i = 1,…, dp of the Floquet-Bloch propagator U(k⃗,t) that occur during time evolution 0 ≤ t ≤ T.
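The propagator entering this expression is easily obtained numerically. A minimal Python sketch (the Trotter step count and the midpoint rule are our choices, not part of the formalism; h is any function returning the Bloch Hamiltonian H(k⃗,s) at fixed momentum):

import numpy as np
from scipy.linalg import expm

def propagator(h, t, steps=400):
    # U(k,t) at fixed momentum: time-ordered product of short-time
    # exponentials of the Bloch Hamiltonian h(s) = H(k,s)
    dt = t / steps
    u = np.eye(h(0.0).shape[0], dtype=complex)
    for m in range(steps):
        u = expm(-1j * dt * h((m + 0.5) * dt)) @ u  # later times act from the left
    return u

The eigenvalues e^-iϵ of U(k⃗,t) give the quasienergies modulo 2π; the continuous bands ϵ^ν(k⃗,t) used below require, in addition, tracking of the eigenvectors between successive time steps.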
The symmetry relation implies a zero W_3-invariant in certain gaps, because the degeneracy points of U(k⃗, t) that contribute to W_3(ϵ) occur in symmetric pairs and cancel. Conceptually, the new symmetry-adapted invariants count only one partner of each pair of degeneracy points. Since the result depends on which partner is counted, the new invariants are ℤ_2-valued. Symmetry-protected topological boundary states appear in gaps where the symmetry relation enforcesW_3(ϵ)=0, but the ℤ_2-invariants are non-zero.Topological invariants for Floquet-Bloch systems with and without additional symmetries have been introduced before <cit.>, and our constructions resemble some of them <cit.> in various aspects. However, most constructions in the literature differ for each symmetry.One of our goals is to show that the construction of invariants in terms of degeneracy points of U(k⃗, t) applies to each symmetry equally, with only the obvious minimal modifications. In this way the constructionsdescribed here constitute a unified approach to topological invariants in Floquet-Bloch systems with symmetries. Our presentation begins in Sec. <ref> with the discussion of an expression for the W_3-invariant that is particularly well suited for the following constructions, before the different invariants for chiral, time-reversal, and particle-hole symmetry are introduced in Sec. <ref>. Sec. <ref> summarizes our conclusions, and the appendices (App. <ref>–App. <ref>) give details on the derivations in the main text. § W_3-INVARIANT AND DEGENERACY POINTSThe starting point for the construction of the ℤ_2-invariants is the expression W_3(ϵ)=∑_ν=1^n ∑_i=1^dp N^ν(ϵ, d⃗_i) C^ν(d⃗_i)of the W_3-invariant as a sum over all degeneracy points i = 1,…, dp of the Floquet-Bloch propagator U(k⃗,t) that occur during time evolution 0 ≤ t ≤ T.Eq. (<ref>) is a modified version of an expression for W_3(ϵ) given in Ref. HAF17. As explained in App. <ref>, which contains a detailed derivation, it generalizes a similar expression introduced in Ref. 1367-2630-17-12-125014.A unique feature of Eq. (<ref>) is the invariance of all quantities under general shifts ϵ(·) ↦ϵ(·) + 2 π m of the Floquet quasienergies, whereby the ambiguity of mapping eigenvalues e^-ϵ(·) of U(·) to quasienergies ϵ(·) is resolved from the outset.For this reason, Eq. (<ref>) isparticularly convenient for numerical evaluation, e.g., with the algorithm from Ref. HAF17.Note that for the sake of clarity of the main presentation we assume in Eq. (<ref>) that the bands are topologically trivial for t → 0. The general case is given in the appendix.To evaluate Eq. (<ref>), we must decompose U(k⃗, t) = ∑_ν=1^n e^-ϵ^ν |s⃗^ν⟩⟨s⃗^ν| into bands ν = 1, …, n with quasi­energies ϵ^ν≡ϵ^ν(k⃗,t) and eigenvectors s⃗^ν≡s⃗^ν(k⃗, t). Quasienergies are measured in units of 1/T, and defined up to multiples of 2 π. We assume that the ϵ^ν(k⃗,t) are continuous functions. At t=T, ϵ^ν(k⃗, T) agrees (modulo 2 π) with the Floquet quasienergyderived from the eigenvalues of U(k⃗, T). A degeneracy pointd⃗_i = (k⃗_i, t_i, ϵ_i) occurs whenever the quasienergies ϵ^ν(k⃗_i,t_i), ϵ^μ(k⃗_i,t_i) of two bands νμdiffer by a multiple of 2 π, such thate^-ϵ^ν = e^-ϵ^μ = e^-ϵ_i for two eigenvalues of U(k⃗_i, t_i). With each degeneracy point, we can associate the Chern numbers C^ν(d⃗_i) =∮_𝒮(d⃗_i) F_α^ν dS^α, given as the integral of the Berry curvature 2 π F_α^ν(k⃗,t)=ϵ_αβγ∂^β(s⃗^ν(k⃗,t)^† ∂^γs⃗^ν(k⃗,t)) over a small surface 𝒮(d⃗_i) enclosing the degeneracy point. 
It is C^ν(d⃗_i) = - C^μ(d⃗_i) ≠ 0 only for the bands ν, μ that touch in the degeneracy point. The contribution from each degeneracy point is multiplied by the integer N^ν(ϵ, d⃗_i) = ⌈ (ϵ^ν(k⃗_i, t_i) - ϵ)/(2 π) ⌉ + ⌈ (ϵ-ϵ^ν(k⃗, T))/(2 π) ⌉that counts how often band ν crosses the gap at ϵ while it evolves from the degeneracy point at t = t_i to its final position at t=T. Here, ⌈·⌉ denotes rounding up to the next integer. Since ϵ lies in a gap, N^ν(ϵ, d⃗_i) does not depend on k⃗.

Moving from one gap at ϵ to the next gap at ϵ', both being separated by band ν, the value of N^ν(ϵ, d⃗_i) changes by one, such that W_3(ϵ) changes by C^ν = ∑_i=1^dp C^ν(d⃗_i). The value of C^ν is just the Chern number of band ν at t=T. Note that when we move once through the quasienergy spectrum, letting ϵ↦ϵ + 2 π, we change W_3(ϵ) by ∑_ν C^ν = 0. In the situation sketched in Fig. <ref>, we have N^ν(ϵ, d⃗_i) = 1 (or N^ν(ϵ, d⃗_i) = 0) for the band directly below (or above) the gap at ϵ. Here, where the bands of U(k⃗, t) do not wind around the circle independently, W_3(ϵ) is simply the sum over the degeneracy points in each gap.

§ ℤ_2-INVARIANTS FOR FLOQUET-BLOCH SYSTEMS WITH SYMMETRIES

For the construction of the new ℤ_2-invariants we adapt Eq. (<ref>), essentially by including only half of the degeneracy points in the summation. We will now, for each of the three symmetries, introduce the respective invariant, formulate the bulk-boundary correspondence between the invariant and symmetry-protected topological boundary states, and present an exemplary application to a Floquet-Bloch system with the specific symmetry.

§.§ Chiral symmetry. The symmetry relation for chiral symmetry, realized as a sublattice symmetry on a bipartite lattice, isH_ch(k⃗, t) = - S H_ch(k⃗ + k⃗_π ,T-t) S^-1with a unitary operator S, and a reciprocal lattice vector k⃗_π corresponding to the sublattice decomposition (e.g., k⃗_π = (π,π) for a square lattice). Note that the symmetry relation (<ref>) differs from the standard definition of chiral symmetry <cit.>, which does not contain the momentum shift k⃗_π. The inclusion of the momentum shift k⃗_π is crucial for the existence of symmetry-protected boundary states and of the ℤ_2-invariant defined below. As detailed in App. <ref>, a Hamiltonian H_ch(·) that fulfills Eq. (<ref>) also fulfills the standard chiral symmetry relation but possesses an additional symmetry that protects the topological phases and boundary states. Without the k⃗_π–shift, chiral symmetry does not allow for the symmetry-protected boundary states observed here <cit.>.

Because of the T-t argument on the right hand side, the symmetry relation (<ref>) does not extend to U(k⃗, t) but only to the time-symmetrized propagator U_⋆(k⃗,t) = U(k⃗, (t + T)/2) U^†(k⃗, (T-t)/2), for which it implies S U_⋆(k⃗ + k⃗_π,t) S^-1 = U_⋆^†(k⃗, t). Therefore, degeneracy points of U_⋆(·) occur in pairs d⃗_i = (k⃗_i, t_i, ϵ_i), d̂⃗̂_i = (k⃗_i + k⃗_π, t_i, -ϵ_i) with opposite sign of C^ν(d⃗_i) = - C^ν(d̂⃗̂_i). The W_3-invariant, computed from U_⋆(·), fulfills W_3(-ϵ) = - W_3(ϵ), especially W_3(ϵ) = 0 for a gap at ϵ = 0, π. Note that U_⋆(·) belongs to a family of propagators that are related to U(·) by the homotopy s ↦ U(k⃗,(1-s) t+sT)U^†(k⃗,s(T-t)). For s=0, we obtain the original propagator U(·), for s=1/2 the symmetrized propagator U_⋆(·). Since U_⋆(·) is homotopic to U(·), with fixed boundary values U_⋆(k⃗, 0) = 1 and U_⋆(k⃗, T) = U(k⃗, T), we obtain the same result if W_3(ϵ) is computed with the original propagator U(·). In this computation, however, the cancellation of degeneracy points would not be obvious.
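The symmetry relation for U_⋆(·) stated above can be verified by a short computation, which we include for completeness. Since both sides solve i ∂_t X(k⃗,t) = H_ch(k⃗,t) X(k⃗,t) with X(k⃗,0)=1, the chiral symmetry relation gives

U(k⃗,t) = S U(k⃗+k⃗_π, T-t) U^†(k⃗+k⃗_π, T) S^-1 ,

and inserting this into the definition of U_⋆(·) yields

U_⋆(k⃗,t) = S U(k⃗+k⃗_π, (T-t)/2) U^†(k⃗+k⃗_π, (T+t)/2) S^-1 = S U_⋆^†(k⃗+k⃗_π, t) S^-1 ,

which is equivalent to the stated relation. The analogous relation for time-reversal symmetry in the next subsection follows in the same way, with k⃗+k⃗_π replaced by -k⃗ and the antiunitarity of Θ supplying the complex conjugation.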
We now define a ℤ_2-invariant, for ϵ = 0 or ϵ=π, viaW_ch(ϵ) ≡∑_ν=1^n ∑_i=1^dp/2 N^ν(ϵ, d⃗_i) C^ν(d⃗_i) mod 2 ,where the upper limit dp/2 in the sum over i indicates that exactly one degeneracy point of each symmetric pair d⃗_i, d̂⃗̂_i is included. Depending on which points are included the sum can differ by an even number, such that W_ch(ϵ) ∈ℤ_2. Since the degeneracy points in each pair are separated by k⃗_π, a homotopy of H_ch(·) that respects chiral symmetry cannot annihilate the degeneracy points. Therefore, W_ch(ϵ) is invariant under such a homotopy.

A non-zero value of W_ch(ϵ) indicates that an odd number of pairs of degeneracy points occur in the gap at ϵ during time-evolution from 0 to T. If a boundary is introduced into the system, say along the x-direction, the first pair of degeneracy points d⃗_i, d̂⃗̂_i gives rise to two boundary states B_I, B_II of opposite chirality that appear immediately after t_i at momenta (k⃗_i)_x, (k⃗_i + k⃗_π)_x. During the subsequent time-evolution the dispersion of these boundary states is related by ϵ_I(k_x) = - ϵ_II(k_x + π) due to chiral symmetry. Therefore, the boundary states are protected: They cannot annihilate each other, because the number of crossings through ϵ=0, π is fixed by the above relation. The pair of boundary states can disappear only through the appearance of a second pair of degeneracy points at a later t_j > t_i. In this way, each pair flips the value of W_ch(ϵ) and the number of symmetry-protected boundary states in the respective gap. This consideration establishes the bulk-boundary correspondence for chiral symmetry: A non-zero bulk invariant W_ch(ϵ) corresponds to the existence of a pair of symmetry-protected boundary states with opposite chirality in the gap at ϵ.

Chiral symmetry is realized in the extended Harper model on a square lattice <cit.>H_ch(t) = ∑_ij[ J_x(t) ( e^2πiα j c^†_i+1,j c_ij + h.c. ) + J_y ( c^†_i,j+1 c_ij + h.c. )],provided that J_x(T-t) = J_x(t). The rational parameter α=p/n controls the number n of Floquet bands. Note that the (magnetic) unit cell of this model has one element in x-direction and n elements in y-direction. For the results in Fig. <ref> we set J_x(t)=J_x,1+J_x,2cos(2 π t/T), with α=1/3, J_x,1=2, J_x,2=1, J_y=2. Since n is odd, chiral symmetry prevents the opening of a gap at ϵ =0. In the gap at ϵ = π, where W_3(π)=0, a pair of symmetry-protected boundary states exists in accordance with the non-zero value of W_ch(π). Note for the interpretation of Fig. <ref> that according to the magnetic unit cell for α=1/3 the two boundary states along the y-axis can coexist at three different quasienergies for a given k_y, but indeed cross the gap at ϵ=π only once with opposite chirality.

In summary, we see that Eq. (<ref>) defines a ℤ_2-valued bulk invariant for chiral symmetry, which predicts the appearance (or absence) of a symmetry-protected topological phase and of the corresponding boundary states. A different ℤ_2-invariant, which is constructed for a finite system with absorbing boundaries, has been introduced in Ref. PhysRevB.93.075405, where also the `weak' or `strong' nature of topological phases with chiral symmetry is addressed. To relate these results to our ℤ_2-invariant we include in App. <ref> additional data for different boundary orientations in the Harper model from Eq. (<ref>).
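The evaluation of the two invariants defined so far is straightforward once the degeneracy points are known; the following minimal Python sketch makes this concrete. The data layout (one record per degeneracy point, mapping each of the two touching bands ν to the pair of continuous quasienergy ϵ^ν(k⃗_i,t_i) and Chern number C^ν(d⃗_i), plus one representative final quasienergy per band) is our own choice and not part of the formalism:

from math import ceil, pi

def n_gap_crossings(eps_dp, eps_final, eps_gap):
    # N^nu(eps, d_i) for band nu, built from the continuous
    # quasienergies at the degeneracy time t_i and at t = T
    return ceil((eps_dp - eps_gap) / (2 * pi)) + \
           ceil((eps_gap - eps_final) / (2 * pi))

def w3(deg_points, eps_final, eps_gap):
    # W_3(eps): bands that do not touch in d_i have C^nu(d_i) = 0
    # and are therefore omitted from the records
    return sum(n_gap_crossings(eps_dp, eps_final[nu], eps_gap) * chern
               for dp in deg_points
               for nu, (eps_dp, chern) in dp.items())

def w_ch(half_of_deg_points, eps_final, eps_gap):
    # W_ch(eps): the same sum, restricted to exactly one partner
    # of each symmetric pair of degeneracy points, taken modulo 2
    return w3(half_of_deg_points, eps_final, eps_gap) % 2

The time-reversal invariant introduced in the next subsection is evaluated in exactly the same way; only the rule that pairs the degeneracy points changes.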
§.§ Time-reversal symmetry. The symmetry relation for time-reversal symmetry of fermionic particles isH_tr(k⃗, t) = Θ H_tr(-k⃗ ,T-t) Θ^-1 ,with an anti-unitary operator Θ for which Θ^2=-1. The symmetry relation (<ref>) implies Θ U_⋆(-k⃗,t) Θ^-1 = U_⋆^†(k⃗,t), again for the time-symmetrized propagator U_⋆(·). Therefore, degeneracy points of U_⋆(k⃗,t) occur in pairs d⃗_i = (k⃗_i,t_i, ϵ_i), d̂⃗̂_i = (-k⃗_i, t_i, ϵ_i) with opposite sign of C^ν(d⃗_i) = - C^ν(d̂⃗̂_i). It is W_3(ϵ) = 0 in each gap.

We now define a ℤ_2-invariantW_tr(ϵ) ≡∑_ν=1^2n∑_i=1^dp/2 N^ν(ϵ, d⃗_i) C^ν(d⃗_i) mod 2 ,where again only one degeneracy point from each symmetric pair is included in the sum. Note that the bands of U_⋆(·) appear in Kramers pairs <cit.> which, if arranged in this specific order, fulfill ϵ^2ν-1(-k⃗,t)=ϵ^2ν(k⃗,t). The two bands of each Kramers pair are degenerate at the invariant momenta (IM) k⃗≡ - k⃗ (modulo a reciprocal lattice vector). The Kramers degeneracy at the IM, which is enforced by time-reversal symmetry for all t, must be distinguished from the degeneracy points that contribute in Eq. (<ref>): These occur only at certain t_i and involve two bands from two different Kramers pairs.

The considerations leading to a bulk-boundary correspondence are similar to those for chiral symmetry. Again, a pair of degeneracy points d⃗_i, d̂⃗̂_i gives rise to two boundary states, which now appear at momenta (k⃗_i)_x, - (k⃗_i)_x. Their dispersion relations are connected by ϵ_I(k_x) = ϵ_II(-k_x), with Kramers degeneracy at the IM k_x ≡ - k_x. Because of Θ^2 = -1 the boundary states are two-fold degenerate at the IM, which prevents their mutual annihilation. Continuing with the reasoning as before, we conclude that a non-zero value of W_tr(ϵ) implies the existence of a pair of symmetry-protected boundary states with opposite chirality in the gap at ϵ. If we move from one gap at ϵ to the next gap at ϵ', separated by a Kramers pair of bands 2 ν -1, 2 ν, the value of W_tr(ϵ) changes by W_tr(ϵ') -W_tr(ϵ) ≡∑_i=1^dp/2 ( C^2 ν-1(d⃗_i) + C^2 ν(d⃗_i)) mod 2. The right hand side of this expression gives just the Kane-Mele invariant <cit.> of the respective Kramers pair (see App. <ref>).

Time-reversal symmetry is realized in the extended Kane-Mele model on a graphene lattice <cit.>H_tr(t) = J_1(t) ∑_⟨ i,j ⟩ c^†_i c_j + i J_2(t) ∑_⟨⟨ i,j ⟩⟩ν_ij c^†_i σ_z c_j+λ_ν∑_i ξ_i c^†_i c_i + i λ_R ∑_⟨ i,j ⟩ c^†_i (σ× d_ij)_z c_j,provided that J_1,2(T-t) = J_1,2(t). For the results in Fig. <ref> we set J_1(t)=J_a+J_b cos(2 π t/T), J_2(t)=J_c+J_d cos(2 π t/T) with J_a=0.9, J_b=1.8, J_c=0.6, J_d=1.2, and λ_ν=1.8, λ_R=0.3. The W_tr-invariant correctly predicts the appearance of symmetry-protected boundary states in the gaps at ϵ=0 and ϵ=π, while the Kane-Mele invariants of the Floquet bands and the W_3-invariant vanish.

In summary, we see that Eq. (<ref>) defines a ℤ_2-valued bulk invariant for time-reversal symmetry. The construction of this invariant closely resembles the construction from Ref. 1367-2630-17-12-125014, to which it reduces under the additional conditions stated in App. <ref> for the W_3-invariant. A different ℤ_2-invariant has been introduced in Ref. PhysRevLett.114.106806, which is based on the original expression <cit.> for the W_3-invariant and requires a more complicated auxiliary construction <cit.> of a time-symmetrized propagator.

§.§ Particle-hole symmetry. The symmetry relation for particle-hole symmetry of fermionic particles isH_ph(k⃗, t) = - Π H_ph(-k⃗ ,t) Π^-1 ,with an anti-unitary operator Π for which Π^2=1.
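In contrast to the chiral and time-reversal relations, the argument on the right hand side of this relation is t itself rather than T-t. The symmetry therefore carries over to the unsymmetrized propagator, by a one-line computation that we include for completeness: using the antiunitarity of Π,

i ∂_t [ Π U(-k⃗,t) Π^-1 ] = - Π H_ph(-k⃗,t) U(-k⃗,t) Π^-1 = H_ph(k⃗,t)  Π U(-k⃗,t) Π^-1 ,

so Π U(-k⃗,t) Π^-1 and U(k⃗,t) solve the same initial value problem.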
The symmetry relation (<ref>) implies Π U(-k⃗,t) Π^-1 = U(k⃗,t), for the original propagator U(·). Degeneracy points of U(k⃗,t) again occur in pairs d⃗_i = (k⃗_i, t_i, ϵ_i), d̂⃗̂_i = (-k⃗_i, t_i, -ϵ_i), but now with the same sign of C^ν(d⃗_i) = C^ν(d̂⃗̂_i). We can only conclude W_3(ϵ) = W_3(-ϵ), and in contrast to chiral and time-reversal symmetry the symmetry relation does not enforce W_3(ϵ)=0 in any gap. Despite this difference, symmetry-protected boundary states exist also for particle-hole symmetry, because the IM k⃗ ≡ -k⃗ again have specific significance but play the opposite role as in the case of time-reversal symmetry. There, Θ^2 = -1 forbids single unpaired boundary states at the IM, while here Π^2 = 1 is compatible with their appearance. An unpaired boundary state in the gaps at ϵ = 0, π, which is pinned at the IM, is protected by particle-hole symmetry <cit.>. These states are associated with unpaired degeneracy points of U(k⃗, t) at the IM. Let the four IM in the 2+1 dimensional bulk system be M⃗_0 = 0, M⃗_1 = b⃗_1/2, M⃗_2 = b⃗_2/2, M⃗_3 = (b⃗_1 + b⃗_2)/2, for two primitive reciprocal lattice vectors b⃗_1, b⃗_2. If we introduce a boundary along a primitive lattice vector a⃗, with a⃗·b⃗_1,2 ∈ {0, 2π}, the four IM are projected onto two momenta k_a⃗ = a⃗·M⃗_m ∈ {0, π}. Symmetry-protected boundary states, with dispersion relation ϵ(-k_a⃗) = -ϵ(k_a⃗), can exist at both momenta. To capture this situation, we need a total of four ℤ_2-invariants, defined for α = 0, π and ϵ = 0, π as
W^α_ph(ϵ) = ∑_ν=1^n ∑_{k⃗_i ∈ {M⃗_m}, a⃗·k⃗_i = α} N^ν(ϵ, d⃗_i) C^ν(d⃗_i)   (mod 2).
In Eq. (<ref>) only unpaired degeneracy points at the two IM M⃗_m with a⃗·M⃗_m = α contribute. Therefore, a non-zero W_ph-invariant implies the existence of a symmetry-protected boundary state that is pinned at the respective momentum k_a⃗ = α. For example, W^π_ph(0) ≠ 0 corresponds to a symmetry-protected boundary state with ϵ(k_a⃗ = π) = 0 in the gap at ϵ=0. Note that we assume here the absence of boundary states for t = 0 (but see App. <ref> for an extended discussion). The W_ph-invariants only count unpaired degeneracy points, which necessarily occur at IM. The W_3-invariant also counts paired degeneracy points with opposite momenta ±k⃗_i that occur away from the IM. Since paired degeneracy points change the W_3-invariant by an even number, we have
W^0_ph(ϵ) + W^π_ph(ϵ) ≡ W_3(ϵ)   (mod 2).
According to the summation in Eq. (<ref>) the W_ph-invariants depend on the boundary orientation given by a⃗. Especially if W_3(ϵ) = 0 a `weak' topological phase can occur <cit.>, where two symmetry-protected boundary states exist on some boundaries where W^0_ph = W^π_ph = 1, but not on other boundaries where W^0_ph = W^π_ph = 0. If, on the other hand, W_3(ϵ) ≠ 0 in a `strong' topological phase, boundary states occur on each boundary. Especially for odd W_3(ϵ), we must have non-zero W_ph-invariants for each boundary orientation, and thus a symmetry-protected boundary state at either k_a⃗ = 0 or k_a⃗ = π. Particle-hole symmetry is realized in the graphene lattice model <cit.>
H_ph(t) = ∑_r⃗ ∑_l=1^3 J_l(t) c^†_B,r⃗ c_A,r⃗+δ_l + h.c.,
without further constraints on the J_l(t). The J_l(t) are periodically varied according to the protocol in Ref. PhysRevB.93.075405. For the results in Fig. <ref> we set J_s,1=-3π/2, J_s,2=-3π/2, J_s,3=3π/2, J_u,1=0, J_u,2=-1.2, J_u,3=0.9. In Fig.
<ref> we recognize the weak topological phase just discussed: On a zigzag boundary along a lattice vector a⃗_3, with invariants W^0_ph(ϵ) = W^π_ph(ϵ) ≠ 0, we observe in each gap two symmetry-protected boundary states with opposite chirality at momenta k_3^z = 0, π. On an armchair boundary along a nearest-neighbor vector δ_1, with invariants W^0_ph(ϵ) = W^π_ph(ϵ) = 0, no boundary states cross ϵ=0 or ϵ=π. The W_ph-invariants, together with the zero W_3-invariant, correctly describe this situation. Note that for a hexagonal lattice, with three inequivalent orientations for each boundary type, an exhaustive analysis is significantly more complicated than suggested by Fig. <ref>. For details we refer the reader to App. <ref>. In summary, we see that Eq. (<ref>) defines four ℤ_2-valued bulk invariants for particle-hole symmetry, which predict the appearance of symmetry-protected boundary states at k_a⃗ = 0, π depending on the boundary orientation. Since non-zero W_ph-invariants are compatible with both W_3(ϵ)=0 and W_3(ϵ) ≠ 0, weak and strong topological phases can be distinguished. The possible combinations of the four invariants for fixed W_3(ϵ) are given by the summation rule stated above. Different ℤ_2-invariants have been introduced in Ref. PhysRevB.93.075405, in the form of scattering invariants for finite systems.

§ CONCLUSIONS

The ℤ_2-invariants introduced here allow for the classification of topological phases in driven systems with chiral, time-reversal, or particle-hole symmetry. In this way, they complement the W_3-invariant for driven systems without additional symmetries. The ℤ_2-invariants are related to previous constructions for symmetry-protected topological phases <cit.>, but they combine two substantial aspects. First, they are bulk invariants of driven systems, and a bulk-boundary correspondence holds for each invariant. Second, they are given by simple and explicit expressions that involve the (time-symmetrized) Floquet-Bloch propagator, but require no complicated auxiliary constructions. Quite intuitively, the invariants are defined through counting of half of the degeneracy points that appear in symmetric pairs. Note that the invariants depend on the entire time evolution of U(k⃗,t) over one period 0 ≤ t ≤ T, as required for driven systems with the possibility of anomalous boundary states <cit.>. Once the degeneracy points are known, computation of the invariants according to Eqs. (<ref>), (<ref>), (<ref>) is straightforward. Particularly efficient computation of the ℤ_2-invariants is possible with the algorithm from Ref. HAF17. These aspects should make the ℤ_2-invariants viable tools in the analysis of driven systems with symmetries. For the three generic models considered here, the invariants correctly predict the appearance of symmetry-protected topological boundary states, even if the static invariants and the W_3-invariant vanish. Concerning the nature of these states, chiral and time-reversal symmetry are set apart from particle-hole symmetry. In the latter case, the existence of symmetry-protected states depends on the orientation of the boundary, similar to the situation for three-dimensional weak topological insulators <cit.> or quantum Hall systems <cit.>. It will be interesting to study the different impact of symmetries on topological phases, and on the anomalous boundary states that are unique to driven systems, in nature. One way towards the realization of the proper symmetries should be offered by photonic crystals <cit.>.
This work was financed in part by Deutsche Forschungsgemeinschaft through SFB 652. B.H. was funded by the federal state of Mecklenburg-West Pomerania through a postgraduate scholarship within the International Helmholtz Graduate School for Plasma Physics.

§ DERIVATION OF EXPRESSION EQ. (<REF>) FOR THE W_3-INVARIANT

We here give the details of the derivation of Eq. (<ref>). In slightly different notation, Eqs. (3.18) and (5.4) in Ref. HAF17 yield the expression
W_3(ϵ) = (1/2π) ∑_ν=1^n [ ∫_0^T ∬_ℬ ( ∂^α F_α^ν(k⃗,t) ) ϵ^ν(k⃗,t) dk_1 dk_2 dt + ∬_ℬ F_3^ν(k⃗, T) ( i log_ϵ e^-i ϵ^ν(k⃗,T) - ϵ^ν(k⃗,T) ) dk_1 dk_2 + ∬_ℬ F_3^ν(k⃗, 0) ϵ^ν(k⃗,0) dk_1 dk_2 ]
for the W_3-invariant from Ref. PhysRevX.3.031005. It is written as an integral of the Berry curvature
F_α^ν(k⃗,t) = (i/2π) ϵ_αβγ ∂^β ( s⃗^ν(k⃗,t)^† ∂^γ s⃗^ν(k⃗,t) ),
which involves the eigenvectors s⃗^ν(k⃗,t) and the quasienergies ϵ^ν(k⃗, t) of the different bands of the Floquet-Bloch propagator U(k⃗, t). Both quantities are obtained from diagonalization of U(k⃗,t) as
U(k⃗, t) = ∑_ν=1^n e^-i ϵ^ν(k⃗, t) | s⃗^ν(k⃗,t) ⟩⟨ s⃗^ν(k⃗,t) |.
For the above expression to make sense, we assume continuous quasienergies ϵ^ν(k⃗, t). In Eq. (<ref>), ϵ_αβγ is the antisymmetric Levi-Civita tensor, the indices α, β, γ run over permutations of the parameters k_1, k_2, t of U(·), and summation over repeated indices is implied. In all expressions, e.g., for F^ν_3, we choose t as the third coordinate. The integration is over one period 0 ≤ t ≤ T and over the two-dimensional Brillouin zone ℬ. The invariant W_3(ϵ) depends on the quasienergy ϵ within a gap through the second term, the boundary term at t=T, where the branch cut of the complex logarithm log_ϵ(·) is chosen along the line from zero through e^-iϵ. The above expression, which is the starting point for the construction of the algorithm in Ref. HAF17, is not fully suitable for the present study because it is formulated with respect to an absolute reference point ϵ = 0. Instead, we seek an expression where all quantities are computed relative to the quasienergy ϵ of the gap under consideration. To obtain this expression, note that the divergence of the Berry curvature F_α^ν(k⃗,t) is non-zero only <cit.> at a degeneracy point d⃗_i of U(k⃗,t). At such a point, it is ∂^α F_α^ν(k⃗_i,t_i) = C^ν(d⃗_i) δ(k⃗-k⃗_i, t-t_i), where C^ν(d⃗_i) = ∮_𝒮(d⃗_i) F_α^ν dS^α, with 𝒮(d⃗_i) a small surface around d⃗_i. The quantity C^ν(d⃗_i) is an integer, which can be interpreted as the topological charge of the degeneracy point in band ν (cf. Ref. 1367-2630-17-12-125014). The net charge of a degeneracy point is zero, that is C^ν(d⃗_i) = - C^μ(d⃗_i) for the two bands μ, ν that touch at d⃗_i. We can now replace the first term in Eq. (<ref>) by a sum over all degeneracy points i. Each degeneracy point gives a contribution of the form
C^ν(d⃗_i) ( ϵ^ν(k⃗_i, t_i) + Δ_i ) + C^μ(d⃗_i) ( ϵ^μ(k⃗_i, t_i) + Δ_i ),
where we can include a shift Δ_i that cancels because of C^ν(d⃗_i) = - C^μ(d⃗_i). We choose Δ_i such that ϵ^ν(k⃗_i,t_i) + Δ_i = ⌈(ϵ^ν(k⃗_i,t_i)-ϵ)/(2π)⌉, with the ceiling function ⌈·⌉ (i.e., rounding up to the next integer). Then, it is also ϵ^μ(k⃗_i,t_i) + Δ_i = ⌈(ϵ^μ(k⃗_i,t_i)-ϵ)/(2π)⌉, because at a degeneracy point ϵ^ν(k⃗_i,t_i) and ϵ^μ(k⃗_i,t_i) differ by a multiple of 2π. For the second term in Eq. (<ref>), we note that the factor involving the quasienergies does not depend on k⃗ when ϵ is in a gap. Therefore, it can be pulled out of the integral.
Evaluation of the complex logarithm, with the branch cut at the right position, gives
( i log_ϵ e^-i ϵ^ν(k⃗,T) - ϵ^ν(k⃗,T) ) / 2π = ⌈ (ϵ - ϵ^ν(k⃗,T)) / 2π ⌉.
For the third term in Eq. (<ref>), we have similarly that ϵ^ν(k⃗,0) does not depend on k⃗ and, because of U(k⃗,0) = 1, is in fact a multiple of 2π. Now we can sum the contribution of all degeneracy points to one band ν, and find
C^ν - C^ν_0 = ∬_ℬ ( F_3^ν(k⃗, T) - F_3^ν(k⃗, 0) ) dk_1 dk_2 = ∑_i=1^dp C^ν(d⃗_i),
where C^ν and C^ν_0 are the Chern numbers of band ν at the final time t=T and the initial time t=0. Putting everything together, we arrive at
W_3(ϵ) = ∑_ν=1^n [ ∑_i=1^dp N^ν(ϵ,d⃗_i) C^ν(d⃗_i) + ⌈ (ϵ - ϵ^ν(k⃗,T) + ϵ^ν(k⃗, 0)) / 2π ⌉ C_0^ν ],
with
N^ν(ϵ, d⃗_i) = ⌈ (ϵ^ν(k⃗_i, t_i) - ϵ) / 2π ⌉ + ⌈ (ϵ - ϵ^ν(k⃗, T)) / 2π ⌉.
Note that these expressions are invariant under shifts ϵ^ν(·) ↦ ϵ^ν(·) + 2π m of the quasienergies of a band by multiples of 2π, as it should be. We can especially choose ϵ^ν(k⃗,0) = 0, if we prefer, for example as in Fig. 1 in the main text. For the sake of brevity, we also drop the last term and set C_0^ν=0 in the main text, as if all bands were topologically trivial at t=0. One might want to note the similarity of Eq. (<ref>) to Eq. (4.4) in Ref. HAF17, which is the basis of the algorithm presented there. Owing to this similarity, evaluation of the above expression, and also of the ℤ_2-invariants defined in the main text, is possible with that algorithm. Let us finally remark that the expression for the W_3-invariant given in Eq. (9) of Ref. 1367-2630-17-12-125014 can be recovered from our Eq. (<ref>) if we adopt the same ordering of the Floquet bands in a "natural quasienergy Brillouin zone". Specifically, we have to (a) set ϵ^ν(k⃗,0)=0, (b) impose the ordering condition ϵ^ν(k⃗,t) ≤ ϵ^ν'(k⃗,t) for ν < ν', and (c) assume that ϵ^n(k⃗,t) - ϵ^1(k⃗,t) ≤ 2π. Now suppose that the gap at ϵ separates Floquet bands m, m+1, that is ϵ^m(k⃗,T) < ϵ < ϵ^m+1(k⃗,T). In this case, Eq. (<ref>) reduces to
W_3(ϵ) = ∑_ν=1^m C^ν + ∑_ν=1^n ∑_i=1^dp ⌈ (ϵ^ν(k⃗_i, t_i) - ϵ) / 2π ⌉ C^ν(d⃗_i).
In this expression, the contributions from a degeneracy point d⃗_i that occurs between two bands 1 ≤ μ < μ+1 ≤ n, that is for ϵ^μ(k⃗_i, t_i) = ϵ^μ+1(k⃗_i, t_i), cancel: The ceiling function ⌈·⌉ has the same value for ν ∈ {μ, μ+1}, but C^μ(d⃗_i) = - C^μ+1(d⃗_i). Only the degeneracy points that occur between bands 1, n, which fulfill ϵ^1(k⃗_i, t_i) = ϵ^n(k⃗_i, t_i) - 2π, contribute: Now ⌈·⌉ = 0 for ν = 1, but ⌈·⌉ = 1 for ν = n. In Ref. 1367-2630-17-12-125014, these degeneracy points are called "zone-edge singularities". We thus obtain, under the above assumptions, an expression of the form
W_3(ϵ) = ∑_ν=1^m C^ν + ∑_i=1^dp C^n(d⃗_i),
which is, up to notational differences, Eq. (9) from Ref. 1367-2630-17-12-125014. We can thus recognize this equation as a special case of the more general Eq. (<ref>).
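The ceiling-function expression above is simple to evaluate once the degeneracy-point data are known. The following is a schematic evaluator, under the simplifying assumption C_0^ν = 0 made in the main text; the toy input values are illustrative, not taken from any of the models discussed here.

```python
import numpy as np

def N_nu(eps, eps_at_d, eps_at_T):
    """Counting factor N^nu(eps, d_i) built from the two ceiling functions."""
    return (np.ceil((eps_at_d - eps) / (2*np.pi))
            + np.ceil((eps - eps_at_T) / (2*np.pi)))

def W3(eps, contributions):
    """Sum over (degeneracy point, band) contributions; the C_0^nu boundary
    term is dropped, i.e. all bands are assumed topologically trivial at t=0.
    Each entry: (charge C^nu(d_i), eps^nu(k_i, t_i), eps^nu(k, T))."""
    return int(round(sum(c * N_nu(eps, ed, eT) for c, ed, eT in contributions)))

# toy example: one degeneracy point shared by two bands with opposite charge;
# band nu ends above the gap at eps = 0, band mu below it
pts = [(+1, 0.3, 1.0),
       (-1, 0.3, -1.0)]
print(W3(0.0, pts))   # a single gap-crossing pair contributes one unit
```

The ℤ_2-invariants of the main text are obtained from the same counting, restricted to one point per symmetric pair and taken modulo two.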
§ CHIRAL SYMMETRY WITH A MOMENTUM SHIFT K⃗_Π

In Eq. (<ref>) chiral symmetry is defined with a k⃗ ↦ k⃗ + k⃗_π momentum shift, which differs from the standard definition in the literature <cit.>,
H̃_ch(k⃗, t) = - S H̃_ch(k⃗, T-t) S^-1,
that does not involve a momentum shift. The origin of the momentum shift in Eq. (<ref>) is a bipartite even-odd sublattice symmetry assumed there. Specifically, we consider the original lattice, whose unit cells are enumerated by two indices (i,j), as being composed of the sublattices of even (i+j ≡ 0 mod 2) and odd (i+j ≡ 1 mod 2) unit cells. If the chiral symmetry operator includes an alternating sign flip for every second unit cell of the lattice, say for the odd unit cells, the sign flip translates into the shift k⃗ ↦ k⃗ + k⃗_π for the Bloch Hamiltonian. We can now consider the Bloch Hamiltonian for a 2 × 2 unit cell that comprises four unit cells of the original lattice. If we enumerate these four unit cells in the obvious way, say in the order (2i, 2j), (2i+1, 2j), (2i, 2j+1), (2i+1, 2j+1), the new Bloch Hamiltonian has the 4 × 4 block form
Ĥ(k⃗,t) = [ H_loc H_x H_y H_d; H_x H_loc H_d H_y; H_y H_d H_loc H_x; H_d H_y H_x H_loc ].
It contains diagonal blocks H_loc ≡ H_loc(t) for terms within a unit cell, and the off-diagonal blocks H_x/y ≡ H_x/y(k⃗,t) for hopping along the two lattice axes and H_d ≡ H_d(k⃗,t) for diagonal hopping. For a Hamiltonian with only nearest-neighbor hopping, the blocks H_d ≡ 0 vanish. The equality of the diagonal and off-diagonal blocks incorporated into Eq. (<ref>) follows from the translational symmetry of the original Hamiltonian. Note that we do not assume additional geometric symmetries, and allow for H_x ≠ H_y. The chiral symmetry operator for Ĥ(k⃗, t),
Ŝ = diag(S, -S, -S, S),
is block-diagonal. The plus and minus signs of the entries correspond to the alternating sign flip on the even-odd sublattice structure. If the original Hamiltonian H(k⃗, t) satisfies Eq. (<ref>), we have Ŝ Ĥ(k⃗,t) Ŝ^-1 = - Ĥ(k⃗,T-t) for the new Hamiltonian. Therefore, Ĥ(k⃗,t) fulfills the standard chiral symmetry relation (<ref>). To obtain Ĥ(k⃗,t) we have considered only translations by an even number of sites on the original lattice. Ĥ(k⃗,t) inherits additional symmetries from translations by an odd number of sites. The two symmetry operators are
T̂_x = [ 0 1 0 0; 1 0 0 0; 0 0 0 1; 0 0 1 0 ],   T̂_y = [ 0 0 1 0; 0 0 0 1; 1 0 0 0; 0 1 0 0 ].
Since [T̂_x, T̂_y] = 0, only one of the two symmetry operators is needed below. Note that if we want to interpret T̂_x/y as a translation on the original lattice some prefactors ∼ e^i k_i must be included, but since the prefactors cancel trivially in all relations we have dropped them here. We have [T̂_x/y, Ĥ(k⃗,t)] = 0, but Ŝ T̂_x/y Ŝ^-1 = - T̂_x/y.
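The operator algebra just stated is easily verified numerically. Below is a minimal sketch for the simplest case of one band per original unit cell (S = 1), so that Ŝ, T̂_x, T̂_y are plain 4 × 4 matrices; the reduction to S = 1 is our simplifying assumption.

```python
import numpy as np

# translation operators on the 2x2 super-cell, basis ordering:
# (2i,2j), (2i+1,2j), (2i,2j+1), (2i+1,2j+1)
sx = np.array([[0, 1], [1, 0]])
Tx = np.kron(np.eye(2), sx)        # swap even/odd in x
Ty = np.kron(sx, np.eye(2))        # swap even/odd in y

# chiral operator with alternating sign on the even-odd sublattice (S = 1)
S_hat = np.diag([1.0, -1.0, -1.0, 1.0])

print(np.allclose(Tx @ Ty, Ty @ Tx))          # [Tx, Ty] = 0
print(np.allclose(S_hat @ Tx @ S_hat, -Tx))   # S Tx S^{-1} = -Tx
print(np.allclose(S_hat @ Ty @ S_hat, -Ty))   # S Ty S^{-1} = -Ty
```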
The above relations carry over to the Floquet-Bloch propagator Û(k⃗, t) associated with Ĥ(k⃗,t). We have Ŝ Û_⋆(k⃗,t) Ŝ^-1 = Û_⋆^†(k⃗, t), and [T̂_x/y, Û_⋆(k⃗,t)] = 0. Now let us assume that |ψ⟩ is an eigenstate of Û_⋆(k⃗,t), to the quasienergy ϵ. The state |ζ⟩ = Ŝ|ψ⟩ is an eigenstate of Û_⋆(k⃗,t) to the negative quasienergy -ϵ. Now if ϵ = 0, π the states |ψ⟩, |ζ⟩ are degenerate. For the original Hamiltonian, with symmetry relation (<ref>), degenerate quasienergies occur at momenta k⃗, k⃗ + k⃗_π. For the Hamiltonian Ĥ, with symmetry relation (<ref>), degeneracies occur at the same momentum k⃗. For time-reversal symmetry, where a similar situation occurs at the IM, Kramers' theorem implies the orthogonality of the two degenerate states, and thus the symmetry-protection of the corresponding topological phases. For chiral symmetry, the eigenstates can be classified by means of the symmetry operator T̂_x (or T̂_y), as T̂_x|ψ⟩ = ±|ψ⟩. Now |ζ⟩ is also an eigenstate of T̂_x, with the opposite eigenvalue T̂_x|ζ⟩ = ∓|ζ⟩. This observation implies the orthogonality of |ψ⟩ and |ζ⟩. Therefore, the situation for chiral symmetry is, although for different reasons, analogous to the situation for time-reversal symmetry: In both cases symmetry-protected topological phases exist because degenerate states occur only in orthogonal pairs. We repeat that without the momentum shift k⃗_π no such argument is possible, and we should not expect that a symmetry-protected topological phase exists in 2+1 dimensional systems with that type of chiral symmetry. To support these findings with additional numerical evidence we show in Figs. <ref>, <ref> invariants and boundary states of the periodically kicked Harper model introduced in Ref. PhysRevB.93.075405 for the study of topological phases with chiral symmetry. This model is equal to the Harper model of Eq. (<ref>), now with the time-dependence
J_x(t) = J̃_x ∑_m=-∞^∞ δ(t - mT/2).
In Fig. <ref>, with parameters α=1/3, J̃_x=π, J_y=π/3 that correspond to the central panel of Fig. 1 in Ref. PhysRevB.93.075405, we observe one pair of boundary states with opposite chirality in accordance with the non-zero value W_ch(π) of the W_ch-invariant. This pair exists independently of the boundary orientation. Note that in the gaps between ϵ = 0, π, which have no special significance for chiral symmetry, the number of unpaired boundary states is given by the W_3-invariant. Having changed the parameter J̃_x to J̃_x = 3π/2 in Fig. <ref>, which corresponds to the right panel of Fig. 1 in Ref. PhysRevB.93.075405, gaps have closed and reopened. The values of the invariants have changed, and now W_ch(π)=0. Since W_ch(π) is a ℤ_2-invariant we expect an even number of pairs of boundary states with opposite chirality. Indeed, we observe two pairs on a boundary along the x-axis (central panel), and zero pairs on a boundary along the y-axis (right panel). The two pairs are not protected, and could be annihilated by variation of additional model parameters <cit.>. These results agree with Ref. PhysRevB.93.075405, and with our statements in the main text. In particular, we observe the existence or absence of symmetry-protected boundary states in dependence on the value of the ℤ_2-invariant W_ch(ϵ), but independently of the boundary orientation.
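For delta-function kicks the Floquet propagator can be written in closed form as an alternating product of kick and free-evolution factors. The sketch below does this for the kicked Harper model and checks the chiral quasienergy pairing; note that the placement of the kicks within the period (here: full kicks at t = 0 and t = T/2) is our assumption, and different conventions differ by a shift of the time origin.

```python
import numpy as np
from scipy.linalg import expm

n, alpha = 3, 1/3
X = lambda kx: np.diag([2*np.cos(kx + 2*np.pi*alpha*j) for j in range(n)])

def Y(ky, Jy):   # Bloch matrix of the y-hopping term
    H = np.zeros((n, n), dtype=complex)
    for j in range(n):
        ph = np.exp(1j*ky) if j == n - 1 else 1.0
        H[(j + 1) % n, j] += Jy * ph
        H[j, (j + 1) % n] += Jy * np.conj(ph)
    return H

def U_kicked(kx, ky, Jx=np.pi, Jy=np.pi/3, T=1.0):
    # each delta kick integrates to exp(-i Jx X); rightmost factor acts first
    K = expm(-1j * Jx * X(kx))
    F = expm(-1j * Y(ky, Jy) * T/2)
    return F @ K @ F @ K

wrap = lambda x: (x + np.pi) % (2*np.pi) - np.pi
e = np.sort(np.angle(np.linalg.eigvals(U_kicked(0.4, 0.9))))
e_s = np.angle(np.linalg.eigvals(U_kicked(0.4 + np.pi, 0.9 + np.pi)))
print(e, np.sort(wrap(-e_s)))   # chiral pairing of the quasienergy spectra
```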
§ DEGENERACY POINTS AND THE KANE-MELE INVARIANT

In the time-reversal symmetric case we can define an effective Brillouin zone ℰ such that either k⃗ ∈ ℰ or -k⃗ ∈ ℰ. Then, the sum over half of the degeneracy points in Eq. (6) for the W_tr-invariant in the main text can be performed by counting exactly the degeneracy points d⃗_i with k⃗_i ∈ ℰ. Including the time coordinate, these degeneracy points lie in the box ℬ𝒳 = ℰ × [0,T]. Now consider a single Kramers pair of bands 2ν-1, 2ν. The sum over all degeneracy points of this pair can be written as
∑_i=1^dp/2 ( C^2ν-1(d⃗_i) + C^2ν(d⃗_i) ) = ∭_ℬ𝒳 ( ∂^α F_α^2ν-1(k⃗,t) + ∂^α F_α^2ν(k⃗,t) ) dk_1 dk_2 dt.
With Gauss's theorem we can convert this integral into an integral over the surface of the box ℬ𝒳, which is the union of the two faces ℱ_0 = ℰ × {0}, ℱ_T = ℰ × {T} and the cylinder 𝒞 = ∂ℰ × [0,T] that contains the points on the boundary curve ∂ℰ of ℰ. With Stokes' theorem, the integral of the Berry curvature F_α^2ν-1, F_α^2ν over 𝒞 can be converted further into a line integral of the Berry connection A^α,2ν-1, A^α,2ν over the two curves ∂ℰ × {0}, ∂ℰ × {T}. Recall that in terms of the eigenvectors of U_⋆(·), the Berry connection is A^α,ν(k⃗,t) = (i/2π) [s⃗^ν(k⃗,t)]^† ∂^α s⃗^ν(k⃗,t). Since the Berry connection is gauge-dependent, we here have to impose a time-reversal constraint on ∂ℰ, namely
s⃗^2ν-1(-k⃗,t) = Θ s⃗^2ν(k⃗, t),   s⃗^2ν(-k⃗,t) = -Θ s⃗^2ν-1(k⃗,t),
to obtain the Kane-Mele invariants in a manner analogous to Ref. PhysRevB.74.195312. Note that the s⃗(·) in this expression are the eigenvectors of the time-symmetrized propagator U_⋆(·), such that the time argument t is unchanged while k⃗ is flipped. We arrive at the relation (everything taken modulo two)
∑_i=1^dp/2 ( C^2ν-1(d⃗_i) + C^2ν(d⃗_i) ) ≡ ∬_ℰ ( F_α^2ν-1(k⃗,T) + F_α^2ν(k⃗,T) ) dk_1 dk_2 - ∫_∂ℰ ( A^α,2ν-1(k⃗,T) + A^α,2ν(k⃗,T) ) dk_α - ∬_ℰ ( F_α^2ν-1(k⃗,0) + F_α^2ν(k⃗,0) ) dk_1 dk_2 + ∫_∂ℰ ( A^α,2ν-1(k⃗,0) + A^α,2ν(k⃗,0) ) dk_α ≡ KM^ν(T) - KM^ν(0),
and recognize <cit.> the Kane-Mele invariants KM^ν(0) and KM^ν(T) of the Kramers pair 2ν-1, 2ν at t=0 and t=T. Therefore, the Kane-Mele invariants can be expressed as the sum over half of the degeneracy points of each Kramers pair. This observation justifies the corresponding statements in the main text. For the sake of brevity of the presentation, we there assume KM^ν(0)=0, as if the Kramers pairs were topologically trivial at t=0.

§ PARTICLE-HOLE SYMMETRIC BOUNDARY STATES ON A HEXAGONAL LATTICE

For a hexagonal lattice as in Fig. <ref>, zigzag boundaries occur along directions given by primitive lattice vectors a⃗_1, a⃗_2, a⃗_3. Armchair boundaries occur along directions given by nearest-neighbor vectors δ_1, δ_2, δ_3. Note that the primitive translation vector for an armchair boundary is 3δ_i. In contrast to, say, the situation for a square lattice, both boundary types exist in three inequivalent orientations. This necessitates the more detailed analysis provided here. To evaluate Eq. (<ref>) for each boundary, we first need to project the IM M⃗_1, M⃗_2, M⃗_3 onto the boundary direction (the projection of M⃗_0 results in zero). For zigzag boundaries, we have a⃗_i · M⃗_i = 0, and a⃗_i · M⃗_m = π for the remaining two IM with m ≠ i. For armchair boundaries we obtain essentially the same result: 3δ_i · M⃗_i = 0, and 3δ_i · M⃗_m = π for m ≠ i. Note that all values are given modulo 2π. We recognize that for each boundary orientation two IM will contribute in Eq. (<ref>) for given momentum k_a⃗ = 0, π.
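The 0-π pattern of the projections can be generated directly from the lattice geometry. The following is a short sketch, with an explicit (and, up to relabeling of the M⃗_m, conventional) choice of honeycomb primitive vectors; it prints, for each boundary direction, one vanishing projection and two equal to π, as stated above.

```python
import numpy as np

# honeycomb primitive vectors a1, a2 (a3 = a2 - a1) and reciprocal
# vectors b1, b2 with a_i . b_j = 2 pi delta_ij
a1 = np.array([-np.sqrt(3)/2, 1.5])
a2 = np.array([ np.sqrt(3)/2, 1.5])
A = np.array([a1, a2])
b1, b2 = 2*np.pi * np.linalg.inv(A).T          # rows are b1, b2
IM = [np.zeros(2), b1/2, b2/2, (b1 + b2)/2]    # M_0 ... M_3

for label, a in [('a1', a1), ('a2', a2), ('a3', a2 - a1)]:
    proj = [np.dot(a, M) % (2*np.pi) for M in IM]
    # besides M_0, exactly one IM projects to 0 (mod 2 pi), two project to pi
    print(label, np.round(np.array(proj)/np.pi, 6))
```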
To evaluate Eq. (<ref>) we further need to determine the contribution N^ν(ϵ, d⃗_i) C^ν(d⃗_i) from unpaired degeneracy points at each IM M⃗_0, …, M⃗_3. For the model from Eq. (<ref>), in the situation of Fig. <ref> and Figs. <ref>, <ref>, these values are given in Tab. <ref>. They have been determined from the propagator U(k⃗, t) for 0 ≤ t ≤ T, using the algorithm from Ref. HAF17. With the information from Tab. <ref> we can now immediately evaluate Eq. (<ref>). In the gap at ϵ=π we obtain the W_ph-invariants given in the first column of Tab. <ref>. Note that because of W_3(π) = 0 we have W_ph^0(π) = W_ph^π(π). Comparison with Figs. <ref>, <ref>, where the boundary states are shown explicitly, confirms the correctness of the W_ph-invariants. In the gap at ϵ=0 another complication arises due to the possibility of boundary states for t=0. In the upper rows of Figs. <ref>, <ref> we show the boundary spectrum of the initial Hamiltonian H_ph(t=0), which is the starting point for the subsequent evolution described by U(·). Depending on the boundary orientation, H_ph(t=0) can possess a boundary state at ϵ=0. In the present situation, where H_ph(t=0) is particle-hole and (as a real-valued Hamiltonian) time-reversal symmetric, the dispersion of the boundary state is perfectly flat. Recall that U(k⃗, t), on the other hand, is not time-reversal symmetric according to Eq. (<ref>). The initial boundary states must be included in Eq. (<ref>), just as we had to do for the W_3-invariant in Eq. (<ref>) if the bands are not topologically trivial at t=0. Any initial boundary state changes the corresponding W_ph-invariant by one, that is, through counting modulo two, flips its value between zero and one. The number of initial boundary states N(t=0) in Tab. <ref> can be taken from the upper rows in Figs. <ref>, <ref>. The contribution from the degeneracy points of U(k⃗, t) is given in the third column of this table. Summation of both numbers now gives the W_ph-invariants for the gap at ϵ = 0. Note that because of W_3(0) = 0 we have again W_ph^0(0) = W_ph^π(0). Comparison with Figs. <ref>, <ref> confirms the correctness of the W_ph-invariants, also in cases where initial boundary states have to be taken into account. In the main text, for Fig. <ref>, we have selected two boundaries without initial boundary states (namely, the third column from Fig. <ref> and the first column in Fig. <ref>), which allowed for a straightforward discussion. With the present results for all boundaries, we recognize the full complexity associated with the `weak' topological phase. For the gap at ϵ=π, initial boundary states do not play a role (they simply do not exist outside of the spectrum of H_ph(t=0)). According to the 0-π pattern of the projections a⃗_i · M⃗_m or δ_i · M⃗_m, we expect that in a `weak' phase symmetry-protected boundary states exist for two out of three boundary orientations. This is true for both zigzag and armchair boundaries in Figs. <ref>, <ref>. For the gap at ϵ=0, the "two-out-of-three" rule does not apply because of the initial boundary states. For armchair boundaries, symmetry-protected boundary states do not occur for any boundary orientation. For zigzag boundaries, symmetry-protected boundary states occur for every boundary orientation. We would like to stress that this effect is not a simple consequence of the different geometry of armchair and zigzag boundaries. In particular, as Figs. <ref>, <ref> show, no immediate relation between the appearance of boundary states at t=0 and at t=T exists. Unless one computes the full W_ph-invariants, which keep track of the creation and annihilation of symmetry-protected states during time-evolution, the entire situation remains obscure.

[PhysRevLett.45.494] K. v. Klitzing, G. Dorda, and M. Pepper, Phys. Rev. Lett. 45, 494 (1980).
[PhysRevLett.49.405] D. J. Thouless, M. Kohmoto, M. P. Nightingale, and M. den Nijs, Phys. Rev. Lett. 49, 405 (1982).
[PhysRevLett.95.146802] C. L. Kane and E. J. Mele, Phys. Rev. Lett. 95, 146802 (2005).
[RevModPhys.82.3045] M. Z. Hasan and C. L. Kane, Rev. Mod. Phys. 82, 3045 (2010).
[Konig766] M. König, S. Wiedmann, C. Brüne, A. Roth, H. Buhmann, L. W. Molenkamp, X.-L. Qi, and S.-C. Zhang, Science 318, 766 (2007).
[PhysRevLett.98.106803] L. Fu, C. L. Kane, and E. J. Mele, Phys. Rev. Lett. 98, 106803 (2007).
[PhysRevB.82.235114] T. Kitagawa, E. Berg, M. Rudner, and E. Demler, Phys. Rev. B 82, 235114 (2010).
[Floquettopological] N. H. Lindner, G. Refael, and V. Galitski, Nat. Phys. 7, 490 (2011).
[PhysRevB.84.235108] T. Kitagawa, T. Oka, A. Brataas, L. Fu, and E. Demler, Phys. Rev. B 84, 235108 (2011).
[Flaschner] N. Fläschner, B. S. Rem, M. Tarnowski, D. Vogel, D.-S. Lühmann, K. Sengstock, and C. Weitenberg, Science 352, 1091 (2016).
[PhysRevX.3.031005] M. S. Rudner, N. H. Lindner, E. Berg, and M. Levin, Phys. Rev. X 3, 031005 (2013).
[PhysRevLett.116.176401] J. Klinovaja, P. Stano, and D. Loss, Phys. Rev. Lett. 116, 176401 (2016).
[Wang453] Y. H. Wang, H. Steinberg, P. Jarillo-Herrero, and N. Gedik, Science 342, 453 (2013).
[PhysRevB.91.241404] H. L. Calvo, L. E. F. Foa Torres, P. M. Perez-Piskunow, C. A. Balseiro, and G. Usaj, Phys. Rev. B 91, 241404 (2015).
[Wang_SR] Y. Wang, Y. Liu, and B. Wang, Sci. Rep. 7, 41644 (2017).
[nphoton.2014.248] L. Lu, J. D. Joannopoulos, and M. Soljacic, Nat. Photon. 8, 821 (2014).
[nature12066] M. C. Rechtsman, J. M. Zeuner, Y. Plotnik, Y. Lumer, D. Podolsky, F. Dreisow, S. Nolte, M. Segev, and A. Szameit, Nature 496, 196 (2013).
[anomalous] L. J. Maczewsky, J. M. Zeuner, S. Nolte, and A. Szameit, Nat. Comm. 8, 13756 (2017).
[anomalous_2] S. Mukherjee, A. Spracklen, M. Valiente, E. Andersson, P. Öhberg, N. Goldman, and R. R. Thomson, Nat. Comm. 8, 13918 (2017).
[PhysRevLett.112.026805] M. Lababidi, I. I. Satija, and E. Zhao, Phys. Rev. Lett. 112, 026805 (2014).
[PhysRevB.90.195419] D. Y. H. Ho and J. Gong, Phys. Rev. B 90, 195419 (2014).
[PhysRevB.90.205108] Z. Zhou, I. I. Satija, and E. Zhao, Phys. Rev. B 90, 205108 (2014).
[Zhou2014] L. Zhou, H. Wang, Y. D. Ho, and J. Gong, Eur. Phys. J. B 87, 1 (2014).
[PhysRevLett.114.106806] D. Carpentier, P. Delplace, M. Fruchart, and K. Gawędzki, Phys. Rev. Lett. 114, 106806 (2015).
[1367-2630-17-12-125014] F. Nathan and M. S. Rudner, New J. Phys. 17, 125014 (2015).
[PhysRevB.93.075405] I. C. Fulga and M. Maksymenko, Phys. Rev. B 93, 075405 (2016).
[PhysRevB.96.155118] R. Roy and F. Harper, Phys. Rev. B 96, 155118 (2017).
[PhysRevB.96.195303] S. Yao, Z. Yan, and Z. Wang, Phys. Rev. B 96, 195303 (2017).
[PhysRevB.93.115429] M. Fruchart, Phys. Rev. B 93, 115429 (2016).
[HAF17] B. Höckendorf, A. Alvermann, and H. Fehske, J. Phys. A 50, 295301 (2017).
[driven_KM] Z. Yan, B. Li, X. Yang, and S. Wan, Sci. Rep. 5, 16197 (2015).
[CARPENTIER2015779] D. Carpentier, P. Delplace, M. Fruchart, K. Gawędzki, and C. Tauber, Nucl. Phys. B 896, 779 (2015).
[PhysRevB.89.104523] I. Seroussi, E. Berg, and Y. Oreg, Phys. Rev. B 89, 104523 (2014).
[PhysRevB.45.13488] M. Kohmoto, B. I. Halperin, and Y.-S. Wu, Phys. Rev. B 45, 13488 (1992).
[Berry45] M. V. Berry, Proc. R. Soc. London, Ser. A 392, 45 (1984).
[PhysRevB.74.195312] L. Fu and C. L. Kane, Phys. Rev. B 74, 195312 (2006).
http://arxiv.org/abs/1708.07420v4
{ "authors": [ "Bastian Höckendorf", "Andreas Alvermann", "Holger Fehske" ], "categories": [ "cond-mat.mes-hall" ], "primary_category": "cond-mat.mes-hall", "published": "20170824135424", "title": "Topological invariants for Floquet-Bloch systems with chiral, time-reversal, or particle-hole symmetry" }
Towards parton distribution functions with small-x resummation: HELL 2.0

Marco Bonvini^a, Simone Marzani^b and Claudio Muselli^c

^a Dipartimento di Fisica, Sapienza Università di Roma and INFN, Sezione di Roma 1, Piazzale Aldo Moro 5, 00185 Roma, Italy
^b Dipartimento di Fisica, Università di Genova and INFN, Sezione di Genova, Via Dodecaneso 33, 16146, Italy
^c Tif Lab, Dipartimento di Fisica, Università di Milano and INFN, Sezione di Milano, Via Celoria 16, 20133 Milano, Italy

https://arxiv.org/abs/1708.07510
[email protected], [email protected], [email protected]

August 24, 2017
==========================================================================

In global fits of parton distribution functions (PDFs) a large fraction of data points, mostly from the HERA collider, lies in a region of x and Q^2 that is sensitive to small-x logarithmic enhancements. Thus, the proper theoretical description of these data requires the inclusion of small-x resummation. In this work we provide all the necessary ingredients to perform a PDF fit to deep-inelastic scattering (DIS) data which includes small-x resummation in the evolution of PDFs and in the computation of DIS structure functions. To this purpose, we not only include the resummation of DIS massless structure functions, but we also consider the production of a massive final state (e.g. a charm quark), and the consistent resummation of mass collinear logarithms through the implementation of a variable flavour number scheme at small x. As a result, we perform the small-x resummation of the matching conditions in PDF evolution at heavy flavour thresholds. The resummed results are accurate at next-to-leading logarithmic (NLL) accuracy and matched, for the first time, to next-to-next-to-leading order (NNLO). Furthermore, we improve on our previous work by considering a novel all-order treatment of running coupling contributions. These results, which are implemented in a new release of HELL, version 2.0, will make it possible to fit PDFs from DIS data at the highest possible theoretical accuracy, NNLO+NLL, thus providing an important step forward towards precision determination of PDFs and consequently precision phenomenology at the LHC and beyond.

§ INTRODUCTION

The outstanding quality of the CERN Large Hadron Collider (LHC) data continuously challenges the particle physics theoretical community to perform refined calculations with uncertainties comparable to the ones of the experimental results. Consequently, perturbative predictions for LHC processes nowadays often include radiative corrections in QCD at next-to-next-to-leading order (NNLO) and, in some cases, N^3LO <cit.>. These remarkable calculations have a tremendous impact on many aspects of LHC phenomenology. This can be seen, for instance, in the context of the determination of parton distribution functions (PDFs), where the inclusion in the fit of data points describing the transverse momentum of Drell-Yan lepton pairs <cit.> or various differential distributions in top pair production <cit.> has recently become possible thanks to the existence of fully differential calculations at NNLO. Moreover, the recent completion of the NNLO QCD corrections to jet production <cit.> paves the way for global determinations of parton densities which are truly NNLO. On the other hand, it is well known that fixed-order calculations fail to provide reliable results in regions of phase space which are characterized by the presence of two or more disparate energy scales.
In such cases, large logarithmic contributions appear to any order in perturbation theory, and they must be accounted for to all orders. Resummed calculations have also seen remarkable progress in recent years, and for many observables the resummation of next-to-next-to-leading logarithms (NNLL) of soft-collinear origin has become standard, even reaching N^3LL for a few observables <cit.>. Traditionally, resummation is not included in the calculations that are employed in PDF fits, with the argument that the observables which are considered are rather inclusive and therefore not much sensitive to logarithmic enhancements. However, the LHC is exploring a vast kinematic region in both the momentum transfer Q^2 and the Bjorken variable x=Q^2/S, where √(S) is the centre-of-mass energy. It is therefore important to assess the role of logarithmic corrections both in the small-x, i.e. high-energy, regime and at large x, i.e. in the threshold region. For instance, the production of a lepton pair via the Drell-Yan mechanism, which is measured by the LHCb collaboration in the forward region and at low values of the leptons' invariant mass, probes values of x down to 10^-5÷10^-6. At the other end of the spectrum, searches for new resonances at high mass are sensitive to PDFs in the x∼10^-1 region. In the past, some of us included threshold resummation in PDF fits <cit.> (see also Ref. <cit.>) and performed dedicated studies that included threshold resummation in both coefficient functions and PDFs in the context of the production of heavy supersymmetric particles <cit.>. It has to be noted that the inclusion of threshold resummation in PDF fits did not pose particular challenges because in the widely used \overline{MS} scheme the splitting functions that govern DGLAP evolution are not enhanced at large x <cit.>, and the effect of threshold resummation can be included through a K-factor. The situation is radically different if we consider small-x resummation, where both coefficient functions and splitting functions receive single-logarithmic corrections to all orders in perturbation theory. High-energy resummation of PDF evolution is based on the BFKL equation <cit.>. However, the correct inclusion of LL and NLL corrections to DGLAP splitting functions is far from trivial. This problem received great attention in the 1990s and early 2000s by various groups, see Refs. <cit.>, Refs. <cit.>, and Refs. <cit.> (for recent work in the context of effective theories, see <cit.>). High-energy resummation of partonic cross sections is based on the so-called k_T-factorization theorem <cit.>, which has been used to compute high-energy cross sections for various processes: deep-inelastic scattering (DIS) <cit.>, heavy quark production <cit.>, direct photon production <cit.>, Drell-Yan <cit.>, and Higgs production <cit.>. The formalism has been subsequently extended to rapidity <cit.> and transverse momentum distributions <cit.>. Despite the wealth of calculations, it has proven very difficult to perform resummed phenomenology.
Recently, some of us overcame these difficulties and developed a framework and a public code named HELL (High-Energy Large Logarithms) <cit.>, which is based on the formalism developed by Altarelli, Ball and Forte (ABF) <cit.>, but contains significant improvements. In this paper we further improve on our recent work, with the goal of providing all the necessary theoretical ingredients and numerical tools to perform a PDF fit which includes small-x resummation in both PDF evolution and in DIS partonic coefficient functions, including the correct treatment of the transitions at the heavy flavour thresholds, at NLL accuracy matched, for the first time, to NNLO. This is important because DIS data still represent the backbone of any PDF fit, and HERA data <cit.> do explore the small-x region. Achieving this target requires two ingredients which are the main results of this work. The first is the construction of a so-called "variable flavour number scheme" at the resummed level <cit.>, which provides coefficient functions with power-behaving mass effects and resummation of mass collinear logarithms, as well as describing the transition of PDF evolution at heavy quark thresholds. The second is the matching of the NLL resummed splitting functions to their fixed-order counterparts, which is realized for the first time up to NNLO. This requires the expansion in powers of α_s of the resummed result, which proved to be non-trivial because of the way the resummed kernels are constructed. We also correct an error present in our previous paper <cit.> that we inherited from the original ABF work <cit.>, which has however a small phenomenological impact. More importantly, we derive a new way to perform the resummation of running coupling effects, which streamlines the construction of the resummed evolution kernels and solves an issue in the construction of its α_s expansion. All these improvements are included in a new release of HELL, version 2.0, which is publicly available at the webpage https://www.ge.infn.it/~bonvini/hell/. We point out that the investigation of the impact of the resummation on the NNLO fixed-order splitting functions is potentially of great phenomenological interest. It is well known that, due to accidental zeros, the effect of small-x logarithms in the evolution is mild at NLO but stronger at NNLO, and will be even stronger (with two extra logarithms) at N^3LO. On top of this, comparison of theoretical predictions with experimental data suggests that fixed-order NLO theory describes the small-x region better than NNLO, see e.g. <cit.>. Indeed, the final resummed anomalous dimensions tend to have a shape which is somewhat closer to NLO than to NNLO, as the small-x growth is greatly reduced once resummation is performed, see e.g. <cit.>. Therefore, having the possibility of using NNLO theory stabilized at small x with high-energy resummation can provide a significant improvement in the description of the data. This is indeed observed in preliminary applications of our results <cit.>. Furthermore, a reliable theory of DIS at low x finds interesting applications beyond collider physics. For instance, it is a key ingredient in the description of ultra high-energy neutrinos from cosmic rays, see e.g. <cit.>. This paper is organized as follows. In Sect. <ref> we discuss the implementation of a variable flavour number scheme in the context of small-x resummation, while the details of resummation of DGLAP evolution and its expansion are collected in the two following sections. Specifically, in Sect.
<ref> we present our new realization of the resummation of running coupling contributions, and later in Sect. <ref> we perform the perturbative expansion of the various ingredients and construct our final resummed and matched splitting functions. We present our numerical results in Sect. <ref>, before concluding in Sect. <ref>. In Appendix <ref> we provide analytical results for the off-shell DIS partonic cross section with mass dependence, while in Appendix <ref> we collect some technical details on the actual implementation of the resummation in HELL.

§ RESUMMATION OF DEEP-INELASTIC SCATTERING STRUCTURE FUNCTIONS

The standard way of describing the deep-inelastic scattering of an electron off a proton is to express the cross section in terms of structure functions that depend on the Bjorken variable x and the momentum transfer Q^2,
F_a(x,Q^2) = x ∑_i ∫_x^1 (dz/z) C_a,i(x/z, m_c^2/Q^2, m_b^2/Q^2, m_t^2/Q^2, α_s) f_i(z,Q^2),
where the sum runs over all active flavours, i.e. all the partons for which we consider a parton density. The number of active partons depends on the choice of factorization scheme and will be discussed in Sect. <ref>. Note that the coefficient functions also depend on the charges of the quarks that strike the off-shell boson (left understood), as well as on the heavy-quark masses, as indicated. For simplicity, in the above equation, we have set both renormalization and factorization scales equal to the hard scale, μ_R^2 = μ_F^2 = Q^2, so α_s = α_s(Q^2). Finally, the index a denotes the type of structure function under consideration. In our case, we will mostly consider either F_2 or the longitudinal structure function F_L. Note that when Q^2 ∼ m_Z^2, we also have a non-negligible contribution from the parity-odd structure function F_3. This contribution is not logarithmically enhanced at small-x in the massless case <cit.>; this remains true for neutral current DIS with massive quarks, however, in charged current DIS with massive quarks a small-x logarithmic enhancement appears. To our knowledge, the resummation of these logarithms in F_3 was never considered so far. In order to diagonalize the convolution in Eq. (<ref>) we consider Mellin moments of the structure functions
F_a(N,Q^2) ≡ ∫_0^1 dx x^N-1 F_a(x,Q^2) = ∑_i C_a,i(N, m_c^2/Q^2, m_b^2/Q^2, m_t^2/Q^2, α_s) f_i(N,Q^2),
where, as often done in the context of small-x resummation, we have introduced a non-standard definition for the moments of the coefficient functions and of the parton densities:
C_a,i(N, m_c^2/Q^2, m_b^2/Q^2, m_t^2/Q^2, α_s) = ∫_0^1 dz z^N C_a,i(z, m_c^2/Q^2, m_b^2/Q^2, m_t^2/Q^2, α_s),
f_i(N, Q^2) = ∫_0^1 dz z^N f_i(z, Q^2).
The last equation implies that the DGLAP anomalous dimensions are defined as
γ_ij(N,α_s) = ∫_0^1 dz z^N P_ij(z,α_s),
where P_ij are the usual Altarelli-Parisi splitting functions. In particular, in momentum space, the leading logarithmic (LL) behaviour at small-z to any order n>0 in perturbation theory is α_s^n P^(n-1)_ij ∼ α_s^n (1/z) ln^n-1(1/z). In Mellin space these logarithms are mapped into poles in N=0, which results in the following LL behaviour for the anomalous dimension: α_s^n γ^(n-1)_ij ∼ (α_s/N)^n. In practice, not every entry of the anomalous dimension matrix is LL at small-x. On the other hand, the behaviour of the DIS coefficient functions in Mellin space is α_s^n C^(n)_a,i ∼ α_s (α_s/N)^n-1, i.e. the enhancement is at most next-to-leading logarithmic (NLL). Note that some care has to be taken when considering the LO contribution C^(0)_a,i, which is an O(α_s^0) constant in Mellin space, and hence formally LL.
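The dictionary between small-z logarithms and poles at N=0 can be checked numerically with the shifted moments defined above, for which ∫_0^1 dz z^N (1/z) ln^k(1/z) = k!/N^(k+1). Here is a minimal sketch (the value of N is an arbitrary illustrative choice):

```python
import numpy as np
from math import factorial
from scipy.integrate import quad

def moment_of_log(N, k):
    """Shifted Mellin moment of (1/z) ln^k(1/z): int_0^1 dz z^N f(z)."""
    val, _ = quad(lambda z: z**N * np.log(1/z)**k / z, 0, 1, limit=200)
    return val

N = 0.5
for k in range(4):
    # numerical moment vs. the analytic pole k!/N^(k+1) at N = 0
    print(k, moment_of_log(N, k), factorial(k) / N**(k + 1))
```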
In this paper we construct the NLL resummation of DIS structure functions at small-x. In order to achieve this goal, we resum the first two towers of logarithmic contributions to the splitting functions, while we consider only the first non-vanishing tower of logarithmic contributions to the partonic coefficient functions. Note that in a previous work <cit.> we called this NLL resummation in DIS coefficient functions just LL, underlining the fact that it is the leading non-vanishing logarithmic enhancement. We refer to the counting of this previous work as relative logarithmic counting (i.e. relative to the leading non-vanishing logarithmic enhancement), while the one adopted here as absolute counting (i.e. using the overall powers of α_s and 1/N). Henceforth, unless explicitly stated, we are going to work in Mellin space and leave the dependence on the N variable, as well as the other variables, understood. As common in studies of DIS, we perform a flavour decomposition. For our purposes it is enough to separate the structure functions into a singlet and non-singlet component
F_a = F_a^S + F_a^NS.
The singlet structure functions contain both gluon and quark (singlet) contributions
F_a^S = C_a,g f_g + C^S_a,q f_S,
where f_S = ∑_i=1^n_f (f_q_i + f_q̅_i), n_f denoting the number of active quark flavours. The non-singlet structure function instead reads, see e.g. <cit.>,
F_a^NS = C^NS_a,q ∑_i=1^n_f e_i^2 [ (f_q_i + f_q̅_i) - (1/n_f) f_S ],
where e_i is the electric charge of the quark q_i, i.e. its coupling to the photon.[Here we assume that the DIS interaction is mediated by a photon, thus ignoring a possible contribution from the Z boson, or the charged-current case in which a W is exchanged. This choice makes the discussion here and in the following somewhat simpler, and allows to focus on the details of the resummation. Most of what follows does not depend on this assumption, as small-x resummation affects just the singlet: in particular, the small-x logarithmic content of the singlet and pure-singlet coefficients, Eq. (<ref>), is identical (for a discussion about small-x enhancements in the non-singlet sector see, for instance, Ref. <cit.>). The generalization to a generic vector boson is rather straightforward, and will be presented in Sect. <ref>.] Furthermore, we can collect the terms proportional to the singlet PDF, obtaining
F_a = C_a,g f_g + C^PS_a,q f_S + C^NS_a,q ∑_i=1^n_f e_i^2 (f_q_i + f_q̅_i),
where we have defined the so-called pure-singlet coefficient function,
C^PS_a,q = C^S_a,q - ⟨ e^2 ⟩ C^NS_a,q,
with ⟨ e^2 ⟩ the average squared charge. In this study we consider resummed structure functions matched to their fixed-order counterparts. To this purpose, we find it useful to introduce resummed contributions, defined as the all-order result minus its expansion to the fixed order we are matching to,
Δ_n C = C^res - ∑_k=1^n α_s^k C^res,(k),
where we typically consider matching to NLO and NNLO structure functions, i.e. n=1,2, although DIS structure functions are also known, in the massless case, to three loops <cit.>. Moreover, many of the formulae involving the resummed contribution that we derive in what follows hold regardless of the order in perturbation theory we are matching to. For these cases we adopt a simplified notation that omits the index n: Δ_n C → Δ C. Note also that we will sometimes write the full resummed expression C^res as Δ_0 C.
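The matching logic is elementary but worth making explicit. The following is a toy sketch with an invented "all-order" function R(a) = a/(1-a) standing in for C^res (so R^(k) = 1 for all k); it is not a real coefficient function, just an illustration of the subtraction in Δ_n C:

```python
def R(a):                 # toy "all-order" resummed contribution, R = O(a)
    return a / (1.0 - a)

def delta_n(a, n):        # Delta_n R = R - sum_{k=1}^{n} a^k R^{(k)}
    return R(a) - sum(a**k for k in range(1, n + 1))

a = 0.2                   # stand-in for alpha_s
for n in [0, 1, 2]:
    # Delta_0 R = R itself; each matching order removes one power:
    # Delta_n R = O(a^{n+1}) -> 0.25, 0.05, 0.01
    print(n, delta_n(a, n))
```

A matched prediction at order n is then the fixed-order result through a^n plus delta_n(a, n), which by construction differs from the pure fixed order only by terms of order a^(n+1) and beyond.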
§.§ Factorization schemes in presence of massive quarks

In the context of collinear factorization, mass collinear singularities due to massless quarks must be factorized, such that perturbative coefficient functions are finite. This is the case for the up, down and strange quarks, which we always consider massless. For massive quarks, i.e. charm, bottom and top, collinear singularities are regulated by the quark mass and they manifest themselves as logarithms of the ratio Q^2/m^2. Despite the fact that the factorization of these contributions is not necessary in order to obtain finite cross-sections, if Q^2 ≫ m^2 these logarithms become large and their all-order resummation becomes desirable in order to obtain reliable perturbative predictions. This resummation is obtained by factorizing the mass logarithms, in the very same way as done for the massless quarks. Whether mass collinear logarithms are factorized or not for a given massive quark is a choice of factorization scheme. A scheme where the collinear logarithms for the first n_f lightest quarks are factorized is called a scheme with n_f active flavours. In such a scheme, collinear logarithms are resummed for light quarks, while they are treated at fixed order for heavy quarks, thus defining the (relative) concept of light and heavy. Note that while obviously a heavy quark is massive, a light quark could be either massless, e.g. up, down, strange, or massive, e.g. charm and bottom. Thus, n_f can be 3, 4, 5. Note that n_f=6 is not phenomenologically relevant, especially in the context of DIS, because the hard scale of the process is at most comparable, but never much bigger, than the top mass and so the top will always be treated as a heavy flavour. In several cases, mostly in the context of PDF fits where data span a large range in Q^2, it is convenient to define so-called variable flavour number schemes (VFNS), where the number n_f of active flavours varies as a function of Q, such that the collinear resummation for a given massive quark is turned on only for scales where it is needed. More specifically, a VFNS is a patch of factorization schemes with subsequent values of n_f, which switches from a value n_f to the next one (n_f+1) at a given "heavy quark threshold" μ_h, typically chosen of the order of the heavy quark mass. The relation between a scheme with n_f active quarks and a scheme where the mass logarithms of the (n_f+1)-th flavour are resummed, i.e. a scheme with n_f+1 active flavours, is at the core of the construction of a VFNS and provides the ingredients to resum the collinear logarithms of the (n_f+1)-th flavour. For a recent review see Ref. <cit.>.
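The bookkeeping of a VFNS reduces, at the level of the number of active flavours, to a simple step function of Q. Here is a minimal sketch; the threshold values are purely illustrative (the text only requires μ_h of the order of the heavy quark mass):

```python
def n_active_flavours(Q, thresholds=(1.51, 4.92, 172.5)):
    """Number of active flavours in a VFNS: start from n_f = 3 (u, d, s) and
    add one unit each time Q crosses a heavy-quark threshold mu_h ~ m_h.
    The charm/bottom/top threshold values here are purely illustrative."""
    return 3 + sum(Q >= mu_h for mu_h in thresholds)

for Q in [1.0, 2.0, 10.0, 200.0]:
    print(Q, n_active_flavours(Q))   # 3, 4, 5, 6 (n_f = 6 unused in practice)
```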
Let us consider DIS structure functions in a scheme with n_f (massless or massive) active flavours. As we increase the hard scale Q, we reach energy scales which are significantly bigger than the mass of the (n_f+1)-th quark flavour. In this situation the n_f-flavour scheme is no longer appropriate, as potentially large collinear logarithms log(Q^2/m^2) are left unresummed. A more reliable framework is then provided by a factorization scheme in which n_f+1 flavours are considered active, i.e. they all participate in parton evolution, having factorized, and hence resummed, their collinear behaviour <cit.>. The PDFs in the two schemes are related by matching conditions
f_i^[n_f+1] = ∑_j=g,q_1,q̅_1,…,q_n_f,q̅_n_f K_ij f_j^[n_f],    i=g,q_1,q̅_1,…,q_n_f,q̅_n_f,q_n_f+1,q̅_n_f+1,
where the sum runs over the active flavours in the n_f scheme, and the K_ij are matching functions. Note that we only consider here factorization schemes in which the matching coefficients depend on the heavy quark mass only through logarithms of Q^2/m^2. In particular, we will consider only \overline{MS}-like schemes, where all the PDFs f_i^[n_f+1] in the n_f+1 scheme evolve through standard DGLAP equations, i.e. as if they all were PDFs of massless quarks. Note however that the heavy quark PDFs, being generated by the matching conditions Eq. (<ref>), have a purely perturbative origin, and depend effectively on the heavy quark mass. The structure functions Eq. (<ref>) can be written in either scheme
F_a = F_a^[n_f] ≡ C^[n_f]_a,g f^[n_f]_g + C^[n_f],PS_a,q f^[n_f]_S + C^[n_f],NS_a,q ∑_i=1^n_f e_i^2 (f^[n_f]_q_i + f^[n_f]_q̅_i)
    = F_a^[n_f+1] ≡ C^[n_f+1]_a,g f^[n_f+1]_g + C^[n_f+1],PS_a,q f^[n_f+1]_S + C^[n_f+1],NS_a,q ∑_i=1^n_f+1 e_i^2 (f^[n_f+1]_q_i + f^[n_f+1]_q̅_i).
To all orders in α_s, the choice of factorization scheme is immaterial and the two expressions are identical. Truncating the perturbative expansion of the coefficients to any finite order makes the two expressions differ by higher order terms. Requiring equivalence of the two expressions order by order allows one to relate the various coefficients and to find the matching functions K_ij. The coefficient functions in the n_f scheme are computed in standard collinear factorization with n_f active quarks, with the heavy quark(s) only appearing in the final state or through loops. In the n_f+1 scheme the coefficient functions generally differ, as the collinear logarithms due to the heavy quark are also factorized. Their expressions, which include mass dependence, can be determined by the equality of the first and second line of Eq. (<ref>):
C^[n_f]_a,g = C^[n_f+1]_a,g K_gg + C^[n_f+1],PS_a,q (n_f K_qg + K_hg) + C^[n_f+1],NS_a,q ( ∑_i=1^n_f e_i^2 K_qg + e_n_f+1^2 K_hg ),
and similarly for the quark coefficient functions. In the equation above, we have defined
K_hg ≡ K_q_n_f+1 g + K_q̅_n_f+1 g,
and similarly for the gluon to light-quark matching function K_qg, since these functions do not depend on the specific flavour but only on whether the final quark is light or heavy. Eq. (<ref>) and its quark counterparts can be solved to express the coefficient functions in the n_f+1 scheme in terms of those in the n_f scheme and the matching functions K_ij, as we shall see in the next section. Furthermore, requiring that resummation of collinear logarithms due to the (n_f+1)-th flavour is achieved in the n_f+1 scheme allows us to derive expressions for the matching functions as well. We will come back to this point in Sect. <ref>. For a detailed and general discussion, not limited to small x, see e.g. <cit.>.
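Since the moments diagonalize all convolutions, the scheme-change relation above is, at fixed Mellin N, plain algebra. The following sketch inverts the displayed gluon-channel relation; the numerical inputs are invented toy values, and the quark-channel relations (not displayed in the text) would be handled in the same way.

```python
def c_g_nfp1(c_g_nf, c_ps, c_ns, K_gg, K_qg, K_hg, e2, e2_h, nf):
    """Invert the gluon-channel relation at fixed Mellin N (all inputs are
    numbers in N-space; c_ps, c_ns are the n_f+1-scheme pure-singlet and
    non-singlet coefficients, here taken as given):
      C^[nf]_g = C^[nf+1]_g K_gg + C^PS (nf K_qg + K_hg)
                 + C^NS (sum_i e_i^2 K_qg + e_{nf+1}^2 K_hg)."""
    return (c_g_nf - c_ps * (nf * K_qg + K_hg)
            - c_ns * (sum(e2) * K_qg + e2_h * K_hg)) / K_gg

# toy N-space numbers, purely illustrative
e2 = [4/9, 1/9, 1/9, 4/9]          # u, d, s, c squared charges for nf = 4
print(c_g_nfp1(0.8, 0.05, 1.0, K_gg=1.02, K_qg=0.0, K_hg=0.03,
               e2=e2, e2_h=1/9, nf=4))
```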
§.§.§ Heavy-flavour schemes at small-x We now focus our discussion of heavy-quark factorization schemes on the contributions that are enhanced in the small-x regime. This topic has been discussed at length in Refs. <cit.>, where explicit resummed results were presented in the DIS factorization scheme. Here, we extend the results to \overline{MS}-like schemes.As previously mentioned, the resummed contribution Δ C can be either seen as a contribution to the singlet or to the pure singlet. At the accuracy we are considering here, it always comes from a gluon ladder which ends with the production of a quark pair, one of which is struck by the photon, as depicted in Fig. <ref>. In the case of a quark-initiated contribution, the quark immediately converts to a gluon, which then starts emitting. Therefore, the resummed contribution to the singlet coefficient functions, denoted Δ C_a,i, always has the form (in the n_f scheme)Δ C_a,i^[n_f] = ∑_k=1^n_f e_k^2 Δ C^L_a,i(m_q_k) + ∑_k=n_f+1^6 e_k^2 Δ C^H_a,i(m_q_k),    i=g,q,where Δ C^L_a,i(m_q_k) is the resummed contribution in the case of a light (active) flavour being struck by the photon, and Δ C^H_a,i(m_q_k) is the resummed contribution in the case of a heavy flavour being struck by the photon.[In charged-current DIS, where the photon is replaced by a W boson, the quark flavour changes after hitting it, and so does its mass. Therefore, in this case, the coefficient functions Δ C^L_a,i and Δ C^H_a,i would also depend on the mass of the outgoing quark.] Recall that being active does not necessarily imply being massless, and indeed both massless and massive flavours contribute to Δ C^L_a,i. Crucially though, in this contribution collinear logarithms are factorized and resummed and, consequently, the zero-mass limit of Δ C^L_a,i is finite.On the other hand, Δ C^H_a,i(m_q_k) only contains massive quarks and no resummation of mass logarithms has been performed. Thus, the massless limit of this type of contribution is logarithmically divergent. In some simplified approaches, the mass of the heavy quark is immediately neglected once it becomes active: this leads to what is sometimes called a zero-mass variable flavour number scheme (ZM-VFNS).Note that the massless contribution is identical for each massless quark, so in a ZM-VFNS we would have ∑_k=1^n_f e_k^2 Δ C^L_a,i(0) = ⟨ e^2 ⟩ n_f Δ C^L_a,i(0),    i=g,q.For this reason, a factor n_f is usually included in the definition of the massless singlet coefficient function, see e.g. Refs. <cit.>. Here instead, we wish to retain the mass dependence of the active flavours, if present. To this purpose, we adopt a factorization scheme akin to S-ACOT <cit.> or FONLL <cit.> (which are formally identical <cit.>), in which the mass dependence is retained in the coefficient functions.We now perform a logarithmic counting on Eq. (<ref>). None of the matching functions is LL, with the exception of the LO diagonal components, which are all equal to 1 at O(α_s^0), K_ii^(0)=1. Furthermore, all coefficient functions are NLL, except the non-singlet LO coefficient of F_2, which is C^ NS,(0)_2,q=O(1) and thus LL. The leading non-trivial logarithmic contributions in the coefficient functions are then NLL, and their resummed contributions, Eq. (<ref>), are related in the two schemes byΔ C^[n_f]_a,g = Δ C^[n_f+1]_a,g + e_n_f+1^2 C^ NS,(0)_a,q Δ K_hg(m),Δ C^[n_f]_a,q = Δ C^[n_f+1]_a,q + e_n_f+1^2 C^ NS,(0)_a,q Δ K_hq(m),where Δ K_ij are the NLL resummed contributions to K_ij. In the results above, we have neglected all contributions which are products of two NLL functions. We have also dropped K_qg (and K_qq), which start beyond NLL, as they are given by conversions of gluons or quarks into quarks with the participation of the heavy quark, i.e. they are suppressed by at least two genuine powers of α_s. Note that, for simplicity, we are not indicating in Δ K_ij any scheme label, but we emphasize its (logarithmic) dependence on the heavy quark mass.Note that C^ NS,(0)_a,q in Eq. (<ref>) is the one in the n_f+1 scheme, so it is, in principle, an unknown of the problem. However, the requirement that in the n_f+1 scheme the resummation of collinear logarithms is achieved forces it to be equal to the value computed in the limit where the heavy flavour is massless, up to possibly power-behaving mass corrections.
This mass dependence can be arbitrarily fixed to be zero, as the original set of simultaneous equations, Eq. (<ref>), is underdetermined: there are two more unknowns than equations <cit.>. This choice is the one leading to the S-ACOT/FONLL <cit.> and TR <cit.> schemes, where we haveC^ NS,(0)_2,q(0) = 1,    C^ NS,(0)_L,q(0) = 0.In alternative approaches, like ACOT <cit.> or, equivalently, a new incarnation of FONLL <cit.>, denoted here as FONLL_IC, which has been introduced to account for a possible intrinsic component of the charm PDF, the incoming heavy quark is treated as massive and therefore the mass dependence is fully maintained in the coefficient function:C^ NS,(0)_2,q(m) = √(1+4m^2/Q^2) [2/(1+√(1+4m^2/Q^2))]^N+1,C^ NS,(0)_L,q(m) = (4m^2/Q^2)/√(1+4m^2/Q^2) [2/(1+√(1+4m^2/Q^2))]^N+1.Note that the m→0 limit of these expressions reduces to the massless result, Eq. (<ref>); a numerical illustration is given in the sketch below. We stress that for most applications the simpler S-ACOT/FONLL option is formally, and practically, as good as the more complicated ACOT/FONLL_IC approach, and hence we focus on the former in the following. However, care must be taken when describing the charm structure functions in the case in which the charm PDF is fitted. In this case, the two approaches may lead to sizeable differences and the use of ACOT/FONLL_IC might be advisable. We will come back to this point in Sect. <ref>.We now consider again the decomposition Eq. (<ref>). In the n_f+1 scheme it simply becomesΔ C^[n_f+1]_a,i = ∑_k=1^n_f+1 e_k^2 Δ C^L_a,i(m_q_k) + ∑_k=n_f+2^6 e_k^2 Δ C^H_a,i(m_q_k),    i=g,q,where Δ C^L_a,i(m) are the small-x resummed contributions to the production of a heavy quark (pair) in the scheme in which such heavy quark participates in the parton dynamics and its collinear logarithms have been factorized.We now plug Eqs. (<ref>) and (<ref>) into Eq. (<ref>). All terms involving the lightest n_f flavours and the heaviest n_f+2,…,6 flavours cancel out, leaving only a relation between the coefficient functions for the n_f+1 flavour:e_n_f+1^2 Δ C^H_a,i(m) = e_n_f+1^2 Δ C^L_a,i(m) + e_n_f+1^2 C^ NS,(0)_a,q Δ K_hi(m),    i=g,q.The squared charge is the same in all terms and cancels out, thereby suggesting that this result holds in general for more generic couplings, such as when Z-boson exchange plays a role, as will be discussed in Sect. <ref>. We then find the (collinearly factorized) massive coefficients in the n_f+1 scheme for F_2, assuming Eq. (<ref>),Δ C^L_2,g(m) = Δ C^H_2,g(m) − Δ K_hg(m),Δ C^L_2,q(m) = Δ C^H_2,q(m) − Δ K_hq(m),and for F_LΔ C^L_L,g(m) = Δ C^H_L,g(m),Δ C^L_L,q(m) = Δ C^H_L,q(m).Given the matching functions Δ K_ij at NLL accuracy, the above results completely fix the relation between the NLL resummed contributions to the coefficient functions in the n_f and n_f+1 schemes. In particular, Eq. (<ref>) shows that the resummed contribution to F_L is the same in either scheme.
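The following short numerical sketch (ours; it simply evaluates the massive LO non-singlet coefficients as written above, in Mellin space) illustrates the two limits:

```python
# Massive LO non-singlet coefficients at N=0: for m^2/Q^2 -> 0 they reduce
# to the massless values (C2, CL) = (1, 0); at finite mass both receive
# pure power corrections in m^2/Q^2 (no logarithms).
import numpy as np

def cns0(N, r):                       # r = m^2/Q^2
    s = np.sqrt(1 + 4*r)
    common = (2/(1 + s))**(N + 1)
    return s*common, (4*r/s)*common   # (C2, CL)

for r in (1e-8, 0.1, 1.0):
    print(r, cns0(0.0, r))            # r -> 0 gives (1, 0)
```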
§.§.§ Computation of matching functions In the derivation above the matching functions K_ij (or Δ K_ij) are assumed to be given as an input to the computation of the coefficient functions in the n_f+1 scheme. However, the very same derivation also allows us to construct the matching functions themselves. This is true in general (see e.g. Ref. <cit.>), but we focus on the small-x limit for simplicity.The key observation is that, after the scheme change, the coefficient functions in the n_f+1 scheme must no longer contain the collinear logarithms associated with the (n_f+1)-th flavour. This is possible only if the matching functions subtract such collinear logarithms from the massive coefficients Δ C^H_a,i(m), such that the massless limit m→0 of the coefficient functions in the new scheme, Δ C^L_a,i(m), is finite. If we further require that the n_f and n_f+1 schemes are both of the same type, e.g. \overline{MS}-like, we also need to impose that the massless limit of the massive coefficient Δ C^L_a,i(m) is just what we would have computed if the (n_f+1)-th flavour were massless: lim_Q≫m Δ C^L_a,i(m) = Δ C^L_a,i(0),    i=g,q.(Note that, at the logarithmic accuracy we are interested in, Δ C^L_a,i(0) are the same in the n_f and in the n_f+1 schemes.[In the actual resummed expressions we compute, there is a non-zero n_f dependence in the coefficient functions Δ C^L_a,i(0), which is however subleading and can therefore be ignored.]) The massless limit Eq. (<ref>) ensures that in the n_f+1 scheme the collinear logarithms are properly factorized into the PDFs and resummed through DGLAP evolution. It also fixes the “constant” (i.e., non mass-dependent) part of the function Δ C^L_a,i(m_q_n_f+1). It does not tell us anything about the power corrections in m/Q in the n_f+1 scheme, which have to be determined by the matching procedure.The results Eq. (<ref>) show that, since F_L has no collinear singularities at NLL, the massive coefficient in the n_f+1 scheme smoothly approaches the massless one at large Q, without any scheme change to be applied. On the other hand, in the case of F_2, Eq. (<ref>), which contains collinear singularities in the massless limit, the scheme change effectively subtracts the matching function, and the difference smoothly tends to the massless coefficient for Q≫m.We can exploit the last consideration to derive the desired resummed expressions for the matching functions Δ K_hi, i=g,q. Indeed, Eq. (<ref>) can be inverted to give Δ K_hi in terms of the massive coefficient functions Δ C^H_2,i(m), which are known, and of the coefficient functions Δ C^L_2,i(m) in the n_f+1 scheme, the massless limits of which, Δ C^L_2,i(0), are known. We can therefore consider the massless limit m→0 of this expression, keeping in mind that Δ K_hi and Δ C^H_2,i(m) are separately logarithmically divergent, and therefore the massless limit has to be intended as setting to zero power-behaving contributions while keeping the logarithms finite. Assuming, or better choosing, that Δ K_hi contains only logarithms of the mass or mass-independent terms, we then findΔ K_hi(m) = lim_Q≫m [Δ C^H_2,i(m) − Δ C^L_2,i(0)],    i=g,q,where the limit has to be intended as a formal expression as described above. Note that including power-behaved mass-dependent terms is possible and would simply change the form of Δ C^L_2,i(m) through Eq. (<ref>), as well as the PDFs in the n_f+1 scheme according to Eq. (<ref>). However, this “factorization” of power-behaved contributions is beyond the control of the collinear factorization framework, so the simplest choice Eq. (<ref>) is perfectly acceptable and therefore universally adopted.The definition Eq. (<ref>) also shows that the matching functions at NLL satisfy colour-charge relations. Indeed, it is known that both the massless and the massive coefficient functions in the quark and gluon channels are related at NLL by[These colour-charge relations are strictly speaking only valid when the resummed contributions refer to the pure singlet. If they refer to the singlet, in the massless case there is a fixed-order contribution which needs to be subtracted <cit.>.
Alternatively, we can say that these relations are valid for Δ_k C_a,i with k≥1.]Δ C^L_a,q(0) = C_F/C_A Δ C^L_a,g(0),    Δ C^H_a,q(m) = C_F/C_A Δ C^H_a,g(m).From Eq. (<ref>) it then immediately follows thatΔ K_hq(m) = C_F/C_A Δ K_hg(m).Together, Eqs. (<ref>) and (<ref>) imply through Eqs. (<ref>) and (<ref>) a colour-charge relationΔ C^L_a,q(m) = C_F/C_A Δ C^L_a,g(m)for the coefficient functions in the n_f+1 scheme. §.§.§ Generalization to neutral and charged currents The previous results have been derived in the case of photon-exchange DIS. We now comment on their generalization to the more general case of neutral currents, where also the Z boson contributes, and to the case of charged currents, where the exchanged boson is a W.Including a Z boson in the discussion is rather trivial. What changes is just the coupling to the quark, which is different from that of the photon both in the Z-only exchange and in the photon-Z interference. In Z-only exchange both vector and axial couplings contribute, which we denote generically as g_V and g_A. If we consider only massless quarks, the sum of the squares of each coupling, g_V^2+g_A^2, factorizes. The only subtlety regarding Z exchange is that in the massive case there is a contribution which is not proportional to the sum of the squares of the vector and axial couplings. However, in this case, one can still factor out g_V^2+g_A^2, but a contribution proportional to g_A^2/(g_V^2+g_A^2) appears, which is not present in the photon (or photon-Z interference) case. This term is not problematic, as it vanishes in the limit of massless quarks. Therefore, the whole construction of the previous sections, and in particular Sect. <ref>, remains unchanged, provided the photon couplings e_k^2 are replaced with the more general coupling (g_V^2+g_A^2)_k for each quark q_k. The charged-current case is less trivial.In this case, the quark flavour changes after interacting with the W, and this results in two additional complications. Firstly, the quark mass changes. In fact, for all practical applications, one of the quarks interacting with the W can be considered massless. Indeed, ignoring the top quark, which is too heavy to give a contribution at typical DIS energies, the only two massive quarks are the charm and the bottom, but their interaction is suppressed by the V_cb CKM entry and can thus be neglected. Hence, the only remaining combinations involve either a massless and a massive quark, or two massless quarks.The second complication arises from the fact that the final state, composed of a quark q and an anti-quark q̅', is not self-conjugate (as it is in the neutral-current case, where the final state contains qq̅). This means that the non-singlet coefficient in Eq. (<ref>) is different for q and q̅, depending on the process. In particular, this implies that when using Eq. (<ref>) to derive Eq. (<ref>), only either the heavy quark q_n_f+1 or the heavy anti-quark q̅_n_f+1 contributes: thus, the collinear subtraction, where present, contains only half of the matching function. Additionally, for the computation of the parity-violating F_3 structure function (which contains a singlet contribution in the massive charged-current case), the non-singlet LO coefficient has opposite sign depending on whether the initial-state parton is a quark or an anti-quark, specifically C^ CC, NS,(0)_3,q=1 and C^ CC, NS,(0)_3,q̅=-1.[Note that the opposite sign of these two contributions makes the collinear subtractions in the fully massless case ineffective.
Indeed, the collinear singularities in this case cancel automatically. In fact, after the cancellation, the non-singular part vanishes, thus making the massless singlet contribution to F_3 zero (which remains true at higher orders, since the underlying reason is the antisymmetry of the F_3 contribution under the exchange of the two quark masses). The same mechanism holds in the neutral-current massive case.] Putting everything together, the analogues of Eqs. (<ref>) and (<ref>) areΔ C^L,CC_2,i(m) = Δ C^H,CC_2,i(m) − Δ K_hi(m)/2,    i=g,q,Δ C^L,CC_L,i(m) = Δ C^H,CC_L,i(m),Δ C^L,CC_3,i(m) = Δ C^H,CC_3,i(m) − Δ K_hi(m)/2   if the final state is q Q̅,Δ C^L,CC_3,i(m) = Δ C^H,CC_3,i(m) + Δ K_hi(m)/2   if the final state is Q q̅,where we denote with Q the heavy massive quark and with q the companion massless quark appearing in the final state. The massless limit of the latter is just zero, Δ C^L,CC_3,i(0)=0, since in the massless limit F_3 is non-singlet and therefore does not contain logarithmic enhancements at small x (see App. <ref> for further details).§.§ Small-x resummation of coefficient functions and matching functions We are now ready to discuss the actual resummed expressions for the coefficient functions, both in the massless and in the massive case. Additionally, using Eq. (<ref>), we also determine the all-order behaviour of the matching functions.The all-order behaviour of partonic coefficient functions is obtained using the k_T-factorization theorem. In this framework one computes the gluon-initiated contribution to the process of interest, keeping the incoming gluon off its mass-shell. To the best of our knowledge, only the off-shell coefficient functions that are necessary to perform the resummation of the photon-induced DIS structure functions have been presented in the literature, both in the case of massless and massive quarks, see e.g. <cit.>. However, in this study we want to resum all DIS coefficient functions, both the neutral-current and the charged-current contributions. Therefore, we perform a general calculation of the DIS off-shell coefficient functions, considering the coupling to the electroweak bosons and, where relevant, the interference with the photon-induced contributions. The calculation is detailed in Appendix <ref>, where we collect all the relevant results for the off-shell DIS coefficient functions. These results have been implemented in the latest version of our public resummation code, and are also accessible through the public evolution code <cit.> to which it has been interfaced, which directly constructs resummed structure functions. §.§.§ Resummed coefficient functions The resummation of the massless coefficient functions was originally performed to NLL in Ref. <cit.>. Following Ref. <cit.>, in our recent work <cit.> we have also included formally subleading, but important, contributions such as the ones related to the running of the strong coupling.In our approach, the partonic massless coefficient function C_a(N,ξ,0,0), which is calculated with an incoming off-shell gluon (see App. <ref> for details about our notation), is convoluted with an evolution operator U(N,ξ),Δ_0 C^L_2,g(0) = −∫ dξ [d/dξ C_2(0,ξ,0,0)] U(N,ξ) − U_qg/n_f,Δ_0 C^L_L,g(0) = −∫ dξ [d/dξ C_L(0,ξ,0,0)] U(N,ξ),whereU(N,ξ) = exp ∫_1^ξ dζ/ζ γ_+(N, α_s(ζ Q^2)),and γ_+ is the small-x resummed anomalous dimension.Note that the above expressions hold in a factorization scheme denoted Q_0\overline{MS} <cit.>, which differs from \overline{MS} at relative O(α_s^3). In the context of small-x resummation this scheme is preferred because it leads to more stable results <cit.>.
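To make the structure of U(N,ξ) concrete, here is a small numerical toy (ours: the values of Q^2 and α_s, and the use of the leading approximation γ_+ ≈ C_A α_s/(π N), are illustrative assumptions, not the actual resummed anomalous dimension):

```python
# Toy evaluation of U(N, xi) = exp( int_1^xi dzeta/zeta gamma_+(N, alpha_s(zeta*Q^2)) )
# with 1-loop running coupling and gamma_+ approximated by its LL leading
# term C_A*alpha_s/(pi*N). At fixed coupling this reduces to xi**gamma_+.
import numpy as np
from scipy.integrate import quad

CA = 3.0
beta0 = (11*CA - 2*4) / (12*np.pi)              # 1-loop beta, n_f = 4

def alpha_s(Q2, Q02=10.0, a0=0.2):              # toy boundary condition
    return a0 / (1 + beta0*a0*np.log(Q2/Q02))

def U(N, xi, Q2=10.0):
    gamma = lambda lnz: CA*alpha_s(np.exp(lnz)*Q2) / (np.pi*N)
    val, _ = quad(gamma, 0.0, np.log(xi))
    return np.exp(val)

print(U(0.5, 4.0), 4.0**(CA*0.2/(np.pi*0.5)))   # running vs fixed coupling
```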
Furthermore, since the explicit N dependence of the off-shell partonic coefficient function is subleading, we find it advantageous to work at NLL with its N=0 moment.In the resummed expression of the F_2 contribution, the subtraction term U_qg appears. Its role is to cancel the collinear singularity of C_2(0,ξ,0,0). Its expression reads <cit.>U_qg = ∫ dξ/ξ γ_qg(N, α_s(ξ Q^2)) θ(1−ξ) U(N,ξ),where γ_qg is the resummed qg anomalous dimension. All ξ integrals extend to ∞ and start from the position of the Landau pole, ξ_0 = exp(−1/(β_0 α_s)). At ξ=ξ_0 the evolution function U is supposed to vanish (this was e.g. a condition for neglecting a boundary term when integrating by parts in Ref. <cit.>). However, due to subleading contributions, this is not always true in our practical construction. While the induced effect is subleading, this fact is undesirable: we discuss in Appendix <ref> how we now deal with this issue.We can apply the same procedure to the case of a heavy (non-active) flavour. We start by considering neutral currents. Note that in this case the mass of the quark acts as a regulator and no subtraction term U_qg appears,Δ_0 C^H_a,g(m) = −∫ dξ [d/dξ C_a(0,ξ,ξ_m,ξ_m)] U(N,ξ),    a=2,L,where we have defined ξ_m=m^2/Q^2 (see again App. <ref> for the precise definition of the off-shell coefficient and its arguments). The massive coefficient functions entering the above formula have been computed a long time ago <cit.> (see also Ref. <cit.>). We report them in Appendix <ref>, where we have also recomputed them in a more general set-up which covers both the full neutral-current case, in which the mediator can be a Z boson, thus finding a new contribution proportional to the axial coupling, and the charged-current scenario, in which the mass of the quark changes after interacting with the W boson. We have already commented in Sect. <ref> that for physically relevant processes only one of the quarks involved in charged-current DIS is massive, and the other can be treated as massless. In this case the resummed coefficients readΔ_0 C^H,CC_2,g(m) = −∫ dξ [d/dξ C_2(0,ξ,ξ_m,0)] U(N,ξ) − U_qg/(2n_f),Δ_0 C^H,CC_L,g(m) = −∫ dξ [d/dξ C_L(0,ξ,ξ_m,0)] U(N,ξ) − ξ_m/(1+ξ_m) U_qg/(2n_f),Δ_0 C^H,CC_3,g(m) = −∫ dξ [d/dξ C_3(0,ξ,ξ_m,0)] U(N,ξ) + 1/(1+ξ_m) U_qg/(2n_f)   (qQ̅),Δ_0 C^H,CC_3,g(m) = −∫ dξ [d/dξ C_3(0,ξ,0,ξ_m)] U(N,ξ) − 1/(1+ξ_m) U_qg/(2n_f)   (Qq̅).Here, since one of the two quarks involved is massless, we need massless collinear subtractions, implemented through U_qg, to take care of the collinear singularity. Since only one of the two diagrams contains the singularity, there is a factor 1/2 for each subtraction. Each subtraction further multiplies the LO non-singlet diagram evaluated in N=0, corresponding to the process q+W→ Q (or its conjugate) with massive Q, which has a non-trivial mass dependence for F_L and F_3 (and a non-trivial sign for F_3). For F_L in particular, this term vanishes in the massless limit, consistently with Eq. (<ref>). In the last equation we treat separately the case in which the final state contains a massless quark plus a massive anti-quark and its conjugate process, for the same reason discussed in Sect. <ref>. Note that, according to the notation defined in App. <ref>, where the third argument of the off-shell coefficient is the mass (squared, divided by Q^2) of the anti-quark and the fourth that of the quark in the final state, the arguments are swapped in the two cases.
Effectively, for F_3 swapping the arguments changes the sign, so the difference between the two cases is just an overall sign. For F_2 and F_L, instead, the coefficients are symmetric under final-state charge conjugation, and therefore the result does not change when swapping the arguments.We stress that in the massive case the partonic coefficients include non-trivial theta functions which restrict the available phase space. This is originally encoded in the N dependence of the off-shell coefficients, which we lose when setting N=0. As these theta functions are very physical, it is important to restore them. Details on how this is implemented in our resummed results are given in Appendix <ref>.The massless ξ_m→0 limit of all the off-shell coefficients is finite, because the off-shellness ξ regulates the collinear region, and gives the massless off-shell coefficients entering Eqs. (<ref>) (and zero for F_3). However, while the ξ_m→0 limit for F_L automatically gives the massless result, as it must according to Eqs. (<ref>), (<ref>), for F_2 and F_3 one further needs to subtract the matching condition, Eqs. (<ref>), (<ref>), (<ref>). Accordingly, the resummed massive coefficient functions for the massive active flavour are given in neutral current byΔ_0 C^L_2,g(m) = −∫ dξ [d/dξ C_2(0,ξ,ξ_m,ξ_m)] U(N,ξ) − Δ_0 K_hg(m),Δ_0 C^L_L,g(m) = −∫ dξ [d/dξ C_L(0,ξ,ξ_m,ξ_m)] U(N,ξ),and in charged current byΔ_0 C^L,CC_2,g(m) = −∫ dξ [d/dξ C_2(0,ξ,ξ_m,0)] U(N,ξ) − U_qg/(2n_f) − Δ_0 K_hg(m)/2,Δ_0 C^L,CC_L,g(m) = −∫ dξ [d/dξ C_L(0,ξ,ξ_m,0)] U(N,ξ) − ξ_m/(1+ξ_m) U_qg/(2n_f),Δ_0 C^L,CC_3,g(m) = −∫ dξ [d/dξ C_3(0,ξ,ξ_m,0)] U(N,ξ) + 1/(1+ξ_m) U_qg/(2n_f) − Δ_0 K_hg(m)/2   (qQ̅),Δ_0 C^L,CC_3,g(m) = −∫ dξ [d/dξ C_3(0,ξ,0,ξ_m)] U(N,ξ) − 1/(1+ξ_m) U_qg/(2n_f) + Δ_0 K_hg(m)/2   (Qq̅)(again, the last equation is split in two depending on whether the final state is qQ̅ or Qq̅, the difference being an overall sign). Comparison of Eq. (<ref>) (and equivalently Eq. (<ref>)) with Eq. (<ref>) shows that the massless limit does not commute with the on-shell limit in the presence of collinear singularities. In particular, the ξ_m→0 limit applied to Δ_0 C^L,(CC)_a,g(m), a=2,3, does not commute with the ξ integration. §.§.§ Resummed matching functions We can now use Eq. (<ref>) to determine the resummed matching function Δ_0 K_hg(m). Using Eqs. (<ref>) and (<ref>) we can write[Note that this equation can be written in two alternative forms by comparing Eq. (<ref>) to Eq. (<ref>) or by noting that the massless limit of Eq. (<ref>) vanishes. Using the results of App. <ref> it is easy to verify that all these forms are equivalent and lead to the same result.]Δ_0 K_hg(m) = U_qg/n_f + ∫ dξ [d/dξ C_2(0,ξ,0,0)] U(N,ξ) − lim_ξ_m→0 ∫ dξ [d/dξ C_2(0,ξ,ξ_m,ξ_m)] U(N,ξ),where the last two terms are basically the commutator of the ξ integration and the massless limit. Computing this commutator in the general case is highly non-trivial. Therefore, we first consider the limit in which the coupling is kept fixed. In this limit the convolution over ξ becomes a Mellin transformation with moment M=γ_s(α_s/N), the LL anomalous dimension, dual of the LO BFKL kernel. This Mellin transform can be performed analytically, so the massless ξ_m→0 limit can be safely taken afterwards. We have (see Appendix <ref>)Δ_0 K_hg^f.c.(m) = U_qg/n_f − (α_s/π) (m^2/Q^2)^γ_s (1−γ_s)/γ_s · Γ^3(1−γ_s)Γ(1+γ_s)/[(3−2γ_s)Γ(2−2γ_s)].A non-trivial check of the above expression can be performed by considering its perturbative expansion,
Δ_0 K_hg^f.c.(m) = [h_0/n_f · 1/γ_s + h_1/n_f + (h_2/n_f)γ_s + … − α_s/(3πγ_s) − α_s(5+3log(m^2/Q^2))/(9π) − α_s(56+30log(m^2/Q^2)+9log^2(m^2/Q^2))/(54π) γ_s + …] = − α_s log(m^2/Q^2)/(3π) − (C_A α_s^2)/(π^2 N) · (28+30log(m^2/Q^2)+9log^2(m^2/Q^2))/54 + O(α_s^3),where h_i are the (known) coefficients of the perturbative expansion of γ_s U_qg|_f.c. = γ_qg in powers of γ_s <cit.>. The first coefficients read h_0 = α_s n_f/(3π), h_1 = α_s n_f/(3π) · 5/3, h_2 = α_s n_f/(3π) · 14/9. Note that the collinear pole 1/γ_s cancels out in the sum (a symbolic check is sketched below). The second equality is then obtained replacing γ_s = C_A α_s/(π N) + O(α_s^2).The above expression can be compared with the fixed-order results presented in Ref. <cit.>. The O(α_s) term corresponds to the Mellin transform of the NLO result computed in N=0, and the O(α_s^2) term correctly reproduces the leading singularity of the NNLO contribution.Checking the above result one order higher in perturbation theory is less trivial, because at this order we start to become sensitive to the choice of factorization scheme. After taking into account the conversion from Q_0\overline{MS} to \overline{MS}, which affects the second term in Eq. (<ref>), we find full agreement with the high-energy limit of the O(α_s^3) result <cit.>.We can now restore the running-coupling effects in the resummation from the fixed-coupling result by computingΔ_0 K_hg(m) = U_qg/n_f − ∫ dξ [d/dξ 𝒦_hg(ξ,ξ_m)] U(N,ξ),where 𝒦_hg is obtained as the inverse Mellin transform of its fixed-coupling counterpart, the second term in Eq. (<ref>). The computation of this inverse Mellin transform is done in App. <ref>, and its derivative is given byd/dξ 𝒦_hg(ξ,ξ_m) = α_s/(3π) · (6ξ_m/ξ^2) [1 − (4ξ_m/ξ) √(ξ/(ξ+4ξ_m)) log(√(ξ/(4ξ_m)) + √(1+ξ/(4ξ_m)))].Clearly, by construction, Eq. (<ref>) with Eq. (<ref>) reproduces the correct result in the fixed-coupling limit, Eq. (<ref>). Also, it clearly includes the correct resummation of the subleading running-coupling effects, as the form Eq. (<ref>) is the standard expression for such a resummation <cit.>. Eq. (<ref>) is a new result: it allows one to resum the matching conditions in \overline{MS}-like schemes with full inclusion of running-coupling effects. §.§.§ Matching to fixed order and construction of the VFNS: FONLL as an example We conclude the section by giving some details on how the resummed results presented above are combined with the fixed-order contributions to construct a VFNS.First, we have to subtract from the resummed coefficient functions their expansion up to the order we want to match onto. The O(α_s) and O(α_s^2) contributions to the resummed matching function are explicitly given in Eq. (<ref>) in the fixed-coupling limit, but up to this order they are identical to their running-coupling counterparts, and so they can be used straight away to construct Δ_1 K_hg(m) and Δ_2 K_hg(m). The same equation shows in the first line the expansion of U_qg/n_f, which is in turn needed for the massless collinear subtractions of the DIS coefficient functions in Eqs. (<ref>), (<ref>) and (<ref>). To complete the list, one needs to expand the ξ-integrals of the off-shell coefficient functions with the evolution factor U(N,ξ). As for the matching function, up to O(α_s^2) this expansion can be simply obtained by working in the fixed-coupling limit. For this, we need the expansions in M of the Mellin transforms of the off-shell coefficients, given in App. <ref>, where M should be replaced by γ_s = C_A α_s/(π N) + O(α_s^2) and expanded out. By doing so, all the Δ_1 C and Δ_2 C contributions can be constructed, making the matching of each ingredient with the corresponding fixed order up to NNLO straightforward.
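The cancellation of the collinear pole can be verified symbolically; the following sketch (ours) expands the fixed-coupling expression in γ_s:

```python
# Symbolic check that in Delta_0 K_hg^{f.c.} the 1/gamma_s pole of U_qg/n_f
# cancels against the Gamma-function term, leaving -alpha_s*L/(3*pi) at
# order gamma_s^0, with L = log(m^2/Q^2).
import sympy as sp

g, L, a = sp.symbols('gamma_s L alpha_s')
T = (a/sp.pi)*sp.exp(g*L)*(1 - g)/g \
    * sp.gamma(1-g)**3*sp.gamma(1+g)/((3 - 2*g)*sp.gamma(2 - 2*g))
Uqg = (a/(3*sp.pi))*(1/g + sp.Rational(5, 3) + sp.Rational(14, 9)*g)

expansion = sp.series(Uqg - T, g, 0, 1).removeO().expand()
print(sp.simplify(expansion.coeff(g, -1)))   # 0: the pole cancels
print(sp.simplify(expansion.coeff(g, 0)))    # -L*alpha_s/(3*pi)
```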
The actual construction of a VFNS is more delicate. Indeed, there are at least two degrees of freedom that have been exploited in the literature to construct different incarnations of VFNSs (at fixed order). One degree of freedom is related to the inclusion of power-behaving mass-dependent contributions, undetermined by the matching conditions, in some coefficient functions, as already discussed in Sect. <ref>. The second degree of freedom is related to how the various ingredients are combined at a given finite perturbative order.The approach adopted so far in our construction can be identified with a plain (i.e., without any χ-rescaling <cit.>) S-ACOT construction, with a canonical perturbative counting based on explicit powers of α_s (at fixed α_s/N for resummed contributions). This is equivalent to a plain (i.e., without damping <cit.>) FONLL construction, even though in FONLL the various ingredients are combined together with a different philosophy. In the following we briefly review the FONLL construction, which is implemented in the public code with which our results can be directly used for resummed DIS phenomenology.The FONLL approach <cit.> is a standard combination of fixed-order and resummed contributions, in which these two ingredients are simply summed up and the double counting between them subtracted. In the FONLL case, the distinction between “fixed order (FO)” and “resummation (NLL)” refers to the collinear logarithms due to massive quarks. In DIS, the FONLL construction <cit.> of the structure functions, assuming a single heavy quark q_n_f+1 with mass m, is performed asF_a^ FONLL(m) = F_a^[n_f](m) + F_a^[n_f+1](0) − F_a^ d.c.(m),    a=2,L,3,where F_a^[n_f](m) is the fixed-order (called massive) contribution, in which the collinear logarithms are not resummed and which retains the full mass dependence of the heavy quark, F_a^[n_f+1](0) is the resummed (called massless) contribution, computed assuming that the heavy quark is massless, and thus where the (singular) collinear logarithms are resummed, and finally F_a^ d.c.(m) is the double-counting term (called massive-zero), which can be either seen as the fixed-order expansion of F_a^[n_f+1](0) or as the “massless limit” (in which divergent terms are kept finite) of F_a^[n_f](m). The combination F_a^[n_f+1](0) − F_a^ d.c.(m) can be seen as the resummed contribution to be added to the fixed order to resum the collinear logarithms, or equivalently F_a^[n_f](m) − F_a^ d.c.(m) can be interpreted as the power-behaving mass corrections to the (resummed) massless calculation.According to our nomenclature (extended to the structure functions), the FONLL result is just the massive result in the n_f+1 scheme, i.e.F_a^ FONLL(m) = F_a^[n_f+1](m).Thus, the double-counting term, which is the only non-trivial ingredient in Eq. (<ref>), is given byF_a^ d.c.(m) = F_a^[n_f](m) + F_a^[n_f+1](0) − F_a^[n_f+1](m).The corresponding small-x resummed coefficient functions, analogous to Δ C^L_a,i and Δ C^H_a,i, can be obtained from Eq. (<ref>) using Eq. (<ref>) together with Eqs. (<ref>) and (<ref>), and are given byΔ C^ d.c._a,i(m) = Δ C^L_a,i(0) + C^ NS,(0)_a,q Δ K_hi(m),    i=g,qfor NC, and similarly for CC (with an extra factor 1/2 multiplying Δ K_hi). The discussion so far does not add anything to the results presented in the previous sections. However, having now the small-x resummation for each individual ingredient appearing in Eq. (<ref>), we can also consider the version of FONLL which includes a damping of the resummed contribution,F_a^ FONLL+damp(m) = F_a^[n_f](m) + θ(1−ξ_m)(1−ξ_m)^2 [F_a^[n_f+1](0) − F_a^ d.c.(m)],such that the resummation smoothly turns off at the scale Q=m (a schematic implementation is given below).
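Schematically, the damped combination can be coded as follows (a minimal sketch of ours; the three ingredient functions are placeholders for the actual massive, massless and double-counting computations):

```python
# FONLL-like combination with optional damping:
# F = F_massive + theta(1 - xi_m)*(1 - xi_m)**2 * (F_massless - F_dc).
def fonll(Q, m, F_massive, F_massless, F_dc, damp=True):
    xi_m = m**2 / Q**2
    if damp:
        w = (1 - xi_m)**2 if xi_m < 1 else 0.0   # switches off at Q = m
    else:
        w = 1.0                                  # plain (undamped) FONLL
    return F_massive(Q, m) + w*(F_massless(Q) - F_dc(Q, m))
```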
This variant is often used in PDF fits, and effectively corresponds to damping the collinear subtraction term −C^ NS,(0)_a,q Δ K_hi(m) in our resummed coefficients Δ C^L_a,i(m).A word of caution is needed when discussing the perturbative counting. The canonical counting would consist in including all contributions at O(α_s) at NLO and all contributions at O(α_s^2) at NNLO (and so on). However, a non-standard counting is usually adopted at NLO (e.g. in NLO NNPDF fits), where the massless contribution F_a^[n_f+1](0) is retained at O(α_s), as well as the matching functions, but the massive contribution F_a^[n_f](m) is computed at one order higher, O(α_s^2) <cit.>.When this particular perturbative counting is adopted, the double-counting piece must be computed with care. In particular, only the definition of F_a^ d.c.(m) as the fixed-order expansion of F_a^[n_f+1](0) to O(α_s^2) gives the correct result.As far as small-x resummation is concerned, one has to use Δ_2 C^H_a,i(m) for the massive part and Δ_1 C^L_a,i(0) for the massless part, while for the matching at the heavy-quark threshold in DGLAP evolution Δ_1 K_hi(m) is to be used. For the double-counting part, being the expansion of the massless contribution, the use of Δ_1 C^L_a,i(0) and Δ_1 K_hi(m) in Eq. (<ref>) is needed. It can be explicitly verified that in this way the “resummed contribution” F_a^[n_f+1](0) − F_a^ d.c.(m) is indeed O(α_s^3) and subleading at small x. Finally, we discuss a variant of the VFNS where the mass of the heavy quark is retained in all coefficient functions in the n_f+1 scheme. This is the original ACOT <cit.> construction, and it also corresponds to the variant FONLL_IC proposed in Refs. <cit.> to account for a possible intrinsic component of the charm PDF. Following the latter references, we defineδ F_a = F_a^ FONLL_ IC − F_a^ FONLL = F_a^ ACOT − F_a^S-ACOTto be the term needed to upgrade S-ACOT/FONLL to ACOT/FONLL_IC (ignoring damping, rescaling, etc.). The small-x resummation of this term can be simply obtained by computing the difference between the resummation obtained with massive NS coefficients, Eq. (<ref>), and the one obtained with massless NS coefficients, Eq. (<ref>). We thus have (we apologize to the Reader for the awkward notation)Δδ c_a,i(m) = [C^ NS,(0)_a,q(0) − C^ NS,(0)_a,q(m)] Δ K_hi(m),    a=2,L,  i=g,q,Δδ c_a,i^ CC(m) = [C^ CC,NS,(0)_a,q(0) − C^ CC,NS,(0)_a,q(m)] Δ K_hi(m)/2,    a=2,L,3,  i=g,q,which are the resummed contributions to the singlet coefficient functions δ c_a,i making up δ F_a for neutral and charged currents respectively. Note that the massive NS coefficients, which have a non-trivial dependence on N, can be computed at N=0 in Eqs. (<ref>), as the N dependence is a subleading effect at small x.§ A NEW APPROACH TO RUNNING-COUPLING RESUMMATION IN DGLAP EVOLUTION In the original ABF construction <cit.>, which we followed with minor modifications in our previous work <cit.>, the resummation of the anomalous dimension γ_+ (the largest eigenvalue in the singlet sector) is performed through the exploitation of the duality relation between the DGLAP and BFKL evolution kernels, improved with a symmetrization of the latter and with the imposition of exact momentum conservation.This result is usually referred to as double-leading (DL) resummation. However, it was realized long ago <cit.> that running-coupling corrections to fixed-order duality give rise to subleading terms which potentially spoil the perturbative stability of the result.
Therefore, despite their formally subleading nature, the resummation of these effects is of utmost importance in order to obtain stable and reliable resummed anomalous dimensions. Additionally, the resummation of these terms changes the nature of the all-order small-N singularity, converting a square-root branch-cut into a simple pole. Therefore, the resummation of these contributions, known as running-coupling (RC) resummation, is usually added to the DL result.The RC resummation can be obtained by solving the BFKL equation with full running-coupling dependence (see e.g. <cit.>), and then deriving from the solution (which is an eigenvector PDF) its anomalous dimension. If we were able to perform this procedure with the full DL BFKL kernel, the resulting anomalous dimension would just be the final result. However, solving such an equation analytically is not possible, due to the complicated all-order α_s dependence and the non-trivial M dependence of the DL kernel. In some approaches, e.g. <cit.>, the equation is thus solved numerically, and the resummed anomalous dimension derived in a numerical way. Instead, in Refs. <cit.> an approximate analytic solution, in which both the α_s- and the M-dependence of the kernel are simplified, was constructed and added to the DL anomalous dimension, subtracting the appropriate double counting.We find this second approach more convenient, and we keep adopting it here. However, in this section we critically review the approximations used in Refs. <cit.> and propose a new way of constructing and approximating the kernel from which the RC solution is computed. Our approach has various advantages, from purely practical ones related to the numerical implementation to more serious ones related to the physical nature of the solution and its α_s dependence. §.§ The choice of the kernel The core of the small-x resummation of the largest eigenvalue γ_+ is encoded in the duality between the DGLAP and BFKL equations. Imposing that the corresponding eigenvector PDF is a solution to both equations requires the duality relationχ_ DL(γ_ DL(N,α_s), α_s) = N,where χ_ DL(M,α_s) is the BFKL kernel and γ_ DL(N,α_s) the DGLAP anomalous dimension. The knowledge of the BFKL kernel at N^kLO provides by duality all the N^kLL contributions in the anomalous dimension, and vice versa. The name DL comes from the fact that both kernels are supposed to contain their fixed-order part at N^kLO, thus implying that they also contain (by duality with each other) all N^kLL contributions. Therefore, dual DL kernels obtained with fixed N^kLO input in both, usually denoted DL-N^kLO, are matched results of the form N^kLO+N^kLL, and so they are both double (next-to-^k) leading order and log. The actual DL kernel and anomalous dimension further contain additional ingredients (from symmetrization and momentum conservation) which are required to make the result perturbatively stable.The duality relation Eq. (<ref>) assumes that the strong coupling α_s does not run, namely that it is Q-independent. When the running of α_s is taken into account, the duality equation receives additional corrections. If these corrections are included perturbatively, new singularities appear which make the result perturbatively unstable. For instance, at NLO+NLL one should include a purely NLL term of the form of a LL function of α_s/N times an overall factor α_s <cit.>γ_ss^ rc(N,α_s) = −α_s β_0 [χ_0”(M)χ_0(M)/(2χ_0'^2(M))]|_M=γ_s(α_s/N),where γ_s(α_s/N) is the dual of the LO BFKL kernel χ_0(M). The new singularity is obtained when χ_0'(M) in the denominator vanishes, i.e. at M=1/2 (a numerical illustration of the kernel and of its minimum is given below).
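For orientation, the LO kernel and the constants that will appear below can be checked numerically (our toy sketch; χ_0(M) = 2ψ(1)−ψ(M)−ψ(1−M) is quoted here in units of α_s C_A/π):

```python
# LO BFKL kernel chi_0(M) = 2*psi(1) - psi(M) - psi(1-M): minimum at M = 1/2
# with chi_0(1/2) = 4*log(2) and curvature chi_0''(1/2) = 28*zeta(3), i.e.
# the coefficients c_0 and kappa_0 used later (up to a factor C_A*alpha_s/pi).
import numpy as np
from scipy.special import digamma, polygamma, zeta

chi0  = lambda M: 2*digamma(1) - digamma(M) - digamma(1 - M)
chi0pp = lambda M: -polygamma(2, M) - polygamma(2, 1 - M)   # second derivative

print(chi0(0.5),   4*np.log(2))    # 2.7726  2.7726
print(chi0pp(0.5), 28*zeta(3))     # 33.662  33.662
```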
Higher-order corrections will have larger powers of χ_0'(M) in the denominator, leading to a perturbatively unstable singularity for N such that γ_s(α_s/N)=1/2, i.e. N = α_s χ_0(1/2). This instability goes away if the RC corrections are included to all orders in the running-coupling parameter β_0 (i.e., at NLO+NLL one should include a term of the form of a LL function of α_s/N times a function of α_s β_0 to all orders), and the various singularities sum up to a simple pole, whose position is perturbatively stable. This is the main motivation for including the RC corrections to all orders.RC corrections are resummed by solving the BFKL equation with the DL kernel χ_ DL in which α_s is not fixed but runs. When the coupling runs, α_s(Q^2) becomes a differential operator α̂_s = α_s/(1−β_0 α_s d/dM) in Mellin space, with α_s=α_s(Q_0^2), Q_0 being a fixed scale, and where we have assumed 1-loop running. In principle, there can be α_s's computed at different scales in the kernel, but one can always rewrite them as α_s(Q^2) by evolving to that scale and expanding the relevant evolution factors. In this way, the α̂_s operators are placed to the left of all the M-dependent terms of the kernel, and act on everything to the right. This ordering is the one chosen by ABF to derive their solution of the RC equation. Two observations are now in order.*While at DL-LO all the running-coupling evolution factors are higher order, and α_s can be set equal to α_s(Q^2) in all terms without explicitly modifying the kernel, at DL-NLO changing the argument of each α_s to α_s(Q^2) in all LO contributions produces terms which are formally NLO and have to be included in the kernel. Because of the symmetry properties of the BFKL ladder, the DL-NLO kernel (see Eq. (<ref>) later) does not correspond, by construction, to a kernel in which all α_s's are α_s(Q^2). Therefore, at NLO, the form of the kernel in which all α_s's are α_s(Q^2) differs from that of the DL kernel by NLO terms. For this reason, a different kernel was used for DL and RC resummation at NLO in Refs. <cit.>.*Once all powers of α_s are computed at Q^2, which is equivalent to saying that all powers of α̂_s have been commuted to the left, the running-coupling equation cannot be solved directly, because it is in principle a differential equation of infinite order. Therefore, in ABF a linear approximation of the kernel, in which at most one power of α̂_s is retained and all others are frozen to α_s(Q_0^2), is used.The fact that the complicated all-order α̂_s dependence of the kernel is approximated with a linear one may seem too crude. However, the goal of the RC resummation is to resum to all orders a class of terms, behaving as powers of β_0 times a LL function of α_s/N, which originates from 1-loop running at lowest order. The NLO contribution to the BFKL kernel would produce corrections which are of order α_s (α_s β_0)^n (α_s/N)^k for all n,k, and therefore beyond the formal accuracy we aim at. This shows that the linear approximation suggested by ABF is in fact sufficient for the scope of this resummation.In fact, this argument also suggests that using a NLO kernel for RC resummation which differs from the DL one is unnecessary. Indeed, the ingredients which determine the leading RC corrections to all orders are contained in the LO part of the kernel. Thus, correcting the kernel by NLO terms would change subleading RC contributions which we do not aim to resum.
Therefore, for the accuracy we are interested in, the NLO part of the RC kernel is immaterial, and there is no reason for using a different kernel for RC resummation and DL resummation.Using the same kernel for the DL and RC resummations, as we now suggest, has an important consequence: we no longer need the infamous function γ_ match introduced in Ref. <cit.> and used later in Ref. <cit.> to cure the mismatch of singularities between the DL part and the RC part of the result. This function was needed to effectively remove the square-root branch-cut of the DL solution, since the subtraction of the double-counting term between the DL and RC resummations does not cancel it, precisely because two different kernels were used. What we have realized only with this work is that the function γ_ match is not subleading as was originally claimed <cit.>, but rather it is NLL and therefore ruins the formal accuracy of the NLO+NLL result. We give more details about this function in Sect. <ref>.Having understood that the linear α̂_s approximation is a good approximation, and that as such the same BFKL kernel can be used for both the DL and RC resummations, in order to be able to solve the RC BFKL equation we further need to specify the functional form in M. The approximation adopted by ABF, which we followed in Ref. <cit.>, is a quadratic approximation around the minimum of the kernel. Indeed, the minimum encodes, by duality, the information on the leading singularity, and it is therefore sufficient to accurately describe the kernel and to perform the RC resummation. However, we argue that this quadratic approximation has subtle undesired properties which make it not ideal for our purposes. For instance, the α_s-expansion of the resulting anomalous dimension contains half-integer powers of α_s. This is a direct consequence of the fact that a polynomial kernel, such as this quadratic approximation, is non-physical. Thus, here we propose a different approximation, which is physically motivated and which leads to an expansion in integer powers of α_s. We discuss this new approximation, denoted collinear approximation, in the next section. §.§ Solution of the RC differential equation in the collinear approximation We are now going to derive the solution of the RC BFKL equation in the linear α̂_s approximation and collinear M approximation. The starting point is the on-shell BFKL kernel in symmetric variables <cit.>, which we denote simply χ(M,α_s), whose α̂_s dependence is approximated asχ(M,α̂_s) = χ(M,α_s) + (α̂_s−α_s) χ'(M,α_s) = χ̅(M,α_s) + α̂_s χ'(M,α_s),where the prime denotes a derivative with respect to α_s, so that χ̅ ≡ χ − α_s χ'. It is important to observe that this approximation includes an O(α_s^0) term, χ̅, which is not physical and not present in the original kernel, which is of lowest order O(α_s). For this reason the kernel Eq. (<ref>) does not go to zero as α̂_s→0. One could therefore consider another linear approximation,χ(M,α̂_s) = α̂_s χ(M,α_s)/α_s,which does go to zero as α̂_s→0, but does not reproduce the exact derivative in α̂_s=α_s. In fact, both approximations are equally valid for our purposes, as they would be identical (and exact) in the case of a LO kernel χ(M,α_s)=α_s χ_0(M). In the following, we will use the first approximation, Eq. (<ref>), to derive our results, which can then be easily translated into the second approximation, Eq. (<ref>), by simply letting χ̅=0, χ' = χ/α_s. We stress that this simple translation rule could not be applied to the original ABF solution with a quadratic kernel, as the limit χ̅→0 is not trivial in that case.The (homogeneous) RC BFKL equation with kernel Eq.
(<ref>), from which the resummed anomalous dimension can be derived, is given by <cit.>[N − χ̅(M,α_s)] f(N,M) = χ'(M,α_s) α̂_s f(N,M),where f(N,M) is the double Mellin transform of the eigenvector PDF. Assuming 1-loop running, and taking the logarithmic derivative of the solution, we arrive at the anomalous dimensionγ_ rc(N,α_s) = d/dt log ∫_M_1−i∞^M_1+i∞ dM/(2π i) e^Mt exp{∫_M_0^M dM' 1/(β_0 α_s) [1 − α_s χ'(M',α_s)/(N − χ̅(M',α_s))]} |_t=0,where M_0 and M_1 are free parameters which must lie in the physical region 0<M<1 (they can be conveniently chosen to be equal to each other, and equal to the position of the minimum). In order to compute the integrals analytically, we need to specify the form of the kernels χ̅ and χ'. In the ABF construction, a quadratic approximation around the minimum of the actual kernel was considered,χ(M,α_s) = c(α_s) + κ(α_s)/2 (M−M_ min(α_s))^2 + O((M−M_ min)^3),where M_ min differs in general from 1/2 by terms of O(α_s). The polynomial form of the quadratic kernel is non-physical, as for instance the inverse Mellin transform of the n-th power of M corresponds to the n-th derivative of a δ function in momentum space. A better approximation, which we propose here, is inspired by a collinear plus anti-collinear approximation of the kernel <cit.>, generalized to account for a minimum which is not at 1/2:χ_ coll(M,α_s) = A(α_s) [1/M + 1/(2M_ min(α_s)−M)] + B(α_s).Expanding around its minimum M=M_ min we findχ_ coll(M,α_s) = B + 2A/M_ min + (2A/M_ min^3) (M−M_ min)^2 + O((M−M_ min)^3),which leads to the identificationsA = M_ min^3 κ/4,    B = c − M_ min^2 κ/2,such that the collinear kernel incorporates exactly the same information as the quadratic kernel (a symbolic cross-check is sketched below). Therefore, from the point of view of the accuracy of the approximation, the new collinear kernel is as good as the old quadratic one. However, as its form resembles the leading collinear and anti-collinear poles of the actual kernel, it performs better than the quadratic one, and leads to a solution with better features, as we shall now see. To compute the solution of the BFKL equation we need to specify the α̂_s dependence of the collinear kernel Eq. (<ref>). For A(α_s) and B(α_s), we use the very same linear decomposition Eq. (<ref>), while the position of the minimum M_ min(α_s) will not be considered as an operator.[Note that this assumption is less crude than the approach of previous works, where M_ min was simply approximated to be 1/2 in the RC equation.] The integrals in Eq. (<ref>) can be computed easily by noticing that the functional form of the integrand is identical to that of the quadratic kernel used by ABF, for which the solution is already known. Specifically, for the quadratic kernel, the kernel-dependent part of the integrand of Eq. (<ref>) is given byχ'(M,α_s)/(N − χ̅(M,α_s)) = [c' + (κ'/2)(M−M_ min)^2] / [N − c̅ − (κ̅/2)(M−M_ min)^2],while for the collinear kernel we findχ'(M,α_s)/(N − χ̅(M,α_s)) = [2M_ min A' + B' M(2M_ min−M)] / [(N−B̅) M(2M_ min−M) − 2M_ min A̅]= [c' − (B'/M_ min^2)(M−M_ min)^2] / [N − c̅ − ((N−B̅)/M_ min^2)(M−M_ min)^2],having used in the last step Eq. (<ref>).
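The identifications can be verified with a few lines of sympy (our own check):

```python
# Verify that A = kappa*Mmin**3/4, B = c - kappa*Mmin**2/2 make the collinear
# kernel match the quadratic one up to (and including) order (M - Mmin)^2.
import sympy as sp

M, Mmin, c, kappa = sp.symbols('M M_min c kappa', positive=True)
A = kappa*Mmin**3/4
B = c - kappa*Mmin**2/2
chi_coll = A*(1/M + 1/(2*Mmin - M)) + B

diff = sp.series(chi_coll, M, Mmin, 3).removeO() - (c + kappa/2*(M - Mmin)**2)
print(sp.simplify(diff))   # -> 0
```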
By direct comparison, we find the translation rulesκ' → −2B'/M_ min^2 = κ' − 2c'/M_ min^2,    κ̅ → 2(N−B̅)/M_ min^2 = κ̅ + 2(N−c̅)/M_ min^2.Hence, the final solution is given by <cit.>[In order to account for a generalized position of the minimum, we have recomputed the solution analytically, thus providing a useful cross-check of the result presented in the literature.]γ_ rc(N,α_s) = M_ min + β_0 α_s [z k_ν'(z)/k_ν(z) − 1],where k_ν(z) is a Bateman function, with1/α̅_s = 1/α_s + (κ'−2c'/M_ min^2)/(κ̅+2(N−c̅)/M_ min^2),z = 1/(β_0 α̅_s) √((N−c̅)/(κ̅/2 + (N−c̅)/M_ min^2)),ν = [c'/(N−c̅) + (κ'−2c'/M_ min^2)/(κ̅+2(N−c̅)/M_ min^2)] α̅_s z.We immediately observe that the limit c̅,κ̅→0 of these expressions is finite and trivial, so the solution in the approximation Eq. (<ref>) is a trivial limit of this solution. This is in contrast with the analogous solution with a quadratic kernel, whose χ̅→0 limit is not trivial and leads to a solution in terms of Airy functions. This represents a first advantage of using the collinear kernel with respect to the quadratic one.Eq. (<ref>) with Eq. (<ref>) represents a new result with respect to the old “Bateman” RC solution of Refs. <cit.>, and it generalizes it to the case in which the minimum is not at 1/2.In order to study the properties of this result, we start by considering the saddle-point expansion of Eq. (<ref>), which is equivalent to an expansion of Eq. (<ref>) in powers of β_0. This expansion is also needed to identify the proper double counting with the DL part. We findγ_ rc(N,α_s) = M_ min − √((N−c)/(κ/2 + (N−c)/M_ min^2)) − β_0 α_s + (1/4) β_0 α_s^2 [3(κ'−2c'/M_ min^2)/(κ+2(N−c)/M_ min^2) − c'/(N−c)]+ β_0^2 α_s^3 √((κ/2 + (N−c)/M_ min^2)/(N−c)) · [c'κ−κ'(N−c)] / (32(N−c)^2 [2(N−c)/M_ min^2+κ]^2)×[16c^2/M_ min^2 + 8N(κ+2N/M_ min^2) − c(16 c'/M_ min^2+8κ−3κ'+32N/M_ min^2) + α_s(5c'κ+16c'N/M_ min^2−3κ'N)]+ O(β_0^3).This result shows a number of interesting features, especially when compared with the analogous expansion in the case of the quadratic kernel <cit.>. First, we note that at large N all terms go to zero except for the running-coupling correction −β_0 α_s, which is finite. This is in agreement with the fact that the starting kernel has a pole in M=0, which by fixed-coupling duality leads to an anomalous dimension that goes to zero at large N. Note that this is in contrast with the case of the quadratic kernel, which diverges at large N as −√(2N/κ), due to the absence in the first square root of the term +(N−c)/M_ min^2 in the denominator, in agreement with it being derived from a BFKL kernel quadratic in M. This term is crucial for another reason, as it makes the denominator O(α_s^0), while it would be O(α_s) if the denominator were just κ/2, as happens with the quadratic kernel. Having a denominator of O(α_s) produces an α_s expansion of this result which contains half-integer powers of α_s.Instead, the α_s expansion of Eq. (<ref>), and hence of Eq. (<ref>), is perfectly acceptable, with only integer powers of α_s. These two differences represent two additional important benefits of using the collinear kernel rather than the quadratic one. §.§ Construction of matched results We recall that the solution Eq. (<ref>), having been derived with an approximate M dependence, cannot be regarded as the full solution. Rather, it represents the all-order resummation of the β_0 terms which must be added to the DL result, after subtracting those contributions which are already included (and not approximated) in the DL result.
In the following we will thus focus on the combination of our RC resummed anomalous dimension with the DL one, also providing the α_s expansions of the RC contributions, which will be needed in Sect. <ref> for matching (N)LL resummation to (N)NLO.When matching the RC resummation to the DL-LO result, the first three terms of the singular expansion Eq. (<ref>) have to be subtracted (the first two because they are LL, and the third because it is of O(α_s), and hence already included in the DL-LO). After the subtraction we have the expansionΔ_DL-LO γ_ rc(N,α_s) ≡ γ_ rc(N,α_s) − [M_ min − √((N−c)/(κ/2 + (N−c)/M_ min^2)) − β_0 α_s] = β_0 α_s^2 (3κ_0/32 − c_0)/N + O(α_s^3),where κ_0 and c_0 are the derivatives of κ and c computed at α_s=0, and are given byc_0 = (C_A/π) 4log2,    κ_0 = (C_A/π) 28ζ_3.The careful Reader might wonder what the consequences are of starting with a BFKL kernel in symmetric variables. Indeed, when combined with the DL result, the RC result must be translated to DIS (asymmetric) variables. This amounts to adding N/2 to the RC anomalous dimension. However, such a term is automatically subtracted in the construction of the RC contribution to the DL result, Δ_DL-LO γ_ rc(N,α_s) <cit.>. Therefore, the latter object is insensitive to the change of variables. For NLL resummation the RC result must be matched to DL-NLO, so we further need to subtract the O(α_s^2) contribution of γ_ rc. However, we observe that at O(β_0) the expansion of γ_ rc, Eq. (<ref>), contains terms which are formally NLL, and specifically given by α_s β_0 times a LL function. These terms should already be included in the DL-NLO result. In fact, contributions of this form originate from running-coupling corrections to the duality relation <cit.>, Eq. (<ref>), and are not automatically generated by fixed-coupling duality in the DL-NLO result. Rather, they have to be supplied to the DL-NLO result as an additive correction <cit.>,Δγ_ss^ rc(N,α_s) = −α_s β_0 [χ_0”(M)χ_0(M)/(2χ_0'^2(M)) − 1]|_M=γ_s(α_s/N) = O(α_s^4),where χ_0(M) is the LO BFKL kernel and γ_s(α_s/N) its dual. The −1 in the square brackets represents the subtraction of the double counting with the fixed-order part of the DL-NLO; after the subtraction, this function starts at O(α_s^4).Since the kernel used in the RC resummation is only approximate, the function γ_ rc does not correctly predict all the NLL contributions of Eq. (<ref>). Therefore, Eq. (<ref>) must still be added to the DL-NLO result, and the O(β_0) part of γ_ rc has to be considered as a double-counting term with respect to Δγ_ss^ rc, and hence subtracted. Thus, for RC resummation matched to DL-NLO, we further need to subtract the fourth term of Eq. (<ref>),Δ_DL-NLO γ_ rc(N,α_s) ≡ γ_ rc(N,α_s) − [M_ min − √((N−c)/(κ/2 + (N−c)/M_ min^2)) − β_0 α_s + (1/4) β_0 α_s^2 (3(κ'−2c'/M_ min^2)/(κ+2(N−c)/M_ min^2) − c'/(N−c))] = β_0^2 α_s^3 κ_0/(16N) + O(α_s^4),where we have also written its α_s expansion in the last line. We observe that, formally, only the α_s β_0 times a LL function is really doubly counted, so in principle one could expand the last line of Eq. (<ref>) at NLL, and remove the NNLL terms. However, we think that it is most consistent to also remove these spurious higher logarithmic order contributions. Interestingly, the expansions of both Eq. (<ref>) and Eq. (<ref>) at lowest order only involve the lowest-order derivatives of c and κ, Eq. (<ref>), which in turn are determined from just the LO BFKL kernel, and hence do not depend on the actual construction of the DL kernel. Also, they are the same irrespective of which of the two approximate α̂_s dependences, Eq. (<ref>) or Eq. (<ref>), is used.§.§ Singularity mismatch The functions Δ_DL-(N)LO γ_ rc(N,α_s), Eqs.
(<ref>) and (<ref>), contain the resummation of the β_0 contributions which should be added to γ_DL-LO and γ_DL-NLO, respectively. The square-root term in the subtractions of Eqs. (<ref>) and (<ref>) contains the LL singularity which has to be removed from the DL result, and replaced with the pole singularity contained in the γ_ rc term. This cancellation is exact if the parameters of the minimum, c(α_s) and κ(α_s), are computed from the same DL kernel used for the fixed-coupling duality which defines γ_DL-(N)LO. At LO+LL, the kernel is the one obtained by putting on-shell Eq. (<ref>), so the singularity automatically cancels.[In fact, in Ref. <cit.> two different expressions of χ_s were used for computing the DL kernel and for the kernel used in the RC resummation. Specifically, in the second case we used the dual of the exact LO anomalous dimension, which however could not be used for the DL kernel, as the exact LO anomalous dimension γ_+ has a square-root branch-cut, due to the way the eigenvalue of the singlet anomalous dimension matrix is computed, which would produce a spurious oscillating behaviour. For this reason, for the DL we used an approximate LO anomalous dimension, thereby creating a mismatch in the singularities even at LO+LL. Here, thanks to the approximation discussed in App. <ref>, we use exactly the same kernel.] In Ref. <cit.> an intermediate result, denoted LO+LL^', was introduced to perform the resummation of the quark entries of the anomalous dimension matrix and of the coefficient functions. This anomalous dimension is formally LO+LL, but uses the RC parameters of the NLO kernel, such that the position of the leading singularity is the same as that of the NLO+NLL result. For this result, the cancellation of the branch-cut cannot take place. In order to cure the mismatch between the singularities of the DL-LO result and of the RC result with NLO parameters we need a matching function γ_ match. Its form must beγ_ match^ LO+LL'(N,α_s) = γ_ m(N, c^ NLO(α_s), κ^ NLO(α_s), M_ min^ NLO(α_s)) − γ_ m(N, c^ LO(α_s), κ^ LO(α_s), 1/2),where the function γ_ m must reproduce the singular behaviour of the RC and DL parts, respectively, and the parameters c^ (N)LO, κ^ (N)LO and M_ min^ (N)LO are those obtained from the (N)LO kernel. For the case of the collinear kernel, γ_ m may be simply given by the first two terms of Eq. (<ref>).However, we have some latitude in the definition of the matching function as far as subleading corrections are concerned. We can exploit this freedom to define a matching which numerically has a very moderate effect. We find that the choiceγ_ m(N,c,κ,M_ min) = M_ min − √((N−c)/(κ/2+(N−c)/M_ min^2)) − M_ min^3 κ/(4N)has the desired properties. We note that, in contrast with the case of the quadratic kernel <cit.>, here we do not need any further contribution, as this function already vanishes as 1/N at large N. Since the parameters in Eq. (<ref>) start differing at O(α_s^2), the function γ_ match^ LO+LL' is formally NLL. This is not a problem here, as the formal accuracy of the LO+LL^' result is just LL. The expansion of the matching function Eq. (<ref>) in powers of α_s isγ_ match^ LO+LL'(N,α_s) = O(α_s^3).Hence, the choice of subleading terms in Eq. (<ref>) has the additional benefit that we do not need to take the matching function into account when expanding the LO+LL^' result to O(α_s^2). At NLO+NLL, in Refs.
<cit.> the kernel for RC resummation was constructed with a different α_s ordering with respect to the DL one, which had different minima and thus created a singularity mismatch between the DL and RC anomalous dimensions, making the introduction of a matching function necessary to cancel these singularities. We have already argued in Sect. <ref> that using different kernels was not necessary, and we can actually use the same DL kernel also for RC resummation, thereby ensuring automatic cancellation of the square-root branch-cut. Therefore, in this work we no longer need to patch the NLO+NLL result with a matching function. It is important to stress that, had we used two kernels for the DL and RC parts of the NLO+NLL resummation differing by 𝒪(α_s^2) terms, the analogous matching function would have necessarily been NLL (as in the LO+LL^' case), thus contaminating the result, which could no longer be claimed to be NLL. This is indeed the case for the one used in Refs. <cit.>. We have verified that it is not possible to modify the function γ_m, Eq. (<ref>), to make the function γ_match NNLL without introducing new (uncancelled) singularities. To prove this statement, we consider a generalization of Eq. (<ref>), γ_m(N,c,κ,M_min) = M_min − √((N−c)/(κ/2+(N−c)/M_min^2)) + η(N,c,κ,M_min), where η(N,c,κ,M_min) is a function to be determined, with the requirement that it must not introduce further leading singularities. Expanding Eq. (<ref>) to NLL, i.e. expanding in powers of α_s up to 𝒪(α_s) at fixed α_s/N, we find γ_m(N,c,κ,M_min) = 1/2 − √((N/α_s − c_0)/(κ_0/2 + 4(N/α_s − c_0))) + η(N/α_s, c_0, κ_0, 1/2) + α_s [ (m_1 + c_1κ_0 − c_0κ_1 − 32c_0^2 m_1 + κ_1 N/α_s + 64 c_0 m_1 N/α_s − 32 m_1 (N/α_s)^2) / (√2 √(N/α_s − c_0) (κ_0 + 8(N/α_s − c_0))^{3/2}) + (c_1 ∂_2 + κ_1 ∂_3 + m_1 ∂_4) η(N/α_s, c_0, κ_0, 1/2) ] + NNLL, where the index in the derivatives indicates with respect to which argument the derivative is computed, and m_1 is the 𝒪(α_s) contribution to M_min. The expansion of the square-root term at NLL depends on the NLO coefficients c_1, κ_1 and m_1. If the two kernels used for the RC and DL resummations differ at 𝒪(α_s^2), then these coefficients differ, and when building up the function γ_match these NLL terms do not cancel between the two γ_m's. Thus, the only way to make the function γ_match a purely NNLL object is to choose the function η such that the NLL expansion of γ_m vanishes. However, the expansion of the square-root term is singular at NLL in N = c_0 α_s, so it is clear that in order to make γ_match vanish at NLL the derivatives of the function η, and thus the function itself, must be singular in N=c. But if this is the case, then the function γ_m will contain additional singularities with respect to those which it is supposed to cancel. This violates our assumptions. We must thus conclude that it is not possible to cancel a NLO singularity mismatch and at the same time preserve NLL accuracy. This conclusion remains true also for the matching function used with the quadratic kernel. Therefore, the only way not to spoil the formal NLL accuracy of the NLO+NLL result is to use exactly the same kernel for the DL and the RC parts of the result: this is the main motivation for this choice, adopted in this work for the first time. In the NLO+NLL result, there is a different singularity mismatch, between the function Δγ_ss^rc, Eq. (<ref>), and Δ_DL-NLO γ_rc, Eq. (<ref>). The latter explicitly exhibits a pole in N=c, which differs from the analogous singularity in the former, Δγ_ss^rc(N,α_s) ∼ −(1/4) β_0 α_s^2 c_0/(N − c_0 α_s), which sits in N = c_0 α_s, as one can easily verify from the definition.
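This behaviour can also be checked numerically. The following minimal Python sketch (illustrative only, not part of our codes) assumes the standard form of the LO BFKL kernel, χ_0(M) = (C_A/π)[2ψ(1) − ψ(M) − ψ(1−M)], computes its dual γ_s by root finding, and verifies that the square bracket in the definition of Δγ_ss^rc, Eq. (<ref>), diverges as c_0/[4(N/α_s − c_0)] when N/α_s → c_0, which is precisely the pole quoted above:

    import numpy as np
    from scipy.special import polygamma
    from scipy.optimize import brentq

    CA, pi = 3.0, np.pi
    chi0   = lambda M: CA/pi*(2*polygamma(0, 1) - polygamma(0, M) - polygamma(0, 1 - M))
    chi0p  = lambda M: CA/pi*(-polygamma(1, M) + polygamma(1, 1 - M))
    chi0pp = lambda M: CA/pi*(-polygamma(2, M) - polygamma(2, 1 - M))

    c0 = chi0(0.5)                                # minimum of chi0: c0 = 4 log2 CA/pi
    for r in c0*np.array([1.01, 1.001, 1.0001]):  # r = N/alpha_s, approaching c0
        # gamma_s = dual of chi0: solution of chi0(M) = N/alpha_s with 0 < M < 1/2
        M = brentq(lambda M: chi0(M) - r, 1e-6, 0.5 - 1e-10)
        bracket = chi0pp(M)*chi0(M)/(2*chi0p(M)**2) - 1
        print(bracket*(r - c0)/c0)                # -> 1/4 as r -> c0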
The singularities would be identical if the parameters of the RC kernel were those of the LO BFKL kernel, and would cancel in the sum. However, due to the higher orders contained in the parameters used to construct the RC kernel, the position of the singularity is shifted and the cancellation no longer takes place. We can solve the problem by introducing a new matching function to be added to the final result, which effectively replaces the singularity of Eq. (<ref>) with Eq. (<ref>). Since the singular contribution is a NLL term, this matching function is formally NNLL, and therefore acceptable. However, as in the LO+LL^' case, it is convenient to subtract additional higher orders, such that the effect of this function is as moderate as possible. Our choice is γ^ss_match(N,α_s) = (1/4) β_0 α_s^2 [c_0/(N − c_0 α_s) − c'/(N−c) + (c'−c_0)/N] = 𝒪(α_s^4), where the last term (which is formally NNLL) ensures the cancellation of a number of subleading contributions from the difference of the first two terms, which could potentially spoil the accuracy of the result. Additionally, because of the last term, the function γ^ss_match starts at 𝒪(α_s^4) and therefore does not contribute to the α_s expansion of the NLO+NLL result up to 𝒪(α_s^3). Note that this singularity mismatch was present also in the original works using a quadratic kernel <cit.>; the problem there was solved by replacing by hand the singularity in Δ_DL-NLO γ_rc with that of Δγ_ss^rc, which effectively corresponds to using the same matching function Eq. (<ref>) but without the last term.§ RESUMMED DGLAP EVOLUTION MATCHED TO NNLO As already discussed in Sect. <ref>, the ABF construction of the resummation of the anomalous dimension γ_+ relies on a double-leading (DL) part, which is based on the duality between the DGLAP and BFKL kernels (at the core of the resummation), and on a running-coupling (RC) part, which includes a class of subleading but very important effects which change the nature of the small-x singularity. The DL resummation is naturally performed at LO+LL or at NLO+NLL, which are obtained by combining together the (N)LO DGLAP anomalous dimension and the (N)LO BFKL kernel. Therefore, previous results on small-x resummation have always been presented at these orders. As already mentioned in the introduction, it is of great importance to be able to match the resummed result to fixed NNLO in order to obtain state-of-the-art theoretical predictions. Matching the resummation to NNLO is in principle straightforward: starting from the NLO+NLL resummed result, one just needs to subtract its α_s expansion up to 𝒪(α_s^3), and replace it with the exact NNLO expression. While subtracting the NLO from the NLO+NLL is trivial, further subtracting the 𝒪(α_s^3) term is not, due to the fact that the DL resummation is expressed in terms of implicit equations, which are usually solved numerically. One could think of different alternatives. One possibility is to expand the resummed result numerically; this, however, does not seem to be a reasonable option, as the numerical solution of the implicit equations is already challenging and slow, and one cannot hope in general to obtain sufficient precision in a reasonable amount of time from numerical techniques (unless further numerical developments are made[Some developments with respect to our previous implementations have been performed, which make the code faster and more reliable. See App. <ref> for further details.]).
A second option is to construct a DL result starting from NNLO DGLAP and NLO BFKL, so that the result would be naturally NNLO+NLL. This option is itself non-trivial, as it requires the computation of a new class of double-counting terms between the two kernels, and has the undesirable disadvantage that the resummed result one obtains would differ by subleading NNLL terms from the original NLO+NLL.In this work we have opted for a third, and perhaps more natural, option, namely expanding the resummed result analytically. Despite the rather technical nature of this computation, we find it illustrative to give its details in the following Sect. <ref>. Indeed, for instance, this exercise allowed us to find a small mistake in the original ABF construction of the DL part <cit.>, which we also have inherited in our previous work <cit.>, and which we correct here. Then, in Sect. <ref>, we present all the final expressions for the resummed splitting functions, providing a detailed explanation of the implementation of small-x resummation that constitutes the backbone of , version .§.§ Expansion of the Double Leading anomalous dimension In the ABF construction, the DL resummed anomalous dimension γ_ DL, Eq. (<ref>), is obtained from an implicit equation of the form χ_Σγ_ DL(N,),N, = N,where the function χ_ΣM,N, is a so-called off-shell BFKL kernel <cit.>. The DL anomalous dimension obtained through Eq. (<ref>) assumes fixed coupling , and it thus receives a correction, Eq. (<ref>), due to running coupling effects. This correction starts at (^4) and it is therefore of no interest for the expansion of the DL result to (^3). In the following, we explain how to construct a perturbative expansion of γ_ DL(N,) defined by the implicit equation (<ref>), focussing first on the simpler LL case, and moving next to the NLL case.§.§.§ Expansion of the LL resummed result We start from LL resummation for simplicity. We seek its expansion to (^2), which would be needed to match LL resummation to NLO. The off-shell kernel at LO, needed for LL resummation, is given by <cit.>χ_Σ^ LO(M,N,) = χ_s/M + χ_s/1-M+N + χ̃_0(M,N) + χ_ mom^ LO(N,)where the function χ_s(/M) is defined as the dual to the LO anomalous dimension γ_0, γ_0χ_s/M = M ⇔χ_s1/γ_0(N)=N,the function χ̃_0(M,N) = C_A/π[ψ(1) + ψ(1+N) - ψ(1+M) - ψ(2-M+N)]is the off-shell extension of the LO BFKL kernel with double counting with χ_s subtracted, and the function χ_ mom(N,) = c_ mom() f_ mom(N),f_ mom(N) = 4N/(N+1)^2 restores momentum conservation, i.e. the constraint γ_ DL(N=1,)=0 which translates into χ_Σ(0,1,)=1, through a suitable coefficient c_ mom. Because of the definition Eq. (<ref>), χ_s(/M) in M=0 equals 1, so we havec_ mom^ LO() = -χ_s/2 - χ̃_0(0,1).Note that the LO anomalous dimension γ_0(N) that we use for the definition of χ_s does not necessarily need to be the exact LO anomalous dimension. In fact, it can be replaced with an approximate expression with the same qualitative features and which preserves its small-x behaviour. This was already done in both Refs. <cit.>, in slightly different ways, to cure a problem due to a branch-cut present in the n_f≠0 case. Here, we adopt another, simpler, approximation, which circumvents the same problem and also solves another issue. Additionally, it allows us to exploit duality relations analytically, which is a great advantage from the numerical implementation point of view. Further details are given in App. 
<ref>.In order to obtain the coefficient of the -expansion of the DL anomalous dimension, we substitute the formal expansion γ_DL-LO(N,) = γ_0(N)+^2γ̃_1(N)+… into Eq. (<ref>) with χ_Σ given in Eq. (<ref>), and expand the equation in powers of . We stress that we are implicitly assuming that γ_0(N) in Eq. (<ref>) is the same used in the definition of χ_s, Eq. (<ref>); this has to be correct because of the way χ_Σ^ LO Eq. (<ref>) is constructed, as we shall verify shortly. On the other hand, γ̃_1 is a prediction of the resummation (hence the tilde). The tricky part for performing the expansion is the first (collinear) χ_s in Eq. (<ref>), as its argument /M is of (^0). For this term we then compute the expansion asχ_s/γ_DL-LO(N,) = χ_s1/γ_0 1-γ̃_1/γ_0+(^2)= χ_s1/γ_0 - γ̃_1/γ_0^2χ_s'1/γ_0 + (^2)= N + γ̃_1/γ_0' + (^2),where in the last equality we have used the definition Eq. (<ref>), assuming that γ_0 is the one appearing in Eq. (<ref>), and the formula for the derivative χ_s'1/γ_0 = -γ_0^2/γ_0',which can be obtained by deriving both sides of the first of Eq. (<ref>) with respect to /M. Here γ_0' denotes a derivative with respect to N. All the other terms can be simply expanded in powers of . Up to the first non-trivial order we getN = N +γ̃_1/γ_0' + χ_01/1+N + χ̃_0(0,N) -χ_01/2 + χ̃_0(0,1) f_ mom(N) + (^2),where χ_01=χ_s'(0)=C_A/π, from which it immediately follows γ̃_1(N) = -γ_0'(N)χ_01/1+N + χ̃_0(0,N)-χ_01/2 + χ̃_0(0,1)f_ mom(N) .Note that the (^0) term cancels automatically, which confirms that indeed the LO part of γ_DL-LO is given by the same γ_0 appearing in Eq. (<ref>). Now, it happens that, due to the explicit form of χ̃_0(M,N), Eq. (<ref>), χ̃_0(0,N) = C_A/π ψ(1+N)-ψ(2+N)= -C_A/π1/1+N,and hence we find γ̃_1(N) = 0.This might come as a surprise, however it does not. Indeed, the LL pole of the exact NLO γ_1 is accidentally zero, so the only part which is supposed to be predicted correctly by this kernel had to be zero. In principle there could be non-zero subleading corrections, which in practice are absent (at DL level — RC contributions do produce extra terms, see Eq. (<ref>)). If we wish to match LL resummation to NNLO, we should expand to one extra order, but we are not interested in doing so, thus we move to the next logarithmic order. §.§.§ Expansion of the NLL resummed result For NLL resummation we need the NLO off-shell kernelχ_Σ^ NLO(M,N,)= χ_s, NLO(M,) + χ_s, NLO(1-M+N,) + χ̃_0(M,N)+ ^2 χ̃_1(M,N) + ^2χ_1^ corr(M,N) + χ_ mom^ NLO(N,).Here, χ_s, NLO(M,) is the generalization of χ_s, constructed as the exact dual of the NLO anomalous dimension[As before, we use an approximate NLO anomalous dimension, see App. <ref>.]γ_0(N)+^2γ_1(N), which is an input at this order. This kernel satisfies the formal expansion χ_s, NLO(M,) = ∑_j=0^∞^j ∑_k=1^∞χ_jk/M^k;the j=0 term corresponds to χ_s(/M), Eq. (<ref>). The kernel χ̃_1(M,N) was given in Eqs. (A.23)–(A.29) of Ref. <cit.>, and we do not report it here. The extra term χ_1^ corr(M,N) takes into account running coupling corrections; its correct expression is χ_1^ corr(M,N) = β_0 -C_A/πψ_1(2-M+N) - 1/(1-M+N)^2χ_s'/1-M+N + χ_0(M,N)/M-C_A/π M^2 .This equation corrects Eq. (A.18) of Ref. <cit.> (i.e. Eq. (6.19) of Ref. <cit.>), which did not contain the second term. In fact, the second term was not really necessary, as it is subleading, but then the argument of ψ_1 in the first term should be 1-M+N, as the 2 comes from the subtraction of double-counting with the second term. 
In practice, however, we have verified that neglecting the second term and correcting the argument of the first leads to a kernel which is unstable close to the anticollinear pole M=1, instability which is cured (resummed) by including the second term. We verified that the overall effect of this correction is mild, but not negligible. Finally, χ_ mom^ NLO(N,) restores momentum conservation, in the same form as Eq. (<ref>), withc_ mom^ NLO = -χ_s, NLO(2,) - χ̃_0(0,1) - ^2 χ̃_1(0,1) - ^2χ_1^ corr(0,1).Note that since χ_s, NLO(M,) is the exact dual of the NLO anomalous dimension, it equals 1 in M=0. Now we consider the expansion of the DL-NLO anomalous dimension γ_DL-NLO(N,) = γ_0(N)+^2γ_1(N)+^3γ̃_2(N) + …,where both γ_0(N) and γ_1(N) are assumed to be those used in the definition of χ_s, NLO (as before, this will be confirmed by the explicit computation), and γ̃_2(N) is what we aim to find. The expansion of χ_s, NLO is obtained by using the same technique used in Eq. (<ref>), and leads to χ_s, NLO(γ_DL-NLO(N,),) = N + ^2γ̃_2/γ_0' + (^3).Note that up to this order the expanded kernel χ_s(/M)+χ_ss(/M), corresponding to the first two terms in the j-sum of Eq. (<ref>) and originally used in ABF <cit.>, gives identical results. Substituting Eq. (<ref>) into Eq. (<ref>) with the NLO kernel Eq. (<ref>) and expanding in powers ofusing Eq. (<ref>) we find the following expressionγ̃_2(N) = -γ_0'(N)[ χ_11/1+N + χ_02/(1+N)^2 +χ̃_1(0,N) + χ_1^ corr(0,N)-χ_11/2 + χ_02/4 + χ̃_1(0,1) +χ_1^ corr(0,1)f_ mom(N) + C_A/π ψ_1(1+N)-ψ_1(1) γ_0(N) ].The coefficients of the expansion of χ_s, NLO are given by (see App. <ref>)χ_02 = -11C_A^2/12π^2 + n_f/6π^2(2C_F-C_A) χ_11 = -n_f/36π^223C_A-26C_F.Now, from the definition of χ̃_1 (see Ref. <cit.>) χ̃_1(M,N) = χ̃_1^ u(M,N) - χ̃_1^ u(0,N) + χ̃_1^ u(0,0),we immediately find χ̃_1(0,N) = χ̃_1^ u(0,0)which is N-independent, and thus also equal to the momentum conservation subtraction χ̃_1(0,1). Its value is (from Eq. (A.29) of Ref. <cit.>) χ̃_1(0,N) = 1/π^2 -74/27C_A^2+11/6C_A^2ζ_2+5/2C_A^2ζ_3 + n_f4/27C_A+7/27C_F-1/3C_Fζ_2 .On the other hand we have χ_1^ corr(0,N) = -β_0C_A/πζ_2,which is again N-independent. We can then rewrite Eq. (<ref>) asγ̃_2(N) = -γ_0'(N)[ ρ + χ_11/1+N + χ_02/(1+N)^2 -ρ + χ_11/2 + χ_02/4f_ mom(N) + C_A/π ψ_1(1+N)-ψ_1(1) γ_0(N) ]withρ = 1/π^2 -74/27C_A^2+11/12C_A^2ζ_2+5/2C_A^2ζ_3 + n_f4/27C_A+7/27C_F+1/6C_Aζ_2-1/3C_Fζ_2.This represents the final result for our expansion of the DL anomalous dimension.As a cross-check, we can now expand γ̃_2 about N=0. Given that γ_0(N) = C_A/π N + (N^0), γ_0'(N) = -C_A/π N^2 + (N^-1),we have that up to NLL the singular behaviour of γ̃_2 in N=0 is given by (using f_ mom(0)=0)γ̃_2(N)= C_A/π N^2 ρ+χ_11+χ_02-2ζ_3C_A^2/π^2 =C_A/π^3N^2C_A^2 (54 ζ_3 + 99 ζ_2 - 395) + n_f (C_A - 2 C_F) (18 ζ_2 - 71)/108,which indeed reproduces the correct NLL pole of the known three-loop anomalous dimension γ_2(N) <cit.>, while the LL 1/N^3 pole is accidentally zero. We stress that without the correction of the error in Eq. (<ref>) the constant in Eq. (<ref>) would change, and thus the NLL singularity of γ_2(N) would not be reproduced. §.§ Resummed splitting functions matched to NNLO In the previous sections we have computed the expansion of the DL result, and in Sect. <ref> we have presented a new running coupling resummation and provided itsexpansion. We are now ready to construct the final expressions for the resummed anomalous dimension γ_+, and write their expansions. 
With those, we can then also construct the resummed expressions of all the entries of the singlet anomalous dimension matrix in the physical basis <cit.>, which by Mellin inversion give the singlet splitting functions. §.§.§ Anomalous dimensions As a first step, we need to add the running coupling contribution to the DL result, with the proper matching functions to cure the singularity mismatches. Note that adding the RC resummed functions Δ_DL-(N)LOγ_ rc(N,) to the DL results violates momentum, which is further violated in the LO+LL^' by the matching function and in the NLO+NLL by Δγ_ss^ rc and its matching function. Momentum conservation can be restored by simply adding a function proportional to f_ mom(N), Eq. (<ref>). In summary, we have[Note that we are here using a notation for the resummed results, “res (N)LL^(')”, which differs from the one used in Ref. <cit.>, “(N)LO+(N)LL^(')”. The reason is that we will now use the latter name for the actual resummed results matched to any fixed order, while in these resummed results the fixed order is only approximate.] γ_+^ res LL (N,)= γ_DL-LO (N,) + Δ_DL-LOγ_ rc^LL(N,) - Δ_DL-LOγ_ rc^LL(1,) f_ mom(N),γ_+^ res LL'(N,)= γ_DL-LO (N,) + Δ_DL-LOγ_ rc^ NLL(N,) + γ_ match^ LO+LL' (N,)-Δ_DL-LOγ_ rc^ NLL(1,) + γ_ match^ LO+LL' (1,)f_ mom(N),γ_+^ res NLL(N,)= γ_DL-NLO(N,) + Δ_DL-NLOγ_ rc^ NLL(N,) + Δγ_ss^ rc(N,) + γ^ss_ match(N,)-Δ_DL-LOγ_ rc^ NLL(1,) + Δγ_ss^ rc(1,) + γ^ss_ match(1,)f_ mom(N),where the various functions have been introduced in Sect. <ref> and Sect. <ref>. Using the results in there, these expressions admit the followingexpansionsγ_+^ res LL(N,) = γ_0(N) + ^2 β_0 3/32κ_0-c_01/N - f_ mom(N) + (^3), γ_+^ res LL'(N,)= γ_0(N) + ^2 β_0 3/32κ_0-c_01/N - f_ mom(N) + (^3), γ_+^ res NLL(N,) = γ_0(N) + ^2 γ_1(N) + ^3 {β_0^2 κ_0/161/N - f_ mom(N) -γ_0'(N) [ ρ + χ_11/1+N + χ_02/(1+N)^2 -ρ + χ_11/2 + χ_02/4f_ mom(N) + C_A/π ψ_1(1+N)-ψ_1(1) γ_0(N) ] } + (^4),with coefficients defined in Eqs. (<ref>), (<ref>) and (<ref>).Note that the functions γ_0 and γ_1 are those entering the definition of χ_s and χ_s, NLO in the DL kernel, which are not the exact fixed-order results (see App. <ref>). Thus, it is convenient to introduce the pure resummed contributions, defined by the difference between the above expressions and their expansion up to a given order, e.g.Δ_nγ_+^ NLL(N,) = γ_+^ res NLL(N,) - ∑_k=1^n ^kγ_+^ res NLL(N,) _(^k),and similarly for the LL result. In this way, the resummed anomalous dimension matched to the (exact) fixed order is given by γ_+^N^nLO+N^kLL(N,) = γ_+^N^nLO(N,) + Δ_nγ_+^N^kLL(N,),where γ_+^N^nLO(N,) is the exact N^nLO anomalous dimension. In Ref. <cit.> we only considered the “natural” contributionsΔ_1γ_+^ LL^(')(N,) = γ_+^ res LL^(')(N,) - γ_0(N),Δ_2γ_+^ NLL(N,) = γ_+^ res NLL(N,) - γ_0(N) - ^2 γ_1(N).With the results of Eqs. (<ref>) we can now also computeΔ_2γ_+^ LL^(')(N,) = γ_+^ res LL^(')(N,) - γ_0(N) - ^2 β_0 3/32κ_0-c_01/N - f_ mom(N),Δ_3γ_+^ NLL(N,)= γ_+^ res NLL(N,) - γ_0(N) - ^2 γ_1(N)-^3 {β_0^2 κ_0/161/N - f_ mom(N) -γ_0'(N) [ ρ + χ_11/1+N + χ_02/(1+N)^2 -ρ + χ_11/2 + χ_02/4f_ mom(N) + C_A/π ψ_1(1+N)-ψ_1(1) γ_0(N) ] },which are the resummed contributions needed for NLO+LL and NNLO+NLL.The previous equations are the primary ingredients which allow us to match NLL resummation to NNLO. Having the expansion of the eigenvalue anomalous dimension γ_+, we can now construct the expansions for all the entries of the anomalous dimension matrix in the physical basis. 
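Before moving on, we note that the constant combination multiplying −γ_0'(N) in the α_s^3 term above can be cross-checked symbolically: in the small-N limit (where f_mom(0)=0 and ψ_1(1+N)−ψ_1(1) → −2ζ_3 N) it must reproduce the NLL pole of the three-loop anomalous dimension quoted in the previous subsection. A minimal sympy sketch of this check (illustrative only, with ρ, χ_11 and χ_02 as given above):

    import sympy as sp

    CA, CF, nf, N = sp.symbols('C_A C_F n_f N', positive=True)
    z2, z3, pi = sp.zeta(2), sp.zeta(3), sp.pi

    # coefficients entering the alpha_s^3 term, as given in the text
    rho   = (-sp.Rational(74,27)*CA**2 + sp.Rational(11,12)*CA**2*z2
             + sp.Rational(5,2)*CA**2*z3
             + nf*(sp.Rational(4,27)*CA + sp.Rational(7,27)*CF
                   + CA*z2/6 - CF*z2/3))/pi**2
    chi02 = -11*CA**2/(12*pi**2) + nf*(2*CF - CA)/(6*pi**2)
    chi11 = -nf*(23*CA - 26*CF)/(36*pi**2)

    # small-N limit of the bracket against the known NLL pole of gamma_2
    lhs = CA/(pi*N**2)*(rho + chi11 + chi02 - 2*z3*CA**2/pi**2)
    rhs = CA/(pi**3*N**2)*(CA**2*(54*z3 + 99*z2 - 395)
                           + nf*(CA - 2*CF)*(18*z2 - 71))/108
    print(sp.simplify(lhs - rhs))   # prints 0

The difference vanishes identically, confirming the consistency of the expansion coefficients.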
For achieving this, the secondary ingredient is the expansion of the resummed γ_qg entry of the evolution matrix. We refer the Reader to Ref. <cit.> for all the details on its resummation, and we report here its expansion in powers of , γ_qg^ NLL(N,) =h_0 + ^2 h_1 γ_0^ LL'(N) + ^3h_2 γ_0^ LL'(N)γ_0^ LL'(N) -β_0 + h_1 γ_1^ LL'(N)+ (^4),where h_0=n_f/3π, h_1=n_f/3π5/3 and h_2=n_f/3π14/9 are numerical coefficients (already introduced in Sect. <ref>), γ_1^ LL'(N) is the (^2) term of γ^ LO+LL'(N,), which can be read off Eq. (<ref>), and γ_0^ LL'(N) is given (up to a factor ) by Eq. (B.10) of Ref. <cit.> which we report here for convenience (with the notations of App. <ref>) γ_0^ LL'(N) = a_11/N + a_10/N+1.Note that in Ref. <cit.> two forms for γ_qg^ NLL, differing by subleading terms, were considered and used to estimate an uncertainty of the resummation. Up to the order of Eq. (<ref>), both expressions give identical results, the difference starting at (^4). Using Eq. (<ref>) it is possible to construct the resummed contributionsΔ_2γ_qg^ NLL(N,)= γ_qg^ NLL(N,)-h_0 - ^2 h_1 γ_0^ LL'(N),Δ_3γ_qg^ NLL(N,)= γ_qg^ NLL(N,)-h_0 - ^2 h_1 γ_0^ LL'(N) - ^3h_2 γ_0^ LL'(N)γ_0^ LL'(N) -β_0 + h_1 γ_1^ LL'(N) ,which are needed for NLO+NLL and NNLO+NLL resummations, respectively. The resummed contributions for other entries of the anomalous dimension matrix can be constructed in terms of Δ_nγ_+^ NLL and Δ_nγ_qg^ NLL, as described in Ref. <cit.>.§.§.§ Splitting functions From the resummed anomalous dimension we can obtain resummed contributions for the splitting function matrix by Mellin inversion. Additionally, in order to ensure a smooth matching onto the fixed-order at large x, a damping is applied.Furthermore, we enforce exact momentum conservation on our final results by requiring the first moments of P_gg+P_qg and P_gq+P_qq to vanish. The final expressions are given byP_ij^N^nLO+N^kLL(x,) = P_ij^N^nLO(x,) + Δ_n P_ij^N^kLL(x,)withΔ_n P_gg^ NLL(x,)= (1-x)^2 1-√(x)^4Δ_n P_+^ NLL(x,) - C_F/C_AΔ_n P_qg^ NLL,nodamp(x,) - DΔ_n P_qg^ NLL(x,)= (1-x)^2 1-√(x)^4 Δ_n P_qg^ NLL,nodamp(x,)Δ_n P_gq^ NLL(x,)= C_F/C_AΔ_n P_gg^ NLL(x,)Δ_n P_qq^ NLL(x,)= C_F/C_AΔ_n P_qg^ NLL(x,)(and similarly at LL) where Δ_n P_+^ NLL and Δ_n P_qg^ NLL,nodamp are the inverse Mellin of Δ_nγ_+^ NLL and Δ_nγ_qg^ NLL, respectively. The constant D is given byD = 1/d(1)∫_0^1 dxx (1-x)^2 (1-√(x))^4Δ_n P_+^ NLL(x,) +1- C_F/C_AΔ_n P_qg^ NLL,nodamp(x,) ,where d(N) is the Mellin transform of (1-x)^2 (1-√(x))^4. Note that, with respect to Ref. <cit.>, we have introduced a further damping function, (1-√(x))^4, to ensure a smoother matching with the fixed-order at large x.From a numerical point of view, we proceed as follows:*With a new private code (available upon request) we produce the resummed anomalous dimension γ_+. Specifically, we output Δ_2γ_+^ LL^(') and Δ_3γ_+^ NLL, i.e. the ones with the expansion terms computed in this work subtracted, along the inverse Mellin integration contour, for a grid of values ofand n_f. This ensures that these contributions start at (^3) and (^4) respectively, as the subtraction is performed within the same code (subtracting later in a different code is of course possible, but subject to a higher chance of introducing bugs or numerical instabilities).*The output of the first code (in the form of publicly available tables) is then read by the public code , versiononwards. This code essentially computes the resummation of coefficient functions (and equally of the qg anomalous dimension) as described in Ref. 
<cit.>, and partly summarized in Sect. <ref>. The splitting function matrix is then constructed and momentum conservation imposed. The objects which are computed are Δ_2P_ig^LL, Δ_3P_ig^NLL (i=g,q), Δ_2K_hg, Δ_2C_a,g, Δ_2C_a,g(m) (a=2,L) and Δ_2C^CC_a,g(m) (a=2,L,3)[Massive DIS coefficient functions are further sampled for various values of the quark masses.] on an (n_f, α_s, x) grid. Coefficient functions for additional processes will also be added in the future in the same form.
*These grids (again publicly available) are then read by the public code, version onwards, where the (α_s, x) grid is interpolated (cubically), the quark components of the splitting, matching and coefficient functions are computed by colour-charge relations, and Δ_1P_ij^LL, Δ_2P_ij^NLL, Δ_1K_hg, Δ_1C_a,i and Δ_1C^(CC)_a,i(m) are also constructed by adding the respective lower orders directly in x-space. For this, we need the analytic inverse Mellin transforms of the non-trivial expansion terms in Eqs. (<ref>), (<ref>) and (<ref>), as well as the analogous ones for the coefficient functions. The latter was already done in the massless case in Ref. <cit.>; explicit results for the massive case are presented in App. <ref>. Explicit results for the splitting functions are presented in App. <ref>.We underline that the second step is the slowest, as the code needs to compute several integrals for the various grid points and the various functions to be resummed. The last step, performed by the code, is instead extremely fast, as it simply amounts to an interpolation and the evaluation of simple functions. For practical applications of our results, it is sufficient to use the latter code, making the inclusion of small-x resummation very handy. As far as PDF evolution and the construction of DIS observables are concerned, the code has been included in the public evolution code <cit.>, which can be used to access all our results and construct resummed predictions for physical observables. § NUMERICAL RESULTS In order to illustrate the capabilities of our implementation, we present here some representative results for the small-x resummation of splitting functions and DIS coefficient functions obtained with the techniques described in Ref. <cit.> and improved as described in the previous sections. Moreover, we also show new results for the coefficient functions with massive quarks.§.§ Splitting functions Let us start with DGLAP evolution. With respect to our previous work <cit.> we have made substantial changes in the resummation of the anomalous dimensions, mostly due to the treatment of running coupling effects, as described in Sect. <ref>. Additionally, we are now able to match the NLL resummation of the splitting functions to their fixed-order expressions up to NNLO, as presented in Sect. <ref>. In Fig. <ref> we show the fixed-order splitting functions at LO (black dotted), NLO (black dashed) and NNLO (black dot-dot-dashed) compared to resummed results at LO+LL (green dotted), NLO+NLL (purple dashed) and NNLO+NLL (blue dot-dot-dashed). In principle, we also have the technology for matching LL resummation to NLO, but this is of very limited interest, so we do not show these results here (they can be obtained from the code).The gluon splitting functions P_gg and P_gq are shown in the upper plots, and the quark ones P_qg and P_qq are shown in the lower plots (the latter two start at NLL so the LO+LL curve is absent there). All splitting functions are multiplied by x for a clearer visualization.
The scheme of the resummed splitting functions is Q_0 (the fixed-order ones are the same in both MS̄ and Q_0 at these orders). The number of active flavours is n_f=4, and the value of the strong coupling is α_s=0.2, corresponding to Q∼6 GeV. Note that for such a value of Q in a VFNS one usually has n_f=5 active flavours; however, the difference between the results in the n_f=4 and n_f=5 schemes at the same value of α_s is modest, and our choice allows a direct comparison with previous results presented in the literature. The results of Fig. <ref> can be compared directly to the ones presented in our previous paper <cit.>. It can be noticed that the LO+LL result is rather different: in our new implementation this curve is lower than in the previous version. This is entirely due to the new treatment of running coupling effects, which clearly differs by subleading logarithmic terms.At the next order, NLO+NLL, there is again a difference with respect to our previous work. As before, this is due to subleading terms, which are now NNLL, and so, as expected, they lead to smaller discrepancies. Indeed, the NLO+NLL results are much closer to the ones of our previous implementation. We recall that the new version of the NLO+NLL results also includes the correction of an error in the original expressions <cit.>, as detailed in Sect. <ref>, which has a non-negligible impact on the result, even though the effect is not as large as the one induced by the new treatment of running coupling effects.The notable novelty is the presence of the NNLO+NLL curve. Its asymptotic small-x behaviour is identical to that of the NLO+NLL curve, except for a constant shift, which represents a term of the form α_s^3/x in the splitting functions. Indeed, this term is NNLL, and it was therefore not correctly captured by the NLO+NLL result. Its impact is larger in the gluon splitting functions P_gg and P_gq, while it is rather small for the quark splitting functions P_qg and P_qq. At larger x, the NNLO+NLL curves smoothly match onto the NNLO result. For P_gg and P_gq this happens already at x∼10^-2. This is due to the fact that the dip at x∼ 10^-3÷10^-4, which is a known feature of the resummed result at moderate x (see e.g. <cit.>), is determined by the NNLO logarithmic term, which goes down and dominates at moderate values of x, before the onset of the smaller-x asymptotic behaviour, which goes up. Hence, when matching to NNLO, the initial descent of the splitting functions is automatically described, and the resummed result deviates only when the rise due to the asymptotic small-x behaviour sets in. Note that, because of this difference between NLO+NLL and NNLO+NLL, we expect the latter to be much more accurate, especially in the region x≳ 10^-5, where the majority of DIS data lie.The resummed curves are supplemented with an uncertainty, which aims to estimate the size of subleading logarithmic effects. As far as P_qg and P_qq are concerned, it is defined exactly as in Ref. <cit.>, namely by symmetrizing the difference between our default construction of the γ_qg resummation, which uses the evolution function in Eq. (<ref>), and what we obtain by switching to a simpler and formally equally valid version, Eq. (<ref>). The uncertainty band is bigger than in our previous work just because the LL^' anomalous dimension used to resum γ_qg differs in the treatment of running coupling effects. Because of the way P_gg and P_gq are constructed, Eq. (<ref>), these splitting functions inherit the uncertainty band of P_qg at NLL.
However, because the bulk of the resummed contribution to these entries comes from γ_+, we have decided to also account for the uncertainty due to subleading contributions to γ_+. This was not considered in our previous work.In order to construct this band, we use the same kind of variation used for γ_qg. Specifically, we symmetrize the difference between the result obtained using Eq. (<ref>) for the resummation of running coupling effects (our default) and the variant obtained using the simpler, yet equally valid, Eq. (<ref>). The uncertainty bands from both sources are then combined in quadrature. At LL, there is no contribution from γ_qg, and the whole resummed curve is given by γ_+: in this case the uncertainty band is determined solely from the variation in the construction of the latter. We note that the NLO+NLL and NNLO+NLL bands overlap nicely for the P_qg and P_qq splitting functions, giving us good confidence that they appropriately represent the uncertainty from missing subleading logarithmic orders.In contrast, the uncertainty band on P_gg and P_gq does not fully cover the effects of higher orders in the initial small-x region, 10^-4≲ x≲10^-2, as demonstrated by the fact that NLO+NLL and NNLO+NLL do not overlap there. However, this effect is mostly driven by the rather large NNLL effects at 𝒪(α_s^3), which are those that are included in the NNLO+NLL but not in the NLO+NLL results. At higher orders the effect of subleading logarithms in this region is likely to be smaller.In support of this hypothesis, we note that the distance between NNLO+NLL and NNLO for x∼10^-2 is significantly smaller than the distance between NLO+NLL and NLO in the same region.Thus, we believe that, while the uncertainty on the NLO+NLL result is not satisfactory in the intermediate-x region, the uncertainty on the NNLO+NLL result should be reliable.These plots are also instructive for studying the stability of the perturbative expansion. By looking at the fixed-order splitting functions, we see that small-x logarithms start being dominant already at x≲10^-2, where the logarithmic term of the NNLO contribution sets in. We note that the small-x growth could in principle have been much stronger. Indeed, the leading logarithmic contributions have vanishing coefficients both at NLO and at NNLO, and the sharp rise of the NNLO splitting function is driven by its NLL contribution α_s^3 x^-1 log x. These accidental zeros are not present beyond NNLO, and so we expect the yet-unknown N^3LO splitting functions to significantly deteriorate the stability of the perturbative expansion because of their α_s^4 x^-1 log^3 x growth at small x. Therefore, we anticipate that including the resummation to stabilize the small-x region will be even more crucial at N^3LO.§.§ DIS coefficient functionsWe now move to DIS coefficient functions, and we first present updated predictions for the massless coefficients, i.e. assuming that there are only n_f massless quarks and no heavy quarks. The construction did not change compared to our previous work <cit.>, but the input LL^' anomalous dimension used for computing the resummed coefficients did, as explained in Sect. <ref>.[Additionally, we changed the overall large-x damping, which is now uniformly chosen to be (1-x)^2(1-√(x))^4, as for the splitting functions, Eq. (<ref>).] The updated results are shown in Fig. <ref> for C_L,g (left) and C_2,g (right).
The quark contributions at small x are very similar (due to the colour-charge relation) and are not shown.We observe some differences with respect to our previous work, although within uncertainties, for the coefficient function C_2,g, due to the modified running-coupling resummation. These changes appear instead to have a small numerical effect on C_L,g.The other noticeable difference with respect to our previous results is the size of the theoretical uncertainty, which is now larger: this effect is entirely due to the different LL^' used in the construction, and is therefore ultimately due to the treatment of running-coupling effects. We now move to the new results which include mass dependence. We first show in Fig. <ref> the analogue of Fig. <ref> for the massive unsubtracted coefficient functions, both for charm production and for bottom production close to the production threshold. As usual in theory papers, we define these contributions as the ones for which the heavy quark is struck by the photon (at these energies, the Z contribution in NC and the CC production mechanism are negligible). We generically call these contributions _a,i, with a=2,L and i=g,q, of which the functions _a,i defined in Sect. <ref> are the resummed contributions. For charm production (upper plots) we use α_s=0.28, corresponding to Q∼2 GeV, which is a scale right above the charm mass, assumed to be m_c=1.5 GeV, while for bottom production we use α_s=0.20, corresponding to Q∼6 GeV, right above the bottom mass, assumed to be m_b=4.5 GeV. The number of active flavours is set to n_f=3 for charm production and n_f=4 for bottom production, i.e. the massive quark is treated as heavy and its collinear logarithms are not factorized. In particular, the massive coefficients for bottom production are those contributions which should be added to the corresponding massless coefficients in the same n_f=4 scheme, Fig. <ref> (which instead assumed coupling only to light quarks), to obtain a complete prediction (see e.g. the decomposition Eq. (<ref>) at resummed level). We observe that the effect of adding the bottom-production contribution to the purely massless contributions is a rather small correction for F_L, while it is comparable in size to each individual massless contribution for F_2.The pattern observed in Fig. <ref> between fixed-order and resummed contributions is very similar to that of the massless results in Fig. <ref>. The most notable difference is the visibly larger effect of resummation for charm production, accompanied by a larger uncertainty band. This effect is entirely due to the smaller scale, i.e. the larger value of α_s, used in the charm-production plots. Another interesting feature of these massive coefficient functions is the very visible presence of the physical threshold for heavy-quark production, which lies at x=x_th≡ 1/(1+4m^2/Q^2). Because of our choice of scales, x_th∼0.3 for both processes. The results presented so far do not include the resummation of collinear logarithms due to massive quarks. For the scales considered, which are in both cases larger than the heavy-quark mass, these collinear logarithms are usually already resummed in most implementations of VFNSs. We now thus consider the scenario in which a VFNS is used and heavy-quark collinear logarithms are resummed. Since at fixed order there are various incarnations of VFNSs, differing just by subleading effects but nonetheless being practically different (see e.g. the discussion in Sect.
<ref>), we prefer to focus on the resummed contributions only. We concentrate on the charm-production case, with m_c=1.5 GeV. For the sake of this study, we find it more interesting to show a plot as a function of the momentum transfer Q, in order to emphasize the importance of mass effects at different scales.Therefore, in Fig. <ref> we plot the resummed contributions Δ_2C_2,g(m_c) and Δ_2C_L,g(m_c) in NC and Δ_2C^CC_L,g(m_c), Δ_2C^CC_2,g(m_c) and Δ_2C^CC_3,g(m_c) in CC[For CC, we assume the production of a charm quark together with a massless anti-quark. This fixes the sign of the F_3 contribution.] as a function of Q for two representative values of x, namely x=10^-6 (small) and x=10^-3 (moderate). The plot starts from Q=m_c, where n_f=4, and then it transitions to n_f=5 when crossing the bottom threshold (assumed to be at m_b=4.5 GeV). At the transition point a small discontinuity appears, due to the different value of n_f used in the computation of the LL^' anomalous dimension. This discontinuity is a standard consequence of the scheme change, and does not constitute any practical problem in the computation of physical observables.At large Q, the massive resummed coefficient functions (which are the collinear-subtracted ones) tend to the massless results, shown in dotted style. It is clearly visible that charm mass effects are significant for small Q≲(10÷30) GeV, and are more pronounced at larger x, where however the effect of resummation is smaller. Charm mass effects are also stronger in the NC case than in the CC case. In practice, massive corrections to the resummed coefficient functions are a small effect on the full structure function, especially when resummation is matched to NNLO. Still, taking these mass effects into account is important for an accurate description of the low-Q data, and in particular for the charm structure function F_a^c, a=2,L, which is entirely determined by the charm coefficient function. In the upper plots of Fig. <ref> we show the full NC coefficients, namely the sum of the contributions from photon, Z and photon-Z interference to the structure functions, normalized to the photon couplings. It is interesting to investigate how much the various terms contribute to the full result. To do so, we show in Fig. <ref> the ratio of the individual contributions to the resummed contribution to the structure function F_2, Δ_2C_2,g(m_c). We stress that if the axial contribution proportional to g_A^2, which remains after factoring out the g_V^2+g_A^2 coupling to the Z, were absent, then the resummed coefficient function for the various contributions would be identical, up to the overall coupling, and the ratio of the various contributions would be independent of x and of the observable. However, since this axial contribution is non-zero, a small dependence remains. Nevertheless, for x≲10^-3, the differences between F_2 and F_L are very small, and decrease as x decreases. Thus, the plot in Fig. <ref>, obtained from F_2 for x=10^-4, remains essentially unchanged for lower x and for F_L. From the plot we see that the contribution from the Z boson is basically negligible for all Q≲10 GeV, and becomes of the same size as the photon contribution at the Z peak. The photon-Z interference dominates over the pure Z-exchange contribution below the Z peak.
The axial contribution proportional to g_A^2, which is a new result of our computation, turns out to be largely insignificant, as it gives a small contribution (a few percent at most) at scales Q where small-x resummation is further suppressed by a smaller strong coupling. § CONCLUSIONSIn this paper we have performed a comprehensive study of high-energy, i.e. small-x, resummation in deep-inelastic scattering of a lepton off a proton. In particular, we have collected all the ingredients to perform NLL resummation matched to fixed order up to NNLO. In order to achieve this we have considered the resummation of the splitting functions, which govern the DGLAP evolution of parton densities. With respect to our previous work, we have modified the way running coupling corrections are treated, and we have managed to match the resummation to NNLO, thus obtaining state-of-the-art NNLO+NLL results for the splitting kernels. While fixed-order predictions at NNLO exhibit instabilities at small x due to large logarithms, the resummed results are stable, and at small x appear to be much closer to NLO than to NNLO.Furthermore, our results can be easily extended to match NLL to fixed N^3LO, when it becomes available. In this case we expect the resummation to have an even more substantial effect, because of the larger fixed-order instabilities at small x appearing at this order.We have also considered the resummation of DIS partonic coefficient functions. In order to obtain reliable results in a wide range of x and Q^2, we have studied small-x resummation in the context of a variable flavour number scheme in which heavy- and light-quark coefficient functions are matched together. In this context, we have considered mass effects originating from both the charm and the bottom quarks. We have produced NNLO+NLL results for the coefficient functions relevant for F_2 and F_L in neutral-current DIS, considering the effect of both virtual-photon and Z-boson exchange, as well as for charged-current processes.If all quarks are massless, the structure function F_3 is purely non-singlet and is therefore not enhanced at small x. However, we have found that in the charged-current case with W-boson exchange, if at least one of the quarks interacting with the W is massive, there is a non-zero contribution at small x to the parity-violating structure function F_3.We have also noted that in neutral-current DIS with massive quarks there is a difference between the γ exchange or Zγ interference and the pure Z exchange, beyond the overall coupling.We have implemented all these new results in a new version of our code, version . A fast interface to these results is available through a new version of its companion code, version . Both codes are publicly available at the webpage https://www.ge.infn.it/~bonvini/hell/. The fast code can be directly used to compute PDF evolution and DIS cross sections through the public code <cit.>.The main motivation behind this work was to compute and implement all the ingredients that are necessary to perform a state-of-the-art fit of parton distribution functions which consistently includes small-x resummation both in the evolution of the parton densities and in the coefficient functions. This task is now being pursued by the NNPDF collaboration. Preliminary and very encouraging results have been presented in <cit.> in the case of a PDF fit that includes DIS-only data.
For the near future, we look forward to implementing other processes in our code, the most relevant of which in the context of PDF extractions is the production of a lepton pair via the Drell-Yan mechanism. We conclude by noting that the resummed results we produce, both splitting functions and coefficient functions, are supplemented with a band representing the theoretical uncertainty due to missing higher-logarithmic orders. This information can (and should) be used in phenomenological studies, and it could also be fed into PDF fits together with other sources of theoretical uncertainty. However, the debate about how to best include theoretical uncertainties into PDF fits is not settled yet, see for instance <cit.>. We are always indebted to Richard Ball and Stefano Forte for many discussions and guidance on this topic.We are particularly grateful to Tiziano Peraro and Luca Rottoli for discussions and collaboration in the early stages of this project. We also thank Giovanni Ridolfi and Juan Rojo for discussions and comments on our manuscript. The work of MB was partly supported by the Marie Skłodowska Curie grant HiPPiE@LHC. § OFF-SHELL DIS COEFFICIENT FUNCTIONS In this section we report the computation and the results for the off-shell coefficient functions needed for the small-x resummation of DIS observables. We focus on the contributions with an incoming gluon, which is off its mass-shell, as this is what enters the resummation formula. The relevant diagrams are shown in Fig. <ref>. The off-shellness of the incoming gluon regulates the collinear divergence of the produced quark pair, so the case with massless quarks is just a finite limit of the case with massive quarks. Therefore, we start by considering the most general case, in which the gluon converts to a massive quark (pair), with mass m_1, and then, after interacting with the vector boson, the quark changes mass to m_2 (and of course the opposite setup, as in the right diagram of Fig. <ref>). This accounts for all possible types of interaction:
* m_1=m_2 is the case in which the boson is either a photon or a Z (neutral current);
* m_1≠ m_2 is the case in which the boson is a W (charged current).
Additionally, we consider a generic coupling which includes vector and axial currents, even though we will see that the couplings mostly factor out.Note that, in practice, the charged-current case is relevant only if one of the two quarks is massive and the other massless. Indeed, only the charm, bottom and top masses cannot be neglected, but the top contribution to DIS is always negligible. Therefore, the only processes for which two different non-zero masses would be needed are c+W→ b and b+W→ c. But these processes are suppressed by the CKM matrix element V_cb∼ 4·10^-2, and by the phase-space restrictions due to the quark masses.Therefore, the only significant combinations in charged current involve at most a single massive quark. These combinations are c+W→ s,d and s,d+W→ c, on top of the fully massless contributions u+W→ s,d and s,d+W→ u.§.§ Calculation of the DIS off-shell partonic cross section In this section we summarize the computation of the DIS off-shell partonic cross section in the most general case, in which the quarks in the final state have different masses and their coupling to the boson contains vector and axial components. We consider the process V^*(q) + g^*(k) → Q̅(p_3) + Q'(p_4), where V^* is the generic off-shell vector boson, g^* is the off-shell gluon, and Q and Q' are quarks.
The final-state quarks are on-shell, and their flavours are in general different (to cover the charged-current case); their masses are p_3^2=m_1^2, p_4^2=m_2^2.The two diagrams contributing to this process are depicted in Fig. <ref>. The invariant matrix element for the two diagrams is given by iℳ^μρ = −i g_s t^c ū(p_4)[γ^μ(g_V+g_Aγ^5)(k̸γ^ρ − 2p_3^ρ)/(t−m_1^2) − (γ^ρk̸ − 2p_4^ρ)/(u−m_2^2) γ^μ(g_V+g_Aγ^5)]v(p_3), where g_s, g_V, g_A are the strong, vector and axial couplings respectively, and t=(k−p_3)^2=(q−p_4)^2 and u=(k−p_4)^2=(q−p_3)^2. For photon-induced DIS, the vector coupling is just the quark electric charge, g_V^γ=e_Q, and the axial coupling is zero. For Z-induced DIS, the vector and axial couplings depend on the quark isospin and are given by g_V^Z = +1/2 − 2e_Q sin^2θ_w for Q=u,c,t and g_V^Z = −1/2 − 2e_Q sin^2θ_w for Q=d,s,b, together with g_A^Z = +1/2 for Q=u,c,t and g_A^Z = −1/2 for Q=d,s,b, where θ_w is the weak mixing angle. For W-induced DIS the vector and axial couplings depend on both quark flavours and are given by g_V^W = V_QQ'/√2, g_A^W = −V_QQ'/√2, where V_ij is the CKM matrix. Note that we are assuming that the vector boson is just V, and so the computation will not explicitly cover the photon-Z interference: however, this case is easily obtained in the final results by simply replacing g_V^2→ 2g_V^Z g_V^γ and g_A^2→0 (the combination g_V g_A appears only in F_3, the gluonic contribution of which is zero in neutral current).In the high-energy regime we are interested in, we parametrize the kinematics of the process in terms of dimensionless variables z_1, z_2, τ, τ̅ and transverse vectors[A transverse vector p_⊥ is defined to have components only along directions orthogonal to the time and z directions, i.e. in components p_⊥=(0,p_x,p_y,0)=(0,𝐩,0), where we have also defined the two-vector 𝐩. Note that p_⊥^2=-𝐩^2.] q_⊥, k_⊥ and Δ_⊥, defined by q^μ = z_1 p_1^μ + q_⊥^μ, k^μ = z_2 p_2^μ − k_⊥^μ, p_3^μ = (1−τ) z_1 p_1^μ + τ̅ z_2 p_2^μ + q_⊥^μ − Δ_⊥^μ, p_4^μ = τ z_1 p_1^μ + (1−τ̅) z_2 p_2^μ − k_⊥^μ + Δ_⊥^μ, where p_1=(√(s)/2)(1,0,0,1) and p_2=(√(s)/2)(1,0,0,-1) are light-cone vectors. We also define the momentum transfer Q by q^2=-Q^2. It is important to note that this parametrization is valid only in the high-energy limit, where s ≫ M_V^2.We define the parton-level off-shell hadronic tensor as the square of the matrix element Eq. (<ref>), averaged over the off-shell gluon polarizations <cit.>, W_μν = −(k_⊥^ρ k_⊥^σ/k_⊥^2) ℳ_νσ^† ℳ_μρ.This object contains all the information on the DIS cross section from the hadronic side. It can be decomposed into different contributions with a given tensor structure as[Note that in photon-mediated DIS the contributions W_4 and W_5 can be related to W_1 and W_2 through Ward identities. However, since we want to cover also the more general Z- and W-mediated DIS processes, we must keep them separate. Nevertheless, their contribution to the DIS cross section, as well as the one from W_6, is of the order of the lepton mass and thus negligible, and will not be further considered.] W_μν = −g_μν W_1 + k_μ k_ν W_2 − iϵ_μνρσ k^ρ q^σ W_3 + q_μ q_ν W_4 + (k_μ q_ν + q_μ k_ν) W_5 + i(k_μ q_ν − q_μ k_ν) W_6.The various contributions to the structure functions F_1, F_2 and F_3 (and F_L=F_2-2xF_1) are obtained from the respective counterparts in the hadronic tensor, W_1, W_2 and W_3. These, in turn, can be obtained from the full tensor W_μν using suitable projector operators.
To this end, we defineA_2^2 =q_⊥^μ q_⊥^ν/Q^2 W_μν, 2A_2^2 -3A_L^2 = A_g^2 =-g^μν+q^μ q^ν/q^2 W_μν,A_3^2 =-iϵ_μναβp_2^α q^β/4p_2 · q W^μν,where A_2^2, A_L^2 and A_3^2 are directly related to F_2, F_L and F_3, respectively. Using the definitions of the kinematic variables in the high-energy limit, Eqs. (<ref>), the squared amplitudes Eqs. (<ref>) can be rewritten asA_2^2 =8g_s^2[p_1 · kp_2 · q/p_1 · p_2]^2 ×{g_V^2+g_A^2/m_1^2-tm_2^2-u[1-1/q^2 k^21-2p_1· p_3p_2 · p_4/m_2^2-u-2p_1 · p_4p_2 · p_3/m_1^2-t^2] -g_A^2m_1+m_2^2/q^2-g_V^2m_1-m_2^2/q^21/m_1^2-tm_2^2-u},A_g^2 =8g_s^2 p_1· k^2/p_1 · p_2^2{g_V^2+g_A^2[p_2 · p_3^2+p_2 · p_4^2+m_1^2+m_2^2/2q^2p_2 · q^2/m_1^2-tm_2^2-u+m_1^2+m_2^2-2q^2/2k^2+m_1^2-m_2^2^2/2k^2 q^2p_2 · p_3/m_1^2-t-p_2 · p_4/m_2^2-u^2] -g_V^2-g_A^2[3 m_1 m_2/k^2p_2 · p_3/m_1^2-t-p_2 · p_4/m_2^2-u^2+m_1 m_2/q^2p_2 · q^2/m_1^2-tm_2^2-u]},A_3^2 = 4 g_s^2 2 g_V g_Ap_1 · k^2/p_1 · p_2^2×{p_2 · p_3-p_2 · p_4/p_2 · q[q^2/k^2p_2· p_3/m_1^2-t-p_2 · p_4/m_2^2-u^2-p_2· q^3/m_1^2-tm_2^2-u] -m_1^2-m_2^2/k^2p_2 · p_3/m_1^2-t-p_2 · p_4/m_2^2-u^2}.These squared amplitudes must be integrated over the final state two-body phase space. In terms of the kinematic variables Eqs. (<ref>) we can express it asdΦ =ν/2π^2 dτ dτ̅ d^2Δ δτ̅1-τν-𝐪-Δ^2-m_1^2 δτ1-τ̅ν-𝐤-Δ^2-m_2^2=1/8π^2dτ/τ1-τd^2Δ̃ δν-Δ̃^2+1-τm_2^2+τ m_1^2/τ1-τ-𝐪-𝐤^2,where we have further definedν =2z_1 z_2p_1· p_2, Δ̃ =Δ-τ𝐪-1-τ𝐤.We recall that bold symbols represent the two-dimensional components of a transverse vector. The partonic off-shell coefficient functions (in Mellin space) for the three structure functions we are interested in are given in terms of the squared amplitudes of Eqs. (<ref>) byC_2(N,ξ,ξ_m_1,ξ_m_2) =1/4π^2 g_V^2+g_A^2∫_0^1 dη η^N∫ dΦ A_2^2,C_L(N,ξ,ξ_m_1,ξ_m_2) =1/4π^2 g_V^2+g_A^2∫_0^1 dη η^N∫ dΦ 1/3[2A_2^2- A_g^2],C_3(N,ξ,ξ_m_1,ξ_m_2) =1/4π^2 2 g_V g_A∫_0^1 dη η^N∫ dΦ A_3^2,where we have introduced the variable η=Q^2/ν and we are expressing the result in terms of dimensionless ratiosξ =𝐤^2/Q^2, ξ_m=m^2/Q^2.Note that, as explained in the text, we are only interested in the N=0 Mellin moment.To carry out the integrations in Eqs. (<ref>) it is useful to express the following combinationsm_1^2-t =1/τ[1-τm_2^2+τ m_1^2+Δ̃-τ𝐤^2+τ1-τQ^2],m_2^2-u =1/1-τ[1-τm_2^2+τ m_1^2+Δ̃+1-τ𝐤^2+τ1-τQ^2], τ̅ =[1-τ𝐪-𝐤-Δ̃]^2+m_1^2/1-τν, 1-τ̅ =[Δ̃+τ𝐪-𝐤]^2+m_2^2/τν,in terms of the phase-space variables. In addition, it is convenient to use the following Feynman parametrizations1/m_1^2-tm_2^2-u =∫_0^1 dyτ1-τ/[1-τm_2^2 +τm_1^2+Δ̃+y-τ𝐤^2+τ1-τ𝐪+y1-y𝐤^2]^2, 1/m_1^2-t^2m_2^2-u^2 =∫_0^1 dy6τ^21-τ^2 y1-y/[1-τm_2^2 +τ m_1^2+Δ̃+y-τ𝐤^2+τ1-τ𝐪+y1-y𝐤^2]^4.In this way, most integrations are easy to perform,[Note that using the δ function in Eq. (<ref>) to perform the η integration imposes restrictions on the remaining integrations. In particular, the restriction can be cast as a lower integration limit for Δ̃^2.However, in the high-energy regime, this lower bound is immaterial, and the integral can be extended down to zero.] leaving the results in the form of a double integral in y and τ. In order to simplify this integration (and to make contact with previous literature) it is also convenient to perform the change of variablesx_1 =4y1-y,x_2=4τ1-τ,and express the results as integrals over x_1 and x_2. 
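The role of this change of variables can be made explicit: for y ∈ [0,1] the map x_1 = 4y(1−y) covers the interval [0,1] twice, with dy = dx_1/(4√(1−x_1)) on each branch, which is the origin of the 1/√(1−x_1,2) weights appearing in the results of the next section. A minimal numerical sketch (illustrative only, with an arbitrary smooth test function):

    import numpy as np
    from scipy.integrate import quad

    f = lambda u: 1.0/(1.0 + u)                    # arbitrary smooth test function
    lhs = quad(lambda y: f(4*y*(1 - y)), 0, 1)[0]  # integral in the variable y
    rhs = quad(lambda x1: f(x1)/(2*np.sqrt(1 - x1)), 0, 1)[0]  # same integral in x1
    print(lhs, rhs)                                # the two results coincide

The same applies to the pair (τ, x_2).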
General results for C_a0,ξ,ξ_m_1,ξ_m_2, a=2,L,3 are rather long and will not be reported here; in the next section we focus on the physical combinations that are relevant for neutral and charged currents, where the masses are either equal or at least one of them is vanishing. §.§ Results In this section we collect the results of the off-shell coefficient functions for neutral and charged currents, contributing to the three structure functions F_2, F_L and F_3. The fully massless case m_1=m_2=0, which is common to neutral and charged currents, is of course the simplest limit and yieldsC_20,ξ,0,0 =/3π∫_0^1dx_1/√(1-x_1)∫_0^1dx_2/√(1-x_2)3/8ξ x_1+x_2^3× (2-x_1) x_2^2+ x_1 x_2 ξ (3 x_1+3 x_2-4 x_1 x_2)+(2-x_2)x_1^2 ξ^2 C_L0,ξ,0,0 =/3π∫_0^1 dx_1/√(1-x_1)∫_0^1dx_2/√(1-x_2)x_2^2x_2+x_1ξ5-4x_1/4ξ x_1+x_2^3C_30,ξ,0,0 =0.These results coincide with those presented in Ref. <cit.>, even though the longitudinal coefficient function was not written explicitly there. An equivalent (but simpler) integral form for C_L(0,ξ,0,0) was also given in Ref. <cit.>.We now move to the case in which both masses are equal, m_1=m_2≡ m, which is relevant for neutral current. The results readC_20,ξ,ξ_m,ξ_m =/3π∫_0^1dx_1/√(1-x_1)∫_0^1 dx_2/√(1-x_2)3/84ξ_m+ξ x_1+x_2^3×{[2ξ^2 x_1^2-4ξ x_1^2x_2^2-ξ^2 x_1^2 x_2+3ξ x_1^2 x_2+3ξ x_1 x_2^2-x_1 x_2^2+2x_2^2+4ξ_m-ξ x_1^2 x_2+4ξ x_1-x_1 x_2^2-ξ x_1 x_2 -x_1 x_2+4x_2+16ξ_m^22-x_1 x_2]+8 g_A^2 ξ_m /g_V^2+g_A^24ξ_m+ξ x_1+ x_2^2}, C_L0,ξ,ξ_m,ξ_m =/3π∫_0^1dx_1/√(1-x_1)∫_0^1 dx_2/√(1-x_2)1/44ξ_m+ξ x_1+x_2^3×{[x_2^2x_2+x_1ξ5-4 x_1+2ξ_m x_2ξ x_1^2-2ξ x_1-3 x_1 x_2 +4 x_2+8ξ_m^2 x_22-3 x_1]+ 6 g_A^2 ξ_m/g_V^2+g_A^2[24ξ_m+ξ x_1^2+2+x_1 x_2^2+x_2ξ8-3x_1x_1+4ξ_m4+x_1]}, C_30,ξ,ξ_m,ξ_m =0.Note the presence in the above expressions of a term proportional to g_A^2/(g_V^2+g_A^2). This contribution is present only when the vector boson is a Z, while for photon and photon-Z interference this term is zero. This axial contribution is a new result. The remaining of the expressions were already known <cit.>, however an explicit integral form of this kind for C_L is presented here for the first time. Of course, the massless ξ_m→0 limit of Eqs. (<ref>) reduces to Eqs. (<ref>).Finally, we move to the case in which one quark is massless (say, m_2=0) and the other is massive (m_1≡ m), which is relevant for charged current. According to the definition of the process, Eq. (<ref>), this choice corresponds to the production of a heavy anti-quark. Here, it is most convenient to leave integration over τ untouched and to change variable only from y to x_1. We thus obtainC_20,ξ,ξ_m,0 =/3π∫_0^1 dτ∫_0^1 dx_1/√(1-x_1)3/24τξ_m+ξ x_1 +4τ1-τ^3×{16ξ_m+1τ^2ξ_m+1-τ^2+ξ^2 x_1^2ξ_m+1-τ^2+τ^2+2ξ x_1^2 1-ττ3ξ_m^2+ξ_m6-16τ+16τ^2-16τ+3+8τ^2 x_1 ξξ_m^2+ξ_m4-3τ+31-τ^2-ξ_m+1^21-τξ_m+1-τ }, C_L0,ξ,ξ_m,0 =/3π∫_0^1 dτ∫_0^1dx_1/√(1-x_1)1/4τξ_m+ξ x_1 +4τ1-τ^3×{16τ^2ξ_m+1-τ^23ξ_m+41-ττ+3ξ^2 x_1^2ξ_m+2ξ x_1^2 1-ττ9ξ_m^2+ξ_m9-32 τ-32τ1-τ+8τ^2 x_1 ξ3ξ_m^2+10ξ_m1-τ+101-τ^2-3ξ_mξ_m+11-τξ_m+1-τ }, C_30,ξ,ξ_m,0 =/3π∫_0^1 dτ∫_0^1 dx_1/√(1-x_1)3/24τξ_m+ξ x_1 +4τ1-τ^3×{16τ^22τ-1ξ_m+1-τ^2+ξ x_1^2ξ2τ-1-61-ττξ_m+1-2τ+8τ^2 x_1 1-τξ_m^2+ξ_m2-3τ+2τ^2-3τ+1+ξξ_m }.To the best of our knowledge, these results are all new. Most notably, C_3 does not vanish in this case. Note that choosing m_2=m, m_1=0 corresponds to charge-conjugating the final state, thus producing a heavy quark. Therefore, C_a0,ξ,0,ξ_m =C_a0,ξ,ξ_m,0 for a=2,L, while there is a sign change in the parity-violating coefficient, C_30,ξ,0,ξ_m = - C_30,ξ,ξ_m,0 (see also Ref. <cit.>). 
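As a concrete illustration of these expressions, the following short numerical sketch (illustrative only; the integrands are the massless ones of Eqs. (<ref>), in units of α_s/(3π)) shows the role of the gluon off-shellness as a collinear regulator: as ξ → 0, C_2 grows like log(1/ξ), while C_L remains finite, the collinear singularity being present only in the former:

    import numpy as np
    from scipy.integrate import dblquad

    # massless off-shell integrands, in units of alpha_s/(3 pi)
    w = lambda x1, x2: 1.0/np.sqrt((1.0 - x1)*(1.0 - x2))

    def C2(xi):
        f = lambda x2, x1: w(x1, x2)*3.0*((2 - x1)*x2**2
                + x1*x2*xi*(3*x1 + 3*x2 - 4*x1*x2)
                + (2 - x2)*x1**2*xi**2)/(8.0*(xi*x1 + x2)**3)
        return dblquad(f, 0, 1, 0, 1)[0]

    def CL(xi):
        f = lambda x2, x1: w(x1, x2)*x2**2*(x2 + x1*xi*(5 - 4*x1))/(4.0*(xi*x1 + x2)**3)
        return dblquad(f, 0, 1, 0, 1)[0]

    for xi in [1e-1, 1e-2, 1e-3]:
        # C2(xi) - log(1/xi) tends to a constant, while CL(xi) tends to 1
        print(xi, C2(xi) - np.log(1.0/xi), CL(xi))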
As expected, the massless limit of Eqs. (<ref>) reduces to Eqs. (<ref>). We now consider the Mellin transform with respect to ξ of these results. This is particularly useful for studying the massless limit of the resummed result, and for asymptotic expansions. We denote the Mellin transform with a tilde, and replace the argument ξ with the Mellin moment M: C̃_aN,M,ξ_m_1,ξ_m_2 = M ∫_0^∞ dξ ξ^M-1 C_aN,ξ,ξ_m_1,ξ_m_2,a=2,L,3.In the massless case we findC̃_20,M,0,0 = /3π Γ^3(1-M)Γ^3(1+M)/Γ(2-2M)Γ(2+2M) 3(2+3 M-3 M^2)/2M(3-2M) , C̃_L0,M,0,0 = /3π Γ^3(1-M)Γ^3(1+M)/Γ(2-2M)Γ(2+2M) 3(1-M)/3-2M ,C̃_30,M,0,0 = 0,which reproduce the results of Ref. <cit.>. In the massive case in neutral current we obtainC̃_20,M,ξ_m,ξ_m =/3π 3/2ξ_m^MΓ^3(1-M ) Γ(1+M )/(3-2M)(1+2M)Γ(2-2M )×{ 1+M- 1+M-2+ 3 M -3 M^2/2ξ_m_2F_1 (1-M,1,3/2; -1/4ξ_m)-2g_A^2/g_A^2+g_V^21+2M/1-2M[8ξ_mM-2ξ_m-2+4ξ_m+1^2_2F_11-M,1,-1/2,-1/4ξ_m]},C̃_L0,M,ξ_m,ξ_m =/3π 3/2ξ_m^MΓ^3(1-M ) Γ(1+M )/(3-2M)(1+2M)Γ(2-2M )4ξ_m/1+4ξ_m×{3+1-M/2ξ_m - 3+1-M/ξ_m(1- M/4ξ_m)_2F_1 (1-M,1,3/2; -1/4ξ_m)-g_A^2/g_A^2+g_V^21+4ξ_m1+2M/12ξ_m^2[6ξ_m2M-3 _2F_12-M,1,3/2,-1/4ξ_m+3M-4 _2F_12-M,2,5/2,-1/4ξ_m]},C̃_30,M,ξ_m,ξ_m =0.Apart from the contribution proportional to the axial coupling g_A^2, which is new, the other terms reproduce the results of Refs. <cit.>. Finally, the massive case in charged current is given byC̃_20,M,ξ_m,0 = /3π3/4ξ_m^M Γ1-M^3 Γ1+M/ M3-2M4M^2-1Γ2-2M1/ξ_m×{MM^2ξ_m^2-2ξ_m-3-Mξ_m^2-4ξ_m-3+ξ_m+2+1-M1+ξ_mM^2ξ_m+3-3M-2 _2F_12M-1,1,M+1,1/1+ξ_m},C̃_L0,M,ξ_m,0 = /3π3/4ξ_m^M Γ1-M^3Γ1+M/M3-2M1+2M1-2MΓ2-2M1/ξ_m×{2M^33ξ_m+1-M^2ξ_m^2+11 ξ_m+2-M ξ_mξ_m-2+2ξ_m^2+M-12M^2ξ_m+1+Mξ_m^2-5ξ_m-2+2ξ_m^2×_2F_12M-1,1,M+1,1/1+ξ_m},C̃_30,M,ξ_m,0 =/3π3/2ξ_m^M1-MΓ1-M^3Γ1+M/M3-2MΓ2-2Mξ_m/1+ξ_m×[_2F_12M,1,1+M,1/1+ξ_m-1-1/ξ_m].§.§ Special limits We find instructive to study the above expressions in two particular limits, namely the massless limit and the limit M→ 0. The latter is useful to construct the fixed-order expansion of the resummed result.§.§.§ Massless limitWe can use the Mellin forms to study the massless limit ξ_m→0 of the resummed expressions. We have for neutral currentlim_ξ_m → 0C̃_20,M,ξ_m,ξ_m = C̃_2(0,M,0,0) + K̃_hg(M,ξ_m),lim_ξ_m → 0C̃_L0,M,ξ_m,ξ_m = C̃_L(0,M,0,0),and for charged currentlim_ξ_m → 0C̃_20,M,ξ_m,0 = C̃_2(0,M,0,0)+ 1/2K̃_hg(M,ξ_m),lim_ξ_m → 0C̃_L0,M,ξ_m,0 = C̃_L(0,M,0,0), lim_ξ_m → 0C̃_30,M,ξ_m,0 = C̃_3(0,M,0,0)+ 1/2K̃_hg(M,ξ_m),lim_ξ_m → 0C̃_30,M,0,ξ_m = C̃_3(0,M,0,0) - 1/2K̃_hg(M,ξ_m),where we have defined K̃_hg(M,ξ_m) = - /πξ_m^M1-M/MΓ^3(1-M)Γ(1+M)/(3-2M)Γ(2-2M).The function K̃_hg(M,ξ_m) contains the collinear singularity, appearing as a M=0 pole, and produces the logarithmic mass contributions when expanded in powers of M. In a sense, it represents the conversion from the collinear singularity regularized by the off-shellness and the very same singularity regularized by the mass, see Eq. (<ref>). The inverse Mellin of Eq. (<ref>), needed for the running coupling resummation Eq. (<ref>), can be obtained in the following way. 
We first split the function as the product of three different factors which we write in integral form:Γ(1-M)Γ(1+M)= M∫_0^∞ dt t^M-1/1+t, 4^1-M(1-M)Γ^2(1-M)/(3-2M)Γ(2-2M) = ∫_0^1 dx x^1-M/√(1-x),(4ξ_m)^M/M = ∫_0^1 dy/y(4yξ_m)^M.Then we change variable from t to ξ=4ξ_mty/x to write the product in the form of a Mellin transfrom:K̃_hg(M,ξ_m)= - /4π M∫_0^∞ dt t^M-1/1+t∫_0^1 dx x^1-M/√(1-x)∫_0^1 dy/y(4yξ_m)^M = - /4π M∫_0^∞ dξ ξ^M-1∫_0^1dx/√(1-x)∫_0^1dy/yxy4ξ_m/xξ+y4ξ_m.At this point we can simply read off the inverse Mellin transform in integral form𝒦_hg(ξ,ξ_m)= - /4π∫_0^1dx/√(1-x)∫_0^1dy/yxy4ξ_m/xξ+y4ξ_m= - /3π 4ξ_m/ξ+logξ_m/ξ+ 2-4ξ_m/ξ√(1+4ξ_m/ξ)log√(ξ/4ξ_m)+√(1+ξ/4ξ_m) .which has been computed explicitly in the second line. The ξ-derivative of this expression is Eq. (<ref>). §.§.§ Fixed-order expansion and collinear singularities The Mellin form of the off-shell coefficients can be also used to compute expansions in M=0, which are needed for computing theexpansion of the resummed results. For matching up to NNLO, i.e. (^2), we need the expansions up to (M). In the massless case we haveC̃_20,M,0,0 =/3π[1/M+13/6+71/18-ζ_2M+M^2], C̃_L0,M,0,0 =/3π[1-M/3+M^2],while in the massive case in neutral current we obtainC̃_20,M,ξ_m,ξ_m =/3π[1+4√(1+4ξ_m)^-12√(ξ_m)/2+g_A^2/g_A^2+g_V^212ξ_m ^-12√(ξ_m)/√(1+4ξ_m)+M/6√(1+4ξ_m){√(1+4ξ_m)3lnξ_m+5+2^-12√(ξ_m)×13-10ξ_m+61-ξ_mlnξ_m-61-ξ_m H_-,+-1/√(1+4ξ_m)+12ξ_mg_A^2/g_A^2+g_V^2(8ln(4ξ_m)-16ln1+√(1+4ξ_m)+^-12√(ξ_m)6lnξ_m+28-3H_-,+-1/√(1+4ξ_m))}+M^2],C̃_L0,M,ξ_m,ξ_m =/3π[√(1+4ξ_m)1+6ξ_m-8ξ_m1+3ξ_m^-12√(ξ_m)/1+4ξ_m√(1+4ξ_m)+g_A^2/g_A^2+g_V^22ξ_m47ξ_m+2^-12√(ξ_m)-√(1+4ξ_m)/31+4ξ_m√(1+4ξ_m)+M/31+4ξ_m√(1+4ξ_m){√(1+4ξ_m)12ξ_m-1+31+6ξ_mlnξ_m+^-12√(ξ_m)6+81-6ξ_mξ_m-24ξ_m1+3ξ_mlnξ_m+12ξ_m1+3ξ_m H_-,+-1/√(1+4ξ_m)+2ξ_m g_A^2/g_V^2+g_A^2(2^-12√(ξ_m)23+82ξ_m+67ξ_m+2lnξ_m-√(1+4ξ_m)8+3lnξ_m-6ξ_mln√(1+4ξ_m)-1/√(1+4ξ_m)+1-62+7ξ_mH_-,+-1/√(1+4ξ_m))}+M^2],having defined the harmonic polylogarithmH_-,+z=_21-z/2-_21+z/2+1/2ln1-z^2/4ln1-z/1+z.In the charged-current case, with only one massive quark, we have instead the following expansion:C̃_20,M,ξ_m,0 =/3π[1/2M+8-3lnξ_m+6ln1+ξ_m/6+M/36(86-3lnξ_m10+3lnξ_m+6ln1+ξ_m13+3ln1+ξ_m-36_21/1+ξ_m)+M^2],C̃_L0,M,ξ_m,0 =/3π[ξ_m/1+ξ_m1/2M +6+14ξ_m+3ξ_m1+2ξ_mlnξ_m-6ξ_m^2ln1+ξ_m/61+ξ_m+M/361+ξ_m(-12+92ξ_m-18ξ_m^2ln^21+ξ_m+36ξ_m^2_21/1+ξ_m+9ξ_m1+2ξ_mln^2ξ_m+6ξ_m7ξ_m-1lnξ_m+66+ξ_m15-7ξ_mln1+ξ_m)+M^2],C̃_30,M,ξ_m,0 =/3π[-1/1+ξ_m1/2M-5+3+6ξ_mlnξ_m-6ξ_mln1+ξ_m/61+ξ_m-M/361+ξ_m(56+301+2ξ_mlnξ_m+91+2ξ_mln^2ξ_m-60ξ_mln1+ξ_m-18ξ_mln^21+ξ_m+36ξ_m_21/1+ξ_m)+M^2].In each of the above equation, the pole in M=0, where present, identifies the collinear singularity and is given by the LO P_qg in Mellin space times the LO non-singlet process q+W→ q'. The latter has non-trivial mass dependence in charged current, Eqs. (<ref>), since in this case the final state quark q' is massive. Note that in this case even F_L has a non-vanishing contribution at LO, which is proportional to the mass and thus vanishes in the massless limit. The (M^0) terms in these expansions, after subtraction of massless collinear singularities as in Eqs. (<ref>) and (<ref>), reproduce the known () contributions <cit.>.Higher-order corrections exist both for neutral and charged currents (see e.g. <cit.>), however not in a form we could compare our expansion to.§ DETAILS ON NUMERICAL IMPLEMENTATION§.§ The evolution function U A key ingredient of the formalism for resumming coefficient functions is the evolution function U(N,ξ), defined in Eq. (<ref>). As discussed in Ref. 
<cit.>, computing it exactly with the resummed anomalous dimension γ_+ (specifically, with the LL^' anomalous dimension) requires integrating γ_+ over all values offrom 0 to ∞, which is numerically inconvenient. Therefore, following Refs. <cit.>, we use an approximate expression for the evolution factor, where the anomalous dimension is assumed to depend ononly at LO, with 1-loop running. This leads to the ABF evolution factor <cit.> U_ ABF(N,ξ) = (1+r(N,)logξ)^γ_+(N,)/r(N,) withr(N,) = -Q^2 d/dQ^2γ_+(N,)/γ_+(N,) = ^2 β_0 d/dγ_+(N,)/γ_+(N,).Note that the ratio r(N,) is such that the approximation reproduces the correct derivative of γ_+ in . However, this effect is strictly speaking beyond the formal accuracy we work with, so one could ignore it and replacer(N,)→β_0.This variant is used to construct the uncertainty band on our resummed predictions.As a simpler approximation we could also consider the fixed-coupling limit, in which all the scale dependence inis ignored and the evolution factor becomes simplyU_ f.c.(N,) = ξ^γ_+(N,).In this case our formula for computing the resummation of coefficient functions simply reduces to a Mellin transformation with moment γ_+.The integration range in the off-shellness ξ in the resummation formula extends to all accessible values between 0 and ∞. In the running-coupling case,is computed at ξ Q^2 in U, Eq. (<ref>), so at some small value of ξ the Landau pole is hit, and the integration must stop there. With 1-loop running (and also at higher loops if the expanded solution for the running coupling is used) the position of the Landau pole is given by ξ_0=exp-1/β_0,and ξ-integration must be limited to the region ξ≥ξ_0. In the fixed-coupling limit,is frozen at its value in Q^2, so all values of ξ are in principle accessible, i.e. ξ_0f.c.=0.In Ref. <cit.> we derived the resummation formula which had originally the form (schematically)C_ res(N) = ∫_ξ_0^∞ dξC(0,ξ)d/dξU(N,ξ).Then, for convenience in the numerical implementation, we integrated by parts to getC_ res(N) = -∫_ξ_0^∞ dξ d/dξ C(0,ξ)U(N,ξ),where the boundary term at infinity vanishes thanks to C, and the boundary term in ξ_0 is assumed to vanish becauseU(N,ξ_0) = 0.The latter assumption is not always true. However, at the leading logarithmic accuracy, which is the only accuracy on which we have control at the moment, the resummed result is only governed by the fixed-coupling anomalous dimension γ_s, dual of the LO BFKL kernel. Thus, the formula Eq. (<ref>) applies, with γ_+=γ_s. To obtain the resummed coefficient function in momentum space, an inverse Mellin transform has to be computed. This amounts to integrating in N over an imaginary contour with abscissa to the right of the small-x singularity, which in the case of γ_s is placed in N= c_0, with c_0 given in Eq. (<ref>). Along such contour, the real part of γ_s is always positive, and therefore U_ f.c.(N,ξ_0=0)=0. Therefore, at the accuracy we are working with, the boundary term indeed vanishes.In practice, however, we include in our resummation subleading contributions which spoil the condition Eq. (<ref>). Indeed the anomalous dimension γ_+ that we use has a more complex structure than γ_s. Additionally, we use the approximation Eq. (<ref>), which is typically finite in ξ=ξ_0:U_ ABF(N,ξ_0) = (1-r(N,)/β_0)^γ_+(N,)/r(N,).Note that using the variant Eq. (<ref>)U_ ABF(N,ξ_0) is either 0 or ∞ depending on the sign of the real part of γ_+. This implies that the two formulations Eqs. 
(<ref>) and (<ref>) will give in general different results, due to the neglected non-zero boundary term. We have indeed verified this numerically. Despite the fact that this difference is subleading log, and hence either result is formally equally valid, this difference in the formulations is undesirable.In this work we propose to solve the ambiguity by modifying the evolution function with suitable higher-twist terms such that we always have U(N,ξ_0)=0. To do so, we use the evolution functionU(N,ξ) = U_ ABF(N,ξ)D_higher-twist(ξ),withD_higher-twist(ξ) =1-logξ/logξ_0^1+1/β_0ξ<1 1ξ>1.It is easy to verify that the damping function D_higher-twist(ξ) vanishes in ξ_0 and smoothly tends to 1 in ξ=1, with all derivatives vanishing in ξ=1. Moreover, it is clearly higher-twist, i.e. non analytical in the coupling , so it does not influence the perturbative expansion of the evolution factor (which is used for the matching of the resummed expressions to fixed-order).Using this new damped evolution function, we find that the results obtained using Eq. (<ref>) are indistinguishable from those obtained with the undamped function, which confirms that the results of Ref. <cit.> are unaffected (from the point of view of U). On the other hand, results obtained using Eq. (<ref>) are now identical to those obtained using Eq. (<ref>), as they must, since now the boundary term is identically zero by construction.From the point of view of the numerical implementation, we observe that the N dependence of the resummed coefficient functions is all contained in U, Eqs. (<ref>) or (<ref>). Therefore, we can first compute the inverse Mellin transform of U(N,ξ) Eq. (<ref>) as function of ξ, U(x,ξ), which we tabulate for various values of , x and ξ, and we subsequently use it to compute the ξ integration for each observable. Inboth the old methodology (which integrates first in ξ and then in N) and the new one (integration order inverted) are implemented, and give of course identical results (within numerical integration errors). The new implementation is faster. §.§ Implementation of kinematic theta functions at resummed level In massive coefficient functions, kinematic constraints for the production of the massive final state are implemented through theta functions appearing in the coefficient functions. For instance, in the case of DIS neutral-current structure functions, the theta function has the form θ(X-x), with X = 1/(1+4m^2/Q^2) and where x is the Mellin integration variable.[In the charged-current case, when the mass of the quark before and after hitting the W is different, the form of X generalizes to X=1/(1+(m_1+m_2)^2/Q^2).] The very same theta function appears also in the off-shell coefficient, as it depends only on the kinematics of the final state. In the resummation procedure, the off-shell coefficient function is Mellin transformed with respect to x and then the result is evaluated in Mellin moment N=0. The last step (which is strictly speaking not necessary) loses track of the theta function, and the inverse Mellin transform of the resummed on-shell coefficient is non-zero also in kinematically unaccessible regions.A possible solution to restore the kinematic theta function is simply to avoid computing the off-shell coefficient in N=0. This is possible, however, it is not convenient for at least two reasons: the first is that all expressions and calculations become significantly more complicated, and the second is that for consistency this should be done also in the massless case. 
The latter requirement is necessary in the construction of the collinearly resummed coefficient functions, otherwise the massless limit of the (collinear subtracted) resummed massive on-shell coefficients would not tend to the massless ones, Eq. (<ref>).Therefore, we seek a solution which restores the theta function in the resummed approach, while keeping using the off-shell coefficient in N=0. The implementation must satisfy three requirements:*the theta function should be restored without affecting the logarithmic accuracy of the result;*the x→ X limit must be smooth;*in the massless X→1 limit the effect must disappear completely.The first requirement is obvious. The second one is perhaps not mandatory, but it is satisfied in fixed-order results, and avoids sharp transitions between results. The latter requirement is instead needed for a correct implementation of the resummation of collinear logarithms at small-x resummed level.We have investigated different options for the restoration of the theta function such that the requirements above are satisfied. We report here the two main alternatives that we consider, which act on N space and on x space, respectively. The N-space approach consists in multiplying the integrand of Eqs. (<ref>), (<ref>) and (<ref>) by a term of the form Θ_N(N,X) = X^N/(1-X)N+1 = 1+(N)As explicitly indicated, this term is manifestly subleading, and reproduces the theta function θ(X-x) thanks to the X^N term. In the massless limit it reduces to Θ_N(N,1)=1, as required. It can be also verified that, in full generality, the inverse Mellin transform of the resummed coefficient function obtained with this extra function vanishes smoothly as x→ X. The alternative implementation in x space is obtained by multiplying the final resummed coefficient function in x space by the function Θ_x(x,X) = θ(X-x)1-x/X^1/1-X .The function in squared brackets ensures smooth x→ X limit, and it is clearly subleading. In the massless limit X→1 it reduces to Θ_x(x,1) = θ(1-x), as required.The two alternatives are formally equally valid, but lead in general to different numerical results. For practical reasons, we opt for the x-space implementation, Eq. (<ref>). In this way restoring the theta function can be done at the very end, giving full flexibility for the implementation of the resummation. For instance, it is possible to precompute the inverse Mellin transform of the evolution function, U(x,ξ), as described in Sect. <ref>, speeding up the computation of resummed massive coefficient functions. This would not be possible using the N-space formulation, Eq. (<ref>), as in this case the N dependence of ξ-integrand of resummed coefficient functions would include the Θ_N(N,X) term, so the Mellin inversion would not act on U(N,ξ) only. §.§ A convenient approximate form for the fixed-order anomalous dimension In the construction of the off-shell kernel for LL and NLL resummation, we need the dual of the fixed LO or NLO anomalous dimension, denoted χ_s and χ_s, NLO, respectively. These two functions provide the resummation of collinear (and anticollinear) contributions in the DL BFKL kernel. If the duals are computed from approximate expressions of the fixed-order anomalous dimensions, the resummation in the BFKL kernel is only approximate, and one cannot claim to have exact leading or next-to-leading logarithmic accuracy in BFKL. However, our goal is to reach LL and NLL accuracy in the resummed anomalous dimension, not in the BFKL kernel. 
For this, the BFKL kernel has just to be correct at fixed LO or NLO, since by duality this fully determines the LL and NLL contributions of the DGLAP anomalous dimension. The reason why also the BFKL kernel is being resummed is just that the resummation stabilizes its perturbative expansion, which is otherwise highly unstable close to the collinear and anticollinear poles. Therefore, from the point of view of the accuracy of the result, it is well possible to use an approximate expression for the LO and NLO anomalous dimensions, provided their LL and NLL parts are correct, as they correspond by duality to contributions of () and (^2) in the BFKL kernel, which need to be correct. Then, once the fixed-order part of the resummed (N)LO+(N)LL anomalous dimension is subtracted, the resummed contributions Eqs. (<ref>), (<ref>) can be added to the exact fixed-order anomalous dimension, restoring the correct fixed-order part.This approach was already considered in both Refs. <cit.>, with two different implementations. The basic motivation was that the exact fixed-order anomalous dimension γ_+, being the eigenvalue of a matrix, contains a square-root branch-cut, which is inherited by the DL anomalous dimension and would give rise to a spurious oscillating behaviour when performing an inverse Mellin transformation. The approximate γ_+ implemented in Ref. <cit.>, which is a somewhat simplified version of the one originally proposed in Ref. <cit.>, was simply obtained by taking γ_gg computed for n_f=0 (which is then also the eigenvalue, as there are no quarks and the matrix reduces to a single entry) and adding the n_f-dependent contributions of the exact γ_+ restricted to LL and NLL. This procedure ensures that the resulting anomalous dimension reproduces the LL and NLL behaviour of the exact one, but it behaves as γ_gg elsewhere in the N plane. This implies in particular that the anomalous dimension grows (negatively) as log N at large N. Note also that this construction violates momentum.We observe that the large-N logarithmic growth of γ_+ is problematic. Indeed, the dual function χ_s (or χ_s, NLO) grows exponentially for negative M as |M| gets larger. The DL kernel, by duality, should then be able to reproduce the logarithmic growth at large N. However, the DL kernel does not only contain χ_s, but also the fixed-order BFKL kernel, which contains poles for all integer values of M. Therefore, by duality, the large N behaviour of the DL anomalous dimension is determined by the rightmost M pole for negative M, which is in M=-1, which implies that the DL anomalous dimension tends to -1 as N→∞. This problem was ignored in previous works, as the M=-1 pole represents a higher twist contribution, and the practical effect is almost negligible. However, it would be ideal to avoid this issue. One option would be to act on the DL BFKL kernel, hacking it such that the poles for negative M are no longer present. While we have tried this solution, we think that it is not the best approach. A significantly better solution is obtained if the anomalous dimension used in the duality relation does not grow at large N, but rather it goes as a constant larger than -1, such that χ_s never hits the rightmost negative M pole: in this way, the large-N behaviour of the DL anomalous dimension is determined by χ_s itself, and hence corresponds to the one of the input anomalous dimension.Thus, here we propose a new approximation for the fixed-order anomalous dimension. 
We require that the LL and NLL behaviour is reproduced, that momentum conservation is preserved exactly, and that at large N it behaves as a constant greater than -1. Given that the LO and NLO anomalous dimensions behave close to N=0 asγ_0(N)= a_11/N + a_10 +(N)γ_1(N)= a_22/N^2 + a_21/N + a_20 +(N)where a_22=0 (accidental zero), we propose the following approximate expression, γ(N) = a_1/N + a_0 - (a_1+a_0)2N/N+1,which is valid both at LO and NLO with appropriate coefficients. At NLO they are given bya_1=a_11 + ^2 a_21, a_0=a_10 + ^2 a_20,and at LO one simply neglects the (^2) terms; the coefficients are given bya_11 = C_A/π, a_21 = n_f26C_F-23C_A/36π^2, a_10 = -11C_A + 2n_f(1-2C_FC_A+4C_F^2)/12π, a_20 = 1/π^2 1643/24 - 33/2ζ_2 - 18 ζ_3 + n_f4/9ζ_2-68/81 + n_f^213/2187 .Note that a_20 is formally NNLL, so it could in principle be ignored (similarly, for LL resummation one could ignore a_10); in practice, including both a_11 and a_10 at LO and both a_21 and a_20 at NLO provides an excellent approximation of the anomalous dimension in the small-N region. The last term in Eq. (<ref>) is a subleading (N) contribution which ensures momentum conservation γ(1)=0. At large N, Eq. (<ref>) behaves as a constant, γ(N) N→∞→ -2a_1-a_0.We verified that -2a_1-a_0>-1 for all values of ,n_f that we consider. In particular, the worst case is obtained at NLO with n_f=3, where the condition γ(N→∞)>-1 is satisfied for < 0.558. For all other values of n_f, and at LO, therange of validity is larger. Clearly, this is more than enough, as for such values ofthe perturbative hypothesis is lost. Indeed, in our current numerical implementation we never consider energy scales for which >0.35.Another very important advantage of the approximation Eq. (<ref>) is that its inverse function (i.e. the dual) can be computed analytically:χ_s(M)= a_1+a_0-M + √((M+a_1-a_0)^2 + 8a_1(a_1+a_0))/2(2a_1+a_0+M)(here we generically use the name χ_s for representing both the dual to the LO anomalous dimension and the dual to the NLO one, previously called χ_s, NLO). This represents a great advantage from the point of view of the numerical implementation, as in the general case one would have to compute the inverse function by means of zero-finding routines, which are typically slow and do not ensure convergence, especially when working in the complex plane. Consider also that the DL anomalous dimension from Eq. (<ref>) is itself obtained by means of zero-finding routines, applied to the DL kernel which is given in terms of χ_s, giving rise to nested zero-finding which clearly cannot guarantee best performance. Therefore, using the analytic expression Eq. (<ref>) for χ_s allows to have a single layer of zero-finding routines, improving significantly the numerical stability and the speed of the code.[A word of caution is needed in the choice of the branch of the square-root. Along the Mellin inversion integration path, which is the only place in N space where the DL anomalous dimension is computed, the standard branch of the square-root is suitable for the collinear χ_s. However, for the anti-collinear χ_s, a different branch is needed, where the cut is placed on the negative imaginary axis, which avoids crossing the cut during integration.] The expansion of χ_s in power ofis given by χ_s(M) = a_11/M + a_11a_10^2/M^2 + a_21/M + (^3),which implies, according to Eq. 
(<ref>),χ_01 = a_11 = C_A/π,χ_02 = a_11a_10 = -11C_A^2 + 2n_f(C_A-2C_F)/12π^2,χ_11 = a_21 = n_f26C_F-23C_A/36π^2.Up to this order, these are identical to the dual of the exact anomalous dimension, as an obvious consequence to the fact that the approximate anomalous dimension is constructed to preserve LL and NLL accuracy. §.§ Inverse Mellin transforms Here we compute the various ingredients for the inverse Mellin transforms of Eqs. (<ref>)–(<ref>). Using the approximate form of γ_0 given in App. <ref>, we haveγ_0 (N)= a_11/N -(a_10+2a_11) +2(a_11+a_10)/N+1,γ_0'(N)= -a_11/N^2 -2(a_11+a_10)/(N+1)^2.We can also write the function Eq. (<ref>) asf_ mom(N) = 4/N+1 - 4/(N+1)^2.In Eq. (<ref>) a number of products appear. When a power of 1/N multiplies a power of 1/(N+1), it is always possible to write it as a sum of powers of individual poles in either N or N+1. Therefore, most terms can be computed by means of the following inverse Mellin transforms,M^-1 1/N^k+1= (-1)^k/k!log^kx/x, M^-1 1/(N+1)^j+1= (-1)^j/j!log^jx,where the ^-1 symbol is a shorthand notation for representing the Mellin inversion, ^-1 f(N)= ∫_1/2-i∞^1/2+i∞dN/2π ix^-N f(N).Additionally, the function ψ_1(N+1), multiplied by powers of 1/N or 1/(N+1), appears. The computation of these inverse Mellin is trickier. We start fromM^-1 ψ_1(N+1)= log x/x-1.To compute inverse Mellin of ψ_1(N+1) with products of 1/N and 1/(N+1), we compute consecutive convolutions of Eq. (<ref>) with the inverse Mellin of a single power of 1/N and 1/(N+1), which are given respectively by 1/x and 1. Starting fromM^-1 1/Nψ_1(N+1)= ∫_x^1dz/xlog z/z-1 = _2(1-x)/x, M^-1 1/N+1ψ_1(N+1)= ∫_x^1dz/zlog z/z-1 = ζ_2 - _2(x) + 1/2log^2x-log(1-x) log x,we can easily compute successive integrals by just integrating these results (as functions of z) in dz/x or dz/z. The relevant results areM^-1 1/N^2ψ_1(N+1)= 2[_3(x)-ζ_3]-[_2(x)+ζ_2]log x/x, M^-1 1/(N+1)^2ψ_1(N+1)= 2[_3(x)-ζ_3]-[_2(x)+ζ_2]log x-1/6log^3x, M^-1 1/N^3ψ_1(N+1)= 3[ζ_4-_4(x)]+[_3(x)+2ζ_3]log x + 1/2ζ_2log^2x/x, M^-1 1/(N+1)^3ψ_1(N+1)= 3[ζ_4-_4(x)]+[_3(x)+2ζ_3]log x + 1/2ζ_2log^2x + 1/24log^4x.Because thecode, where these expressions are implemented, has to be fast, as it is meant to be used in PDF fits, the appearance of polylogarithms is not ideal. Therefore, we can consider a small-x approximation of these expressions. After all, the complicated structure of the (^3) contribution in Eq. (<ref>) comes from the complicated all-order structure of the resummed result, but what really matters in the resummation is the prediction of small-x contributions, while uncontrolled terms which vanish as x→0 are irrelevant. Hence, we approximate the expressions above asM^-1 1/N^2ψ_1(N+1)-ζ_2= -2ζ_3/x + 2 - log x +(x) =M^-1 -2ζ_3/N + 2/N+1 + 1/(N+1)^2+(x)M^-1 1/(N+1)^2ψ_1(N+1)-ζ_2= -2ζ_3 - 1/6log^3 x +(x) =M^-1 -2ζ_3/N+1 + 1/(N+1)^4+(x)M^-1 1/N^3ψ_1(N+1)-ζ_2= 2ζ_3/xlog x + 3ζ_4/x -3 + log x +(x) =M^-1 -2ζ_3/N^2+3ζ_4/N - 3/N+1 - 1/(N+1)^2+(x)M^-1 1/(N+1)^3ψ_1(N+1)-ζ_2= 3ζ_4 + 2ζ_3log x + 1/24log^4 x +(x) =M^-1 3ζ_4/N+1-2ζ_3/(N+1)^2 + 1/(N+1)^5+(x)In these equations we have also provided the Mellin transform of the approximate expressions, which is needed for the analytic computation of the momentum conservation, Eq. (<ref>). We verified that using these approximate expressions leads only to tiny deviations with respect to the exact expressions, and all in a region of x which is not under control of small-x resummation (specifically x>10^-2). 
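Both the approximate anomalous dimension of the previous subsection and the inverse-Mellin pairs above lend themselves to quick numerical validation. In the sketch below (ours), a_1 and a_0 are illustrative stand-ins for their actual α_s-dependent values, and the Mellin convention f̃(N)=∫_0^1 dx x^N f(x) is the one implied by the pairs listed above:

import mpmath as mp

# (a) momentum conservation and analytic duality for the approximate gamma(N);
#     a1, a0 below are illustrative placeholders, not the physical coefficients
a1, a0 = 0.19, -0.17

def gamma(N):
    return a1/N + a0 - (a1 + a0)*2*N/(N + 1)

def chi_s(M):
    disc = (M + a1 - a0)**2 + 8*a1*(a1 + a0)
    return (a1 + a0 - M + mp.sqrt(disc))/(2*(2*a1 + a0 + M))

print(gamma(1))                                      # = 0, momentum conservation
print([gamma(chi_s(M)) for M in (0.05, 0.2, 0.4)])   # duality: returns M itself
print(-2*a1 - a0)                                    # large-N constant, > -1 here

# (b) spot-check of the inverse-Mellin pairs at a sample moment N = 2.3
N = mp.mpf('2.3')
pairs = [
    (lambda x: mp.log(x)**2/(2*x),     1/N**3),                  # 1/N^{k+1}, k=2
    (lambda x: -mp.log(x),             1/(N + 1)**2),            # 1/(N+1)^{j+1}, j=1
    (lambda x: mp.log(x)/(x - 1),      mp.polygamma(1, N + 1)),  # psi_1(N+1)
    (lambda x: mp.polylog(2, 1 - x)/x, mp.polygamma(1, N + 1)/N),
]
for f, rhs in pairs:
    print(mp.quad(lambda x: x**N*f(x), [0, 1]), rhs)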
The speed-up, on the other hand, is significant, fully justifying the use of the approximate expressions.
http://arxiv.org/abs/1708.07510v2
{ "authors": [ "Marco Bonvini", "Simone Marzani", "Claudio Muselli" ], "categories": [ "hep-ph" ], "primary_category": "hep-ph", "published": "20170824180000", "title": "Towards parton distribution functions with small-$x$ resummation: HELL 2.0" }
We provide a topological procedure to obtain geometric realizations of both classical and `exotic' G-manifolds, such as spheres, bundles over spheres and Kervaire manifolds. As an application, we apply the process known as Cheeger deformations to produce new metrics of both positive Ricci and almost non-negative curvature on such objects. The first author is financially supported by CNPq, grant number 131875/2016-7. The second author was financially supported by FAPESP, grant numbers 2009/07953-8 and 2012/25409-6, and CNPq, grant number 404266/2016-9.

§ INTRODUCTION

Compared with other curvature conditions, examples of manifolds with positive sectional curvature are still sparse in the literature (see <cit.> for a survey). For instance, there are still no obstructions that distinguish the class of simply connected compact positively curved manifolds from the class of non-negatively curved ones. In parallel to this situation, the existence of metrics of positive sectional curvature is still unknown for interesting manifolds, such as exotic spheres and sphere bundles over spheres. Easier to tackle is the existence of metrics of positive Ricci and positive scalar curvature: many geometric/topological constructions succeed in this aim, for instance, the topological constructions in <cit.> and the constructions based on symmetries in <cit.>. Here we provide a common ground for <cit.>, geometrically realizing `exotic' manifolds as isometric quotients of principal bundles over `standard' ones. The realization is incarnated in the form of a cross-diagram, Mπ←Pπ'→M', in which the total space P carries two commuting free G-actions: the principal ∙-action of π and the ⋆-action, of which π' is the quotient map. Special properties of the constructed diagrams allow one to compare the geometries of M and M' (see sections <ref> and <ref>). Former instances of diagram (<ref>) are found in <cit.>, mainly exploiting the differential topology of exotic spheres. On the other hand, the construction explicitly realizes Morita equivalences between related transformation groupoids[The authors thank O. Brahic and C. Ortiz for pointing it out], which might be of independent interest. As an application, we establish the existence of metrics of positive Ricci curvature and almost non-negative curvature on new examples (to the best of our knowledge). The construction is specially effective at realizing certain connected sums (see section <ref> or Theorem <ref>). We recall that a manifold M has almost non-negative sectional curvature if it admits a sequence of Riemannian metrics {g_n}_{n∈ℕ} satisfying sec_{g_n} ≥ -1/n and diam_{g_n} ≤ 1/n (see <cit.> for recent developments). Let Σ^7, Σ^8 be any homotopy spheres in dimensions 7 and 8, Σ^10 any homotopy 10-sphere which bounds a spin manifold, and Σ^{4m+1} the Kervaire sphere in dimension 4m+1.
Then each of the manifolds below admits a sequence {g_n}_{n∈ℕ} of metrics with positive Ricci curvature satisfying sec_{g_n} ≥ -1/n and diam_{g_n} ≤ 1/n:
* M^7#Σ^7, where M^7 is any 3-sphere bundle over S^4,
* M^8#Σ^8, where M^8 is either a 3-sphere bundle over S^5 or a 4-sphere bundle over S^4,
* M^10#Σ^10, where
  * M^10 = M^8 × S^2, with M^8 as in item (2),
  * M^10 is any 3-sphere bundle over S^7, 5-sphere bundle over S^5 or 6-sphere bundle over S^4,
* M^{4m+1}#Σ^{4m+1}, where
  * S^{2m} ⋯ M^{4m+1} → S^{2m+1} is a sphere bundle representing any multiple of the unit tangent bundle of S^{2m+1},
  * ℂP^m ⋯ M^{4m+1} → S^{2m+1} is the ℂP^m-bundle associated to any multiple of U(m) ⋯ U(m+1) → S^{2m+1},
  * M^{4m+1} = U(m+2)/SU(2)×U(m),
* (M^{8m+k} × N^{5-k})#Σ^{8m+5}, where N^{5-k} is any manifold with positive Ricci curvature and
  * S^{4m+k-1} ⋯ M^{8m+k} → S^{4m+1} is a sphere bundle representing the (k-1)-th (fiberwise) suspension of any multiple of the unit tangent bundle of S^{4m+1}, k ≥ 1,
  * k=1, ℍP^m ⋯ M^{8m+1} → S^{4m+1} is the ℍP^m-bundle associated to any multiple of Sp(m) ⋯ Sp(m+1) → S^{4m+1},
  * k=0, M = Sp(m+2)/Sp(2)×Sp(m),
  * k=1, M = M^{8m+1} as in item (<ref>).
We recall that a homotopy n-sphere is a manifold homotopy equivalent to the unit sphere of R^{n+1}. For n>4, homotopy spheres are homeomorphic to standard spheres (Smale <cit.>). A homotopy sphere is exotic if it is not diffeomorphic to the standard sphere. As computed by Kervaire and Milnor <cit.>, there are (up to orientation) exactly 14 exotic spheres in dimension 7, one in dimension 8, and one in dimension 10 which bounds a spin manifold (there are two more that do not). Kervaire spheres are the boundaries of P^{2n}(𝐀_2) (in Bredon <cit.> notation), the plumbing of two copies of the (disc bundle associated to the) tangent bundle of S^n, n=2m+1 (see section <ref> for a more detailed description). The Kervaire sphere is known to be exotic for all but finitely many (known) values of m (see Hill–Hopkins–Ravenel <cit.> and Wang–Xu <cit.>). When n is even, the resulting boundary, Σ^{4m-1}=∂P^{4m}(𝐀_2), has H_{2m}(Σ^{4m-1})≅ℤ_3. As an extra contribution, we provide metrics of positive Ricci curvature on quotients of Σ^{4m-1}. Whether n is even or odd, ∂P^{2n}(𝐀_2) possesses a (bi-axial) O(n)-action, which restricts to a free SO(2) (respectively, SU(2)) action when n is even (respectively, when n ≡ 0 mod 4). We denote the quotient by Σ P ℂ^n (respectively, Σ P ℍ^{n/2}). Σ P ℂ^n (respectively, Σ P ℍ^{n/2}) is a U(n)-manifold (respectively, an Sp(n/2)-manifold). Σ P ℂ^n and Σ P ℍ^{n/2} possess invariant metrics with positive Ricci curvature. The restriction/variety of examples in Theorem <ref> is related to the presence of symmetries and explicit realizations of spheres. To give a more precise statement, we fix the representations below:
(ρ_7) n=7, G=S^3: ρ=ρ_1⊕ρ_1⊕ρ_0, where ρ_1:S^3→O(3) is the representation induced by the double cover S^3→SO(3) and ρ_0:S^3→SO(1) is the trivial one;
(ρ_8) n=8, G=S^3: ρ=ρ_ℍ⊕ρ_1⊕ρ_0, where ρ_ℍ:S^3→O(4) is the representation induced by right (or left) multiplication by a quaternion;
(ρ_10) n=10, G=S^3: ρ=ρ_ℍ⊕ρ_1⊕3ρ_0 or ρ=ρ_1⊕ρ_1⊕ρ_1⊕ρ_0;
(ρ_{4m+1}) n=4m+1, G=U(m): ρ=ρ_{U(m)}⊕ρ_{U(m)}⊕ρ_0, where ρ_{U(m)}:U(m)→O(2m) is the standard representation;
(ρ_{8m+5}) n=8m+5, G=Sp(m): ρ=ρ_{Sp(m)}⊕ρ_{Sp(m)}⊕5ρ_0, where ρ_{Sp(m)}:Sp(m)→O(4m) is the standard representation.
Given a G-invariant metric on M, we recall that M/G is a metric space with the orbit distance. The G-orbits have constant dimension on an open and dense subset M^*⊂M, called the principal part (see Bredon <cit.>). The quotient M^*/G has a natural manifold structure that makes M^*→M^*/G a Riemannian submersion.
Let M^n be a G-manifold that posses a fixed point whose isotropy representation is (ρ_n). Then: * if the orbits on M have finite fundamental group and M posses a G-invariant metric such thatRicci_M^*/G≥1, then M^n#Σ^n has a metric with positive Ricci curvature * ifM posses a family of G-invariant metrics such that M^*/G has almost non-negative curvature, then M^n#Σ^n has almost non-negative curvature * if M posses a family of metrics satisfying (1) and (2), then M^n#Σ^n admits a family of metrics {g_n}_n∈ N with positive Ricci curvature satisfying _g_n≥ -1/n and diam_g_n≥ 1/n For Theorem <ref>, we construct a diagram as in (<ref>) with G compact and M' diffeomorphic to M^n#Σ^n. For G compact, we prove that, given a G-invariant metric on M, there is a G-invariant metric on M'such that M'/G is isometric to M/G (see section <ref> for the G-action on M'). Thus, if M satisfies the hypothesis in Theorems A, B or C in Searle–Wilhelm <cit.>, so does M'. Using diagram (<ref>), we also provide a self-contained proof of Theorem <ref> (the greater machinery in <cit.> can be avoided in Theorem <ref> since diagram (<ref>) gives a better control of the geometry on the non-principal part of M). Some prior works related to Theorem <ref>: the works of Nash <cit.> and Poor <cit.> provides metrics of positive Ricci curvature on sphere bundles over spheres. The procedure in Fukaya–Yamaguchi <cit.> is consistent with Nash <cit.> and Poor <cit.>, thus, providing metrics which are simultaneously Ricci positive and almost non-negative on sphere bundles over spheres. Crowley–Wraith <cit.> considers positive Ricci curvature on connected sums of highly connected odd-dimension manifolds with exotic spheres, covering the Ricci curvature in example (i) and (possibly partially) in example (iv). Almost non-negative curvature in homotopy 7-spheres was established by Searle–Wilhelm <cit.>. The connected sum M^n#Σ^n does not always results in a new manifold. Following de Sapio <cit.>(recall from <cit.> and <cit.> that, in de Sapio's notation, Σ^8∈σ(π_3SO(4),π_4SO(3)), Σ^10 is an order 3 generator of σ(π3SO(6),π_6SO(3)) and Σ^4m+1=σ(τ_2m,τ_2m), where τ_n:S^n→ SO(n+1) is a transition function for the tangent bundle of S^n+1), one concludes that M^n#Σ^n≅ M^n if M^n is the non-trivial SU(2)-principal bundle over S^5, the SU(2)-principal bundle over S^7 corresponding to a generator of π_6SU(2)≅ Z_12 and the unit sphere bundle T_1S^2m+1. M^n#Σ^n M^n for the following M^n: any product of standard spheres; any 4-sphere bundle over S^4; Σ^8× S^2; the SU(2)-principal bundles corresponding to three times a generator in π_6SU(2). All other cases are undecidable within the authors reach. §.§ Cross Diagrams and Notation The present construction encompasses the constructions in <cit.>. The aim is to produce a diagram as in (<ref>) out of a givenG-manifold M and equivariant bundle data (see Definition <ref>for details). For short, we denote diagram (<ref>) by Mπ←Pπ'→M'. In diagram (<ref>), P is a (special) G-G-bundle: following <cit.>, a G-G-bundle π:P→ M is a principal G-bundle equipped with an extra G-action, which we call the ⋆-action. Both G-actions on P are assumed to commute, making P a G×G-manifold. When dealing with the G× G-action, we use G×{𝕀} as the ⋆-action and {𝕀}× G as the principal action of π. We denote the ⋆-action (respectively, the principal action of π) by left juxtaposition (respectively, right juxtaposition). 
That is, for every (r,s,p)∈ G× G× P, (r,s,p)↦ rps^-1 (we use the inverse of s to keep the `left action' convention). Since the ⋆-action commutes with the principal action of π, it descends to a G-action on M which we denote by (r,x)↦ r· x. The action makes π equivariant: π(rps^-1)=rπ(p). In particular, if (r,s)∈ G× G fix p, r fixes π(p). On the other hand, if r∈ G fixes x∈ M, since the principal action of π acts freely and transitively on the fibers of π, for every p∈π^-1(x), there is a unique s_p∈ G such that rps_p^-1=p. Note that s_pg^-1=gs_pg^-1. We denote by (G× G)_p and G_x the isotropy subgroups on P and M. As in <cit.>, we consider only special G-G-bundles (and, by doing so, omit the adjective `special') by imposing two further conditions: the ⋆-action must be free; for every p∈ P, there is g∈ G such that[In practice we use Definition <ref>, based on transition functions. An equivalence between (<ref>)and Definition <ref> follows from Proposition <ref>.] (G× G)_p={(h,ghg^-1) | h∈ G_π(p)}. Observe that, if Mπ←Pπ'→M' is a special G-G-bundle, so it is M'π'←Pπ→M (by interchanging the wholes of the two actions). In particular, the principal action of π defines a G-action on M'. Most manifolds will be constructed by patching together subsets. For A_i⊂ X_i, i∈Λ, andbijections f_ij:A_i→ A_j satisfying f_ikf_kjf_ji(x)=f_ii(x)=x (for any x it make sense), we denote {f_ij:A_i→ A_j}_i,j∈Λ, we denote ∪_f_ijX_i=∪X_i/∼, where ∼ is the equivalence relation whose non-trivial relations are A_i∋ x∼ f_ij(x)∈ A_j. Of particular interest are the cases of `twisted manifolds' and of principal bundles: let {X_i=U_i}_i∈Λ be an open cover for a manifold M and f_ij:U_i∩ U_j→ U_i∩ U_j diffeomorphisms. The new object ∪_f_ijU_i is a manifold locally resembling M, but with possibly different global nature; to define a principal bundle, it is sufficient to provide a collection of transition functions. That is,a collection {ϕ_ij:U_i∩ U_j→ G}_i,j∈Λ satisfying the cocycle condition: ϕ_ijϕ_jkϕ_ki(x) =ϕ_ii(x)=id∈ G,  ∀ x ∈ U_i∩ U_j ∩ U_k. The total space P=∪_f_ijX_i is recovered by setting X_i=U_i× G and f_ij(x,g)=f_ϕ_ij(x,g)=(x,gϕ_ij(x)).The projection π : P→ M and the principal action (s,p)↦ ps^-1 are defined on U_i× G by π(x,g)=x, (x,g)s^-1=(x,sg). Both π and the multiplication (as defined above) commute with f_ϕ_ij, defining global maps. Although, only open covers are considered above, the casewhere M is the union of two manifolds by their boundaries is present in the examples. Let M=X∪ Y and f:∂ X→∂ Y a diffeomorphism. The manifold M'=X∪_f Y admits a unique smooth structure such that the inclusions X,Y⊂ M' are smooth. Such smooth structure is obtained through a collaring argument: using the collaring theorem (Theorem (3.3) in Kosinski <cit.>) one can extend X,Y to open manifolds X',Y' (without boundaries) whose 'ends' are diffeomorphic to ∂ X× (-1,1), ∂ Y× (-1,1) with embeddings j_X:∂ X× (-1,1)→ X', j_Y:∂ Y× (-1,1)→ Y'. Then M' is diffeomorphic to X'∪_f' Y' where f'(j_X(x,t))=j_Y(f(x̃),-t). Isotopies of f defines isotopies of f' implying that the manifold structure of M' does not depend on the choices of j_X,j_Y or the isotopic class of f. A sphere bundle S^k-1⋯ M→ B is called linear if O(k) (acting in the usual way on S^k-1) is a structure group. Equivalently, if there is a set of transition functions {ϕ_ij:U_i∩ U_j→ O(n)}. Recall that linear sphere bundles can always be suspended: consider s_k:O(k)→ O(k+1)the inclusion of O(k) as the subgroup of O(k+1) with 1 in the upper-left corner. 
If {ϕ_ij:U_i∩ U_j→ O(k)} are transition functions for S^k-1⋯ Mπ→B, then {s_kϕ_ij:U_i∩ U_j→ O(k+1)} are transition functions for a linear S^k-bundle over B. We denote the quaternionic field as H and its subspace of pure imaginary quaternions as Im H. The norm, inverse and conjugate of a quaternion x are denoted as |x|, x^-1, x̅, respectively. Some standard spaces (such as discs and spheres) are described in coordinates. When we denote M⊂ V_1× V_2× V_3 we mean that a point in M is a triple (x_1,x_2,x_3) where x_i∈ V_i – we observe there might be relations among the x_i's. For instance, S^7⊂ H× H denotes S^7={(x,y)∈ H× H |  |x|^2+|y|^2=1}. We denote by R_g the Riemannian tensor of the metric g, adopting the sign convention R_g(X,Y)Z=∇_X∇_YZ-∇_Y∇_XZ-∇_[X,Y]Z. We denote K_g(X,Y)=g(R_g(X,Y)Y,X), the unreduced sectional curvature of g and by ||X∧ Y||^2_g=||X||^2_g||Y||^2-g(X,Y)^2. The Ricci tensor of g is denoted by _g(X,Y)=∑_i=1^ng(R(e_i,X)Y,e_i), where {e_1,...,e_n} is an orthonormal basis for g. The associated quadratic form is denoted _g(X)=_g(X,X). §.§ AcknowledgmentsPart of this work was accomplished during the first author's PhD under Prof. A. Rigas and Prof. C. Durán and a postdoc period under Prof. L. A. B. San Martin. The first author thanks all of them. Both thank Prof. Lino Gramafor suggestions and comments. The second author also thanks Prof. L. Grama for teaching significant part of his geometrical background. § CONSTRUCTING G-G-BUNDLES Consider a G-manifold M, an open cover {U_i} and a set of transition functions {ϕ_ij:U_i∩ U_j→ G}. By imposing conditions on U_i and ϕ_ij, one can end up with extra structure on the bundle π:P→ M defined by {ϕ_ij}. Let M be a G-manifold and {U_i} a G-invariant open cover (i.e., U_i is G-invariant for every i∈Λ). A collection {ϕ_ij:U_i∩ U_j→ G} is called a ⋆-collection (or, for short, a ⋆-cocycle) if ϕ_ij satisfies the cocycle condition (<ref>) and ϕ_ij(g· x) = gϕ_ij(x)g^-1, for all i,j and x∈ U_i∩ U_j. Condition (<ref>) is just equivariance with respect to conjugation on G. Given ϕ_ij, we define the adjoint map ϕ_ij : U_i ∩ U_j → U_i ∩ U_j x ↦ϕ_ij(x)· x. ϕ_ij is a G-equivariant diffeomorphism of U_i∩ U_j whenever ϕ_ij satisfies (<ref>) (Lemma <ref>). The core of the paper resides on the next Theorem. Let π:P→ M be the principal G-bundle associated to a ⋆-collection {ϕ_ij:U_i∩ U_j→ G}. Then P admits a new action ⋆:G× P→ P, such that * ⋆ is free and commutes with the principal action of π * the quotient P/⋆ is a G-manifoldG-equivariantly diffeomorphic to M':= ∪_ϕ_ijU_i, where the G-action on U_i is the restriction of the G-action on M. Let P=∪_f_ϕ_ij U_i× G be as in section <ref>. We define the ⋆-action as r(x,g)=(r· x,gr^-1). The action is globally well-defined since, forx∈ U_i∩ U_j, (q· x,gq^-1ϕ_ij(qx)) = (q· x,gϕ_ij(x)q^-1). The definition of M' implicitly requires the cocycle conditionϕ_jkϕ_ij=ϕ_ik. Moreover, its smooth G-manifold structure requiresϕ_ij to be equivariant diffeomorphisms. Next Lemma guarantees both conditions (see also Bredon <cit.>). Let U be a smooth G-manifold and θ, θ': U → G be smooth maps satisfying (<ref>). Then θ,θ':U→ U are G-equivariant diffeomorphisms and θθ'=θ'θ, where θθ': U → G is the pointwise multiplication θθ'(x) = θ(x)θ'(x). For x ∈ U, (θ'θ)(x) = θ'(θ(x)x)θ(x)x= θ(x)θ'(x)θ(x)^-1θ(x)x=θ(x)θ'(x)x=θθ'(x). In particular, taking θ'(x)=θ(x)^-1, one gets θ'=θ^-1. G-equivariance of θ is straightforward. To identify P/⋆ with M', we defineπ':P→ M' in eachU_i× G by π'(x,g)=g· x. 
(compare Cheeger <cit.>) π' is well defined on P since, for all x∈ U_i∩ U_j, π'(x,gϕ_ij(x))=gϕ_ij(x)x=ϕ_ij(gx)gx=ϕ_ij(π'(x,g)), Moreover, π'(rp)=π'(p) for every p∈ P and ι:U_i→ P defined by ι_i(x)=(x,𝕀)defines a local section for both π and π'. In particular, π' is a submersion whose fibers are the ⋆-orbits. §.§ Induced Bundles Before proceeding to examples, we provide a method to produce new G-G-bundles out of old ones. As one can see, a G-G-bundle naturally lies on the category of G-spaces and G-equivariant maps. Therefore, one could expect that the pullback construction could be carried out by equivariant maps onthe set of G-G-bundles. Here we provide details of this procedure (compare <cit.>). Let M, N be smooth G-manifolds andf : N → M a smoothG-equivariant map, i.e., f(g· x) = g· f(x) for every (x,g) ∈ N× G. Let {ϕ_ij:U_i∩ U_j→ G} be a ⋆-collection of transition functionsassociated to the covering M=∪ U_i and consider π : P → M the ⋆-bundle associated to such collection. We define the induced bundle (or pullback bundle) π_f : f^∗P → Nby f^∗P := {(x,p) ∈ N× P : f(x) = π(p)}, with projection π_f(x,p) := x and principal action (x,p)s^-1 := (x,sp). Standard arguments shows that f^*P is a submanifold of N× P and π_f is a smooth principal submersion. The ⋆-action can also be pulled back to π_f. Define r(x,p) := (r· x,gr^-1). Equation (<ref>) produces an action on N× P that leaves f^∗P invariant. π_f is clearly equivariant with respect to (<ref>). The induced bundle construction becomes a more interesting construction due to its relation with the original bundle π:P→ N. Let f:N→ M be a smooth G-equivariant map and Pπ→ M the G-G-bundle associated to {ϕ_ij:U_i∩ U_j→ G}. Then, * π_f:f^*P→ N is equivariantly isomorphic tothe G-G-bundle associated to the ⋆-collection {ϕ_ij∘ f : f^-1(U_i) ∩ f^-1(U_j) → G}; * the quotient f^*P/⋆ is G-equivariantly diffeomorphic to N' := ∪_ϕ_ij∘ ff^-1(U_i); * there is a well-defined map f':N'→ M' such that, f'|_f^-1(U_i)=f|_f^-1(U_i). The G-equivariance of f guarantees the invariance of the sets f^-1(U_i) and equivariance of ϕ_ij∘ f: ϕ_ij(f(g· x))) = ϕ_ij(g· f(x)) = gϕ_ij(f(x))g^-1. To show that f^*P is the bundle associated to {ϕ_ij∘ f}, consider P=∪_f_ϕ_ij U_i× G and define a map Φ_i:π_f^-1(f^-1(U_i))→ f^-1(U_i)× G via the expression (x,(f(x),g))↦ (x,g). The maps Φ_iclearly patch together to define a G×G-equivariant diffeomorphism between f^*P and ∪_f_ϕ_ijf (f^-1(U_i)×G). Item (ii)follows fromitem (i) and the second item in Theorem <ref>.For item (iii), consider f^*:f^*P→ P, f^*(x,p)=p. f^* is the map the makes the following pullback diagram commutative: f^*P[r]^f^*[d]^π_fP[d]^πN[r]^f M One observes that f^* is G× G-equivariant. In particular, it defines a map f':f^*P/⋆→ P/⋆. It follows from (<ref>) that f^*(ι_i(x))=ι_i(f(x)), where ι_i is defined in (<ref>). Therefore, for x∈ f^-1(U_i), f'(x)=π(f^*(ι_i(x)))=π(ι_i(f(x)))=f(x). § EXAMPLES In this section we provide basic examples of (special) G-G-bundles. We start with the Hopf fibration h:S^7→ S^4 and linear S^3-bundles over S^4, followed by the bundles in <cit.> that realizeshomotopic 8-, 10- and Kervaire spheres (see also <cit.>). Examples <ref> and <ref> can be found in <cit.>. The bundles in Wilhelm <cit.> also can be described as G-G-bundles. §.§ The Hopf S^3-S^3-bundle Consider S^7,S^4 asthe unitary spheres on the quaternionic plane ℍ× H and on R× H, respectively. Define the Hopf map h:S^7→ S^4 by h[ x; y ]=[ |x|^2-|y|^2;2xy̅ ]. 
Let S^3 denote the unitary sphere on H and observe that, for r,s∈ S^3, h[ rxs̅; rys̅ ]=[ |x|^2-|y|^2; r2xy̅r̅ ]. Condition (<ref>) is easily verified (notice that it is sufficient to consider x∈ R). Therefore h defines an S^3-S^3-bundle with ⋆-action defined by the r-multiplication. Since quaternionic conjugation on S^7 interchanges r and s, the quotient S^7/⋆ is diffeomorphic to S^4. Consider D^4_±={(λ,x)∈ S^4 | ±λ≥ 0}.Local trivializations of h are given by the maps Φ_±:D^4_±→ S^7, Φ_+[ [ cos (t); sin (t) ξ ], q ]=[cos (t/2)q̅; sin (t/2) ξ̅q̅ ],Φ_-[ [-cos (t); sin (t) ξ ],  q ]=[ sin (t/2)ξq̅; cos (t/2) q̅ ], where ξ is an unitary quaternion. Taking U_0=D^4_+ and U_1=D^4_-, we have ϕ_01:U_0∩ U_1={(0,ξ)∈ S^4 |ξ∈ H}→ S^3 as ϕ_01(0,ξ)=ξ. By identifying U_0∩ U_1=S^3 (dropping the first coordinate), we denote ϕ_01=I:S^3→ S^3, the identity map. §.§ The Gromoll-Meyer sphere We recall the definition of the Lie group of quaternionic matrices Sp(2) Sp(2) = {[ a c; b d ]∈ S^7× S^7 |  c̅a + d̅b = 0}. The projection onto the first column pr:Sp(2)→ S^7 is a principal S^3-bundle with principal action: [ a c; b d ]q̅ = [a cq;b dq ]. Gromoll and Meyer <cit.> introduced the ⋆-action q [ a c; b d ] = [ qaqqc; qbqqd ]. whose quotient is an exotic 7-sphere, concluding their celebrated result on the existence of an exotic sphere withnon-negative sectional curvature: The corresponding action on S^7 can be ready from the first column of (<ref>): q·[ x; y ]=[ qxq̅; qyq̅ ] The S^3-S^3-bundle defined by (<ref>),(<ref>) gives rise to the cross-diagram in Durán <cit.> which is used to geometrically produce an explicit clutching diffeomorphism b̂:S^6→ S^6 for Σ^7=Sp(2)/⋆: S^3@.[d]^∙ S^3@..[r]^⋆ Sp(2)[d]^pr[r]^pr' Σ^7S^7 In order to produce b̂, the geometry of Σ^7 is explored through (<ref>). This paper is dedicated to further advance the geometrical/topological relations started in <cit.>. As an S^3-S^3-bundle, pr can be realized as a pullback from h:S^7→ S^4. In <cit.>, this pullback realization is used to recover the identification of Sp(2)/⋆ with the Milnor bundle M_1,-2≅ M_2,-1 (as in Gromoll–Meyer <cit.>) which isan exotic sphere. §.§ Milnor bundles The usual boundary map in the long homotopy sequence of the fibration EG→ BG, G=SO(k) provides a bijection between the set o linear S^k-1-bundles over S^n and π_n-1SO(k). The linear S^3-bundles over S^4 are usually called Milnor bundles. As in Milnor <cit.>, observe that t_mn:S^3→ SO(4), t_mn(x)v=x^mvx^n,v∈ H are representatives of π_3SO(4)≅ Z⊕ Z. We define M_m,n=D^4× S^3∪_f_t_m,nD^4× S^3. Milnor observed that M_m,n is homeomorphic to S^7 whenever m+n=1, but not diffeomorphic when m=2, for example (see <cit.> for a complete classification). On the other hand, the bundles P_n=M_0,n≅ M_-n,0 are S^3-principal. We use this section to show that every Milnor bundle can be obtained out of some P_n. Consider a pair of S^3-principal bundles π_k:P_k→ S^4, π_r:P_r→ S^4. π_k is a S^3-S^3-bundle with the⋆-action r(x,q)=(rxr̅,qr̅). We consider P_r as the S^3-manifold with action r· (x,q)=(rxr̅,rqr̅). Observe that both (<ref>) and (<ref>) commutes with f_t_n,0 for every n, defining global actions on P_n. Given π_k, its unique trivialization function is ϕ_01(x)=x^k, where both U_0 and U_1 are identified with D^4, the unit disc on H. Consider the S^3-S^3-bundle given by π_r^*P_k→ P_r. Each copy D^4× S^3⊂ P_r is invariant with respect to(<ref>). The only transition function of π^*_rP_k associated to the cover {D^4× S^3, D^4× S^3} is t_k,0∘π_r|_S^3× S^3:S^3× S^3→ S^3.. 
From Theorem <ref>, the resulting manifold is M'=D^4× S^3∪_ψ D^4× S^3 where ψ is the composition of f_0,r with (t_k,0π_r|_S^3× S^3). A straightforward computation gives ψ=f_t_r,k-r. We conclude that the bundle M_m,n can be realized by a S^3-S^3-bundle P_r← P→ M_m,n, where r=m. One sees that r parametrizes the Euler class and k the third homology of M_r,k-r (see Milnor-Stasheff <cit.>). It is worth noticing that (S^4)'=S^4 in S^4← P_k→ (S^4)' and that the map (π_r)':M_r,k-r→ S^4 coincides with the bundle projection π_(r,k-r).The full diagram is: @R=9pt@C=7pt@1 S^3 @..[dd]S^3 @..[dd] S^3@..[rd]S^3@..[rd] π_r^*P_k[rd][rr][dd]P_k[rd][dd] M_r,k-r@-[r] [r]S^4P_r[rr]^π_r S^4 The Gromoll–Meyer sphere happens with r=1, k=-1 (in this case, P_1=P_-1=S^7 and π_1=-π_-1=h – it is well known that the pull back of h by -h has total space Sp(2), see <cit.>). Another way to define a S^3-S^3-bundle over P_r is to consider P_r as the S^3-manifold with action (<ref>). A straightforward computation shows that the resulting diagram is P_r←π_r^*P_k→ P_r-k. §.§ Other exotic spheres In <cit.>, G-G-bundles were used to give geometric presentations of exotic spheres: There are explicit (special) G-G-bundles: * S^8 π^11⟵E^11→Σ^8, where G=S^3 and Σ^8 is the only exotic 8-sphere * S^10 π^13⟵E^13→Σ^10, where G=S^3 and Σ^10 is a generator of the order 3 group of homotopy 10-spheres which bound spin manifolds. * S^2n-1 l_n⟵L_n→Σ^2n-1, where G=O(n) and Σ^2n-1=∂ P^2n(𝐀^2) π^11 and π^13 are pulled back from pr:Sp(2)→ S^7 and l_n' from the frame bundle pr_n:O(n+1)→ S^n. As in the case of pr, we consider O(n+1) as a matrix group and pr_n as the projection to the matrix's first column. We give a brief description ofπ^11,π^13,l_n. Consider S^8 ⊂ℝ×ℍ× H endowed with the action q·[ λ; x; y ] = [ λ;qx; qyq ]. Observe that f_8 : S^8 → S^7, defined by f_8[ λ; x; w ] = 1√(λ^2 + |x|^4 + |w|^2)[ λ + xix; w ], is equivariant with respect to (<ref>) and (<ref>). The bundle π^11 is defined as thepullback of pr:Sp(2)→ S^7 by f_8. We present π^10 with two different actions. Let S^10⊂Im H× H× H be the S^3-manifold with one of the following actions[Although action (II) is not considered in <cit.>, going through the proof of Theorem 1 in <cit.>, one verifies that the sphere we get using action (II) is diffeomorphic to Σ^10#Σ^10#Σ^10#Σ^10, which turns to be diffeomorphic to Σ^10.] (I) q·[ p; w; x ] = [ p;qw; qxq ], (II) q·[ p; w; x ] = [ qpq̅; qwq̅; qxq̅ ]. For the pulling back map, f_10:S^10→ S^7, define the Blakers-Massey element b: S^6 → S^3 as in <cit.>: b(p,w) := w/|w|exp(π p)w/|w|,  if w ≠ 0, -1,  if w = 0. Let φ:[0,1]→[0,1] be a smooth non-decreasing function that is the identity on [1/4,3/4] and constant near 0 and 1, fixing 0 and 1. Set f_10[ ξ; w; x ]=[ √(1-φ(|x|)^2)b(ξ/√(|ξ|^2+|w|^2),w/√(|ξ|^2+|w|^2)); φ(|x|)x/|x| ]. Note that f_10 is equivariant with respect to (<ref>) and (<ref>). Define π^13 is the pullback ofpr:Sp(2)→ S^7by f_10. Action (<ref>)-(I), can be extended to the effective SO(4)=S^3× S^3/{±(1,1)}-action (q,r)·[ p; w; x ] = [ rpr̅; qwr̅;qxq ] which can be used to realize Σ^10=(S^10)' as a SO(4)-manifold. To this aim, one can eitherobserve that f_10 is invariant with respect to the r-coordinate (therefore the clutching diffeomorphism will have the desired equivariance) or consider Sp(2)× S^3/{±(1,1)} as an SO(4)-SO(4)-bundle with the r-coordinate acting only on the S^3-factor. 
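The equivariance properties invoked above for h and f_10 (and, earlier, for f_8) boil down to elementary quaternion identities: |rxs̄|=|x| for unit r, s, conjugation reverses products, and (qx)i(qx)̄ = q(xix̄)q̄. A small numerical spot-check of ours, on random data, reads:

import numpy as np

# Quaternion helpers, q = (w, x, y, z); used to spot-check the equivariance
# of the Hopf map h and the key identity behind the equivariance of f_8.
def qmul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def conj(q):
    return q*np.array([1., -1., -1., -1.])

def unit():
    v = np.random.randn(4)
    return v/np.linalg.norm(v)

def hopf(x, y):  # h(x, y) = (|x|^2 - |y|^2, 2 x ybar)
    return x @ x - y @ y, 2*qmul(x, conj(y))

x, y, r, s = np.random.randn(4), np.random.randn(4), unit(), unit()

# h(r x sbar, r y sbar) = (|x|^2 - |y|^2, r (2 x ybar) rbar):
l1, w1 = hopf(qmul(qmul(r, x), conj(s)), qmul(qmul(r, y), conj(s)))
l2, w2 = hopf(x, y)
print(np.isclose(l1, l2), np.allclose(w1, qmul(qmul(r, w2), conj(r))))

# core identity behind f_8: (qx) i (qx)bar = q (x i xbar) qbar for unit q
i, q = np.array([0., 1., 0., 0.]), unit()
lhs = qmul(qmul(qmul(q, x), i), conj(qmul(q, x)))
rhs = qmul(qmul(q, qmul(qmul(x, i), conj(x))), conj(q))
print(np.allclose(lhs, rhs))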
According to a Conjecture 2 in Straume <cit.>, the SO(4)-manifold Σ^10 should be a peculiarly highly symmetric sphere among homotopy spheres that not bound parallelizable manifolds (see <cit.>). The O(n)-O(n)-bundle π_n is realized as the pullback of pr_n:O(n+1)→ S^n by Jτ_n (defined below). Observe that pr_n is an O(n)-O(n)-bundle with ⋆-action given by left multiplication of matrices: consider O(n) acting through the morphism s_n:O(n)→ O(n+1)and define pr_n(A)=Ae_0, where e_0=(1,0,0,...,0)^T. The O(n)-action r⋆ A=s_n(r)A is free and satisfies pr_n(rA)=s_n(r) pr_n(A). An equivariant transition function for pr_n is τ_n:S^n-1→ O(n) defined as τ_n(x)v=2x,v-v. We consider S^2n-1⊂ R^n⊕ R^n and the map Jτ_n:S^2n-1→ S^n, Jτ_n(x_1,x_2)= exp_e_0(πτ_n(x_2/|x_2|)x_1). Jτ_n is equivariant with respect to (<ref>) and the biaxial action r·[ x; y ]=[ rx; ry ]. The resulting O(n)-O(n)-bundle l_n: L_n=Jτ_n^*O(n+1)→ S^2n-1 gives rise to the manifold Σ^2n-1=L_n/⋆. Following <cit.>, Σ^2n-1 is homeomorphic to S^2n-1 for n odd and has H_n-1(Σ^2n-1)≅ Z_3 whenn is even (as mentioned in the introduction,Σ^2n-1=∂ P^2n(A_2) in Bredon <cit.>). Next we highlight two phenomena depending on the parity of n. If n=2m, O(2m) admits a subgroup S^1⊂ O(2m) acting freely on S^2n-1: consider S^1 as the subgroup of block diagonal matrices diag(A,A,...,A), where A∈ SO(2) (i.e.,the exponential of the standardcomplex structure of R^2m). From Theorem <ref>, item (ii), the resultingS^1-action is free on Σ^2n-1. The resulting quotient Σ^4m-1/S^1, denoted by Σ P C^2m-1, has π_k Σ P C^2m-1≅π_k CP^2m-1 for k≠ 2m and π_2mΣ P C^2m-1≅ Z_3. When n=4m, one gets a free S^3-action by considering A∈ Sp(1)⊂ O(4) in diag(A,...,A). The resulting quotient ,Σ^8m-1/S^3:=Σ P H^2m-1, satisfies π_k Σ P H^2m-1≅π_k HP^2m-1 for k≠ 4m and π_4mΣ P H^2m-1≅ Z_3. We also observe that there are subgroups U(m)⊂ O(2m), Sp(m)⊂ O(4m) commuting with the S^1-, respectively S^3-action, defining Σ P C^2m-1 as a U(m)-manifold and Σ P H^2m-1 as a Sp(m)-manifold. In section <ref>, we provide invariant metrics of positive Ricci curvature on both Σ P C^2m-1 andΣ P H^2m-1. If n is odd, n=2m+1, weconsider the bundle reduction pr_m^ C:U(m+1)→ S^2m+1. Considering U(m+1)⊂ O(n+1), we take pr_m^ C as the restriction pr_n|_U(m+1). By observing that both right and left multiplication by U(m)⊂ O(n) leaves U(m+1) invariant, one concludes that pr_m^ C is a U(m)-U(m)-bundle. An equivariant transition map τ_m^ C:m:S^2m→ U(m) is presented in <cit.>: τ_m^ C[ y; z ]= (𝕀 -z/|z|(1+e^π y)z̅^t/|z|), where y∈ i R and z∈ C^m. We can therefore consider A U(m)-reduction of L_n can be realized by the pull back of Jτ^ C_m:S^4m+1→ S^2m+1, Jτ^ C_m(x_1,x_2)= exp(πτ^ C_m(x_2/|x_2|)x_1). Jη_m is equivariant with respect to (<ref>) and the U(m)-action defined by restricting(<ref>) to U(m)⊂ O(n-1). If n=4m+3, the analogous reduction Sp(m+1)⊂ O(n+1) works along the same lines, with transition map τ^ H_m:S^4m+2→ Sp(m), obtained by replacing i R by Im H and C^m by H^m in (<ref>). The advantage of the U(m), Sp(m) realization of Σ^4m+1 is the presence of fixed points, allowing us to perform equivariant connected sums (see section <ref>). § CONNECTED SUMS Here we realizediagrams of the form M← f^*P→ M#Σ^n, where Σ^n is an exotic sphere. Insection <ref>, the diagram is used to study the Ricci curvature of M#Σ^navoiding the intricate geometry of a connected sum. Let M be a G-manifold with a fixed point p∈ M. AssumeG is compact and that M has a G-invariant Riemannian metric. 
The differential of the action at p induces a morphism ρ:G→ O(T_pM) called the isotropy representation of G at p. On the other hand, we consider S^n⊂ R× T_pM as a G-manifold with action g·(λ,x)=(λ,ρ(g)x). Note that e_0=(1,0)^T is a fixed point. Consider D_±={(λ, x)∈ S^n | ±λ≥ 0} and S^n-1=D_+∩ D_-, observing that D_+, D_- and S^n-1 are G-invariant subsets. If ϕ:S^n-1→ G satisfies the equivariance condition (<ref>), one constructs the G-G-bundle P→ S^n, P=D_+× G ⋃_f_ϕ D_-× G, where f_ϕ(x,g)=(x,gϕ(x)) and the ⋆-action is defined as in Theorem <ref>: r(x,g)=(r· x,gr^-1). Using (<ref>), Theorem <ref> identifies (S^n)' as the twisted sphere (S^n)'=D^n∪_ϕ̂ D^n. As a next step, we pull back P to M. The pullback function f:M→ S^n can be obtained along the lines of the Thom–Pontryagin construction (see Kosinski <cit.>, sections IX.4 and IX.5): let D_ϵ be an ϵ-disc around the fixed point p such that exp_p|_D_ϵ is a diffeomorphism. Define f:M→ S^n as f(x)=exp_e_0(πφ(|exp_p^-1(x)|/ϵ) exp_p^-1(x)/|exp_p^-1(x)|) for x∈ D_ϵ, and f(x)=-e_0 for x∉ D_ϵ. As P is trivial along D_±, Proposition <ref>, item (i), gives f^*P=f^-1(D_+)× G ⋃_f_ϕ∘ f f^-1(D_-)× G=(M-D_ϵ/2)× G ⋃_f_ϕ D_ϵ/2× G, where D_ϵ/2 is the ϵ/2-disc around p and we identify S^n-1 with ∂ D_ϵ/2. Therefore, M'=(M-D_ϵ/2)⋃_ϕ̂ D_ϵ/2. M' is easily seen to be diffeomorphic to M#(S^n)'. We have: Let M^n be a G-manifold and p∈ M a fixed point with isotropy representation ρ:G→ O(n). Then, given a map ϕ:S^n-1→ G satisfying ϕ(ρ(g)x)=gϕ(x)g^-1, there is a G-G-bundle M← P→ M#Σ^n, where Σ^n=D^n∪_ϕ̂ D^n. Following Lemma <ref>, if ϕ satisfies the hypothesis of Theorem <ref>, so does ϕ^k, for every k∈ Z. Since #_kΣ^n=D^n∪_ϕ^k D^n, the resulting M' gives the k-fold connected sum M#Σ^n#...#Σ^n. One can use the Thom–Pontryagin construction to provide a more general setting: suppose N^k⊂ M^n+k is a G-invariant manifold and F: R^n× N→ M is a frame of ν N such that G satisfies gF(v,x)=F(ρ(g)v,gx) for some linear representation ρ:G→ O(n). One considers the map f:M→ S^n+1, f(z)=exp_e_0(πφ(|v|)v/|v|) for z=F(x,v), and f(z)=-e_0 for z∉ F(R^n× N). Given a function ϕ: S^n-1→ G as in Theorem <ref>, one can again consider the G-G-bundle (<ref>) and its pullback f^*P. The resulting manifold, M', resembles Bredon's pairing <cit.>, as a version of M `twisted' along the boundary of a tube around N. Examples based on this construction will be presented elsewhere. It follows that there are functions ϕ:S^n-1→ G realizing P=D^n× G∪_f_ϕ D^n× G and, therefore, Σ^n=D^n∪_ϕ̂ D^n, where P is E^11, E^13 or L_2m+1 in Theorem <ref> (Proposition <ref> or Theorem 4.2 and Proposition 4.4 in <cit.>). Explicit formulas for ϕ in the case of E^11 and E^13 are presented in <cit.>. For b:S^6→ S^3 in <ref>, Σ^7=D^7∪_b̂ D^7 is a generator of the group of homotopy 7-spheres (<cit.>). Observe that the representations (ρ_n) in the introduction are related to the fixed points of the S^3-manifolds Σ^7, Σ^8, Σ^10, the U(m)-manifold Σ^4m+1 and the Sp(m)-manifold Σ^8m+5, respectively. To realize the examples in Theorem <ref> we just need to observe that M^n in Theorem <ref> has a G-action carrying a fixed point with isotropy representation (ρ_n). The reducibility of the representations (ρ_7)-(ρ_8m+5) allows us to explore G-manifolds which are locally products, such as (trivial and non-trivial) bundles. In what follows, we list some examples. As `building blocks', we use homogeneous manifolds such as S^n and the projective spaces CP^m, HP^m. If H/K is a homogeneous manifold, the pair (H/K,ρ') is to be interpreted as the manifold H/K endowed with the G-action induced by ρ'(G)⊂ K.
Thus, ρ' is the isotropy representation of the G-manifold H/K at the `base point' K∈ H/K. In general, (N,ρ') will denote a manifold that possesses a fixed point with isotropy representation ρ'. For the representations, we use the notation in the list (ρ_7)-(ρ_8m+5). §.§.§ Product manifolds * (S^3,ρ_1)× (S^4,ρ_1⊕ρ_0) * (S^3,ρ_1)× (S^3,ρ_1)× (N^1,ρ_0) * (S^6,ρ_1⊕ρ_1)× (N^1,ρ_0) * (S^3,ρ_1)× (S^5,ρ_⊕ρ_0) * (S^3,ρ_1)× (S^4,ρ_)× (N^1, ρ_0) * (S^7,ρ_⊕ρ_1)× (N^1,ρ_0) * (N^8,ρ_⊕ρ_1⊕ρ_0)× (N^2,2ρ_0) * (S^2m,ρ_U(m))× (S^2m+1,ρ_U(m)⊕ρ_0) * (S^4m,ρ_U(m)⊕ρ_U(m))× (N^1,ρ_0) * ( CP^m,ρ_U(m))× (S^2m+1,ρ_U(m)⊕ρ_0) * ( CP^m,ρ_U(m))× (S^2m,ρ_U(m))× (N^1,ρ_0) * ( CP^m,ρ_U(m))× ( CP^m,ρ_U(m))× (N^1,ρ_0) * (U(m+2)/SU(2)× U(m),ρ_U(m)⊕ρ_U(m)⊕ρ_0) * (U(m+2)/U(2)× U(m),ρ_U(m)⊕ρ_U(m))× (N^1,ρ_0) * (S^4m,ρ_Sp(m))× (S^4m+5,ρ_Sp(m)⊕ 5ρ_0) * (S^8m,ρ_Sp(m)⊕ρ_Sp(m))× (N^5,5ρ_0) * ( HP^m,ρ_Sp(m))× (S^4m+l,ρ_Sp(m)⊕ lρ_0)× (N^5-l,(5-l)ρ_0), for 0≤ l≤ 5 * ( HP^m,ρ_Sp(m))× ( HP^m,ρ_Sp(m))× (N^5,5ρ_0) * (Sp(m+2)/Sp(2)× Sp(m),ρ_Sp(m)⊕ρ_Sp(m))× (N^5,5ρ_0) As mentioned, T_1S^2m+1#Σ^4m+1≅ T_1S^2m+1 (see de Sapio <cit.>). §.§ Bundles over spheres To consider actions on bundles, we rely on explicit equivariant expressions of transition functions, such as (<ref>). §.§.§ Milnor bundles A family of examples is given by the Milnor bundles: action (<ref>) is well defined on every M_m,n, and (0,1)^T is a fixed point with isotropy representation (ρ_7). §.§.§ Explicit non-linear S^6-bundles over S^1 Given a diffeomorphism h:S^6→ S^6, an S^6-bundle over S^1 is defined by E_h=[0,1]× S^6∪_f_h[0,1]× S^6, where f_h:{0,1}× S^6→{0,1}× S^6 is defined by f_h(0,x)=(0,x), f_h(1,x)=(1,h(x)). There are interesting choices we can make for h: by taking h=α, the antipodal map of S^6, one gets the non-trivial linear S^6-bundle over S^1. Moreover, <cit.> provides the family of S^3-equivariant fixed-point-free involutions θ_k=αb̂^k:S^6→ S^6. It is also proved in <cit.> that α=θ_0,...,θ_27 parametrize the 28 connected components of Diff^-(S^6), the set of orientation-reversing diffeomorphisms of S^6. Therefore, E_k=E_θ_k is isomorphic to a linear bundle if and only if k≡ 0 mod 28. One can define E_k as an S^3-manifold using action (<ref>) on S^6 (taking S^6=S^7∩{(x)=0}). It has a fixed point ((0,(0,1)^T) in any copy of [0,1]× S^6) with isotropy representation (ρ_7). The authors do not know whether E_k is diffeomorphic to E_0 or not. The advantage of using θ_k instead of the orientation-preserving b̂^k is that θ_k, being a fixed-point-free involution, defines a free Z_2-action on E_k. The quotient is the product of a fake projective space F RP^6 and S^1. Although the observations above do provide S^3-S^3-bundles E_k← P→ E_k#Σ^7, we are not able to give any new relevant information about E_k#Σ^7, since the geometry of E_k is itself unknown to the authors (for instance, E_k cannot have positive Ricci curvature since it has infinite fundamental group). One might believe that E_k≅ E_0#_kΣ^7. In this case, there is a cross-diagram E_0← P→ E_k and, possibly, something could be said about E_k. §.§ 8-dimensional bundles Linear S^3-bundles over S^5 and S^4-bundles over S^4 are parametrized by π_4SO(4)≈ Z_2+ Z_2 and π_3SO(5)≈ Z, respectively. Define η:S^4 → S^3 by η[ λ; x ] = (λ +xix̅)/|λ +xix̅|. Then η_L=ρ_L∘η and η_R=ρ_R∘η are generators for π_4SO(4), where ρ_L, ρ_R denote the 4-dimensional representations of S^3 defined by left, respectively right, multiplication by quaternions. ρ_1⊕ 2ρ_0=I_5:S^3→ SO(5) generates π_3SO(5).
Note that η_L, η_R are equivariant with respect to ρ_1⊕ 2ρ_0 and that I_5(qxq̅)=I_5(q)I_5(x)I_5(q)^-1. We get the bundles P_η_ϵ=D^5× S^3⋃_f_η_ϵD^5× S^3, P_I_5^k=D^4× S^4⋃_f_I_5^kD^4× S^4, where ϵ=L,R and I_5^k(q)=I_5(q)^k. Both bundles admit the actions q·([ λ; x ], g)=([ λ; qx ], qgq̅), q·([ λ_1; x_1 ],[ λ_2; x_2 ])=([ λ_1; qx_1q̅ ], [ λ_2; qx_2 ]). Therefore P_η_ϵ, P_I_5^k have fixed points with isotropy representation (ρ_8). The analogous bundle P_η_Lη_R, whose transition function is given by the product η_Lη_R(z)=η_L(z)η_R(z), and P_I_5^k, k≠ 0 mod 2, are stabilized by Σ^8, i.e., M#Σ^8≅ M for M=P_η_Lη_R, P_I^k_5 (see de Sapio <cit.> – recall that Σ^8=σ_4,3(η_Lη_R,I_5) and that θ^8≈ Z_2). If k is even, P_I_5^k#Σ^8 is not diffeomorphic to P_I_5^k (see de Sapio <cit.>). The authors do not know whether P_η_ϵ is stabilized or not by Σ^8. Another S^3-manifold with isotropy (ρ_8) is HP^2 with an S^3=Sp(1)-action derived from a suitable subgroup of Sp(2)× Sp(1), its standard isotropy representation. However, the promising manifold HP^2#Σ^8 is diffeomorphic to HP^2 (as pointed out to the first author by D. Crowley – see Kramer–Stolz <cit.> for a reference). §.§.§ 10-dimensional bundles The case of S^3-bundles over S^7 is of special interest. We deal with the principal case for simplicity. It is known that b:S^6→ S^3, defined in (<ref>), is a generator of π_6S^3 (it is obtained as an explicit clutching map of pr:Sp(2)→ S^7 in <cit.>). It admits the following SO(4) symmetry: b[ rpr̅; qwr̅ ]=q b[ p; w ] q̅. Therefore, the map f_b:S^6× S^3→ S^6× S^3 is equivariant with respect to the (S^3)^3-action (q,r,s)·([ p; w ], g)=([ rpr̅; qwr̅ ], sgq̅), defining P_b^k=D^7× S^3∪_f_b^k D^7× S^3 as an (S^3)^3-manifold. P_b^k satisfies (c) by restricting the action to the diagonal {(r,r,r) | r∈ S^3}. A generic linear S^3-bundle is obtained using the transition map L_b^n∘ R_b^m. Denote this bundle by P^10_m,n→ S^7. Since f_L_b^n∘ R_b^m is still equivariant with respect to the diagonal action, P^10_m,n satisfies (c). Recalling from <cit.> that Σ^10=σ_3,6(I_6,L_b∘ R_b), one concludes that P_b^k#Σ^10≅ P_b^k if and only if k≠ 0 mod 3 (see de Sapio <cit.>). The authors were not able to find out whether P_m,n^10#Σ^10 is diffeomorphic to P_m,n^10 or not. The diffeomorphism Θ_10 in <cit.> provides an explicit representative of a class in π_3(Diff^+(S^6)) not represented by π_3(O(7)). More specifically, it furnishes a non-linear S^6-bundle over S^4 whose total space is an S^3-manifold with a fixed point with isotropy (ρ_10). Note that the works of Nash <cit.> or Poor <cit.> do not provide positive Ricci curvature on this bundle (supposing its total space is not diffeomorphic to a known space). § THE GEOMETRY OF CROSS-DIAGRAMS Let M ←^π P →^π' M' be a G-G-bundle. An efficient way to compare the geometries of M and M' is to endow P with a G× G-invariant metric. In this case, the space H”⊂ TP, orthogonal to the G× G-orbits on P, descends isometrically to both H and H', the spaces orthogonal to the G-orbits on M and M', respectively. In what follows, we provide more details about the existence of G× G-invariant metrics on P and explore the transversal geometry induced by the isometries H ←^dπ H” →^dπ' H'. We suppose G compact with a bi-invariant metric originating from an adjoint-invariant inner product Q on g.
There are three different ways to endow P with a G× G-invariant metric: * Since G× G is compact, averaging any initial metric g_0 on P gives an invariant metric g (see Bredon <cit.> for reference) * A more concrete metric can be obtained in our examples: given a map f:N→ M, f^*P is naturally a submanifold of N× P (equation (<ref>)). If M and P are equipped with a G-invariant and a G× G-invariant metric, respectively, the induced metric on f^*P is G× G-invariant. Most of our examples are the pullback of either pr:Sp(2)→ S^7 or the frame bundle pr_n:O(n+1)→ S^n, which admit natural G× G-invariant metrics * Given a connection 1-form ω:TP→ g and a metric g_M on M, one can endow P with the Kaluza–Klein metric ⟨ X, Y⟩ := g_M(dπ X, dπ Y) + Q(ω(X),ω(Y)). If both g_M and ω are G-invariant (in the sense that ω_gp(gX)=ω_p(X) for all X∈ TP), then ⟨·,·⟩ is G× G-invariant We focus on (3) and prove: There exists a connection 1-form ω:TP→ g such that ω_rp(rX)=ω_p(X) for all X∈ TP and r∈ G. Moreover, if g_M is a G-invariant metric, then the metric (<ref>) is G× G-invariant. It follows immediately: Let M← P→ M' be a G-G-bundle. Then there is a G-invariant metric on M' such that M/G and M'/G are isometric as metric spaces. First observe that there are bijections between the sets of orbits of M, M' and P: if x=π(p) and x'=π'(p), then π^-1(Gx)=(G× G)p and π'((G× G)p)=Gx'. The choice of p∈π^-1(x), x∈ Gx and x'∈ Gx' is irrelevant. Moreover, if γ: R→ M is a geodesic orthogonal to orbits in M, the identification H← H”→ H' sends γ to a geodesic orthogonal to orbits in M'. We make this explicit in the following proposition (see Proposition 4.4 in <cit.> for the case of a fixed point x). There is an isomorphism Φ:ν Gx→ν Gx' satisfying: if O⊂ν Gx is such that exp|_O is a diffeomorphism, then exp|_Φ(O) is a diffeomorphism. §.§ Proof of Proposition <ref> We recall that a G-connection on a G-principal bundle is a differential 1-form on P with values in the Lie algebra 𝔤, which is G-equivariant and recognizes the Lie algebra generators of the fundamental vector fields on P. That is, for every ξ∈ g, X∈ TP and g∈ G, * ω(pξ)=ξ * ω_pg(Xg)=Ad_g^-1ω_p(X) Directly averaging a connection 1-form by the ⋆-action gives an invariant form: let ω_0 be a connection 1-form for π:P→ M. Given a Haar measure μ on G with unitary volume, define ω_p(X) := ∫_G(ω_0)_gp(gX)dμ. Since the ∙- and ⋆-actions commute, it is immediate that ω is a connection 1-form for π and that ω_qp(qX)=ω_p(X) for all X∈ TP, q∈ G. §.§ Proof of Proposition <ref> Given x∈ M, let us describe an isomorphism between ν Gx and ν Gx'. Given x∈ M, there is p_0∈π^-1(x) such that the isotropy subgroup (G× G)_p_0 is the diagonal Δ G_x={(q,q) | q∈ G_x}. First of all, since π(rps^-1)=rx, if (r,s)∈ (G× G)_p, then r∈ G_x. On the other hand, we can write P=⋃ U_i× G as the bundle defined by the ⋆-cocycle {ϕ_ij:U_i∩ U_j→ G}. Suppose x∈ U_i. By definition, the G× G-action on U_i× G is given by r(x,g)s^-1=(rx,sgr^-1). Therefore, for all r∈ G_x, r(x,1)r^-1=(rx,rr^-1)=(x,1). Given p_0 as in Claim <ref>, let Ψ:Gx→ P be defined as Ψ(rx)=rp_0r^-1. Given X∈ T_xM and p∈π^-1(x), denote by L_p(X)∈ H_p the unique horizontal vector such that dπ(L_p(X))=X. The maps Ψ and L_p satisfy: Ψ(ry)=rΨ(y)r^-1, L_rps^-1(rX)=rL_p(X)s^-1. Define the morphism Φ:ν Gx→ν Gx' as Φ_rx(X)=dπ'(L_Ψ(rx)(X)). Φ_rx is an isometry between H and H'. It is sufficient to show that L_p(H_x)= H”_p, since H”⊂(ker dπ')^⊥ and both L_p and dπ'|_(ker dπ')^⊥ are isometries. However, since π(rps^-1)=rx, L_p sends vectors tangent to Gx to vectors tangent to (G× G)p.
A dimension count shows that L_p(H_x)= H”_p. Let O⊂ν Gx be such that exp|_O is a diffeomorphism. Then Ψ̃:exp(O)× G→ P, defined as Ψ̃(exp_y(v),g)=exp_Ψ(y)(L_Ψ(y)(v))g^-1, is a trivialization along exp(O). Moreover, Ψ̃(exp_ry(rv),sgr^-1) =exp_Ψ(ry)(L_Ψ(ry)(rv))r(sg)^-1=exp_Ψ(ry)(L_rΨ(y)r^-1(rv))r(sg)^-1=exp_Ψ(ry)(rL_Ψ(y)(v)r^-1)r(sg)^-1=rΨ̃(exp_y(v),g)s^-1. Therefore, as in (<ref>), π'(Ψ̃(exp_y(v),g))=g exp_y(v); thus, arguments as in the proof of Theorem <ref> guarantee that exp_y(v)↦π'(Ψ̃(exp_y(v),𝕀)) defines a diffeomorphism. On the other hand, since π' is a Riemannian submersion and v∈(ker dπ')^⊥, π'(Ψ̃(exp_rx(v),𝕀)) =π'(exp_Ψ(rx)(L_Ψ(rx)(v)))=exp_π'(Ψ(rx))(dπ' L_Ψ(rx)(v))=exp_rx'(Φ(v)). The orbit Gx in Proposition <ref> can be replaced by any G-invariant submanifold N, provided there is a map Ψ:N→ P satisfying Ψ(rx)=rΨ(x)r^-1. The rest of the proof proceeds along the same lines. The equivalence of condition (<ref>) and Definition <ref> follows from the equivariance of Ψ̃ (equation (<ref>)): let P→ M be a (possibly non-special) G-G-bundle satisfying (<ref>). For every orbit Gx⊂ M, there is an open tubular neighborhood U_x of Gx such that exp: O_x→ U_x is a diffeomorphism for some O_x⊂ν Gx. Consider the trivialization Ψ̃_x:U_x× G→ P as in the proof of Proposition <ref>. The equivariance (<ref>) guarantees that the transition functions related to the open cover {U_x}_x∈ M satisfy the equivariance condition (<ref>). § CHEEGER DEFORMATIONS AND RICCI CURVATURE Let us briefly recall the process known as Cheeger deformation. The main reference is the set of notes by W. Ziller <cit.> on M. Müter's thesis. The exposition and Theorem <ref> are not intended to be original, but to make the article more self-contained. Let (M,g) be a Riemannian manifold and (G,Q) a compact Lie group with bi-invariant metric Q acting by isometries on M. For each p ∈ M we can orthogonally decompose the Lie algebra 𝔤 of G as 𝔤 = 𝔤_p⊕𝔪_p, where 𝔤_p denotes the Lie algebra of the isotropy group at p. Observe that 𝔪_p is isomorphic to the tangent space to the orbit G· p. We call the tangent space T_p(G· p) the vertical space at p, 𝒱_p. Its orthogonal complement, H_p, is the horizontal space. Given an element U∈ g, we define its action vector U^*_p=d/dt|_t=0 e^tU· p. The map U↦ U^*_p defines a linear morphism whose kernel is g_p (the map U↦ U^* defines a Lie algebra morphism between g and the algebra of vector fields X(M) with the vector field brackets). In particular, a tangent vector X at p can be uniquely decomposed as X = X̄ + U^∗, where X̄ is horizontal and U∈ m_p. The main idea in the Cheeger deformation is to consider the product manifold M× G, observing that the action r(p,g) := (r· p, gr^-1) (compare with (<ref>)) is isometric for the product metric g×(1/t)Q, t > 0. Action (<ref>) is free and its quotient space is diffeomorphic to M (see (<ref>) and (<ref>)). In fact, the projection π' : M× G→⋆\(M× G), (p,g)↦ g· p, identifies ⋆\(M× G) with M, inducing a family of metrics g_t on M. We proceed by defining some important tensors: * let P : 𝔪_p →𝔪_p be the orbit tensor of the G-action, defined by g(U^∗,V^∗) = Q(PU,V), ∀ U^∗, V^∗∈𝒱_p.
P is a symmetric and positive definite operator * denote by P_t: m_p→ m_p the operator defined by g_t(U^∗,V^∗) = Q(P_tU,V), ∀ U^∗, V^∗∈𝒱_p * we define the metric tensor of g_t, C_t:T_pM→ T_pM, by g_t(X,Y) = g(C_tX,Y), ∀ X, Y∈ T_pM The tensors above satisfy: * C_t = 1 on ℋ_p and C_t = P^-1P_t on 𝒱_p * P_t = (P^-1 + t1)^-1 = P(1 + tP)^-1 * If X = X̄ + U^∗ then C_t(X) = X̄ + (1 + tP)^-1U^∗ The advantage of Cheeger deformations is that g_t does not produce `new' planes with zero curvature – in fact, up to a reparametrization of Gr_2(TM) (the Grassmannian of 2-planes of TM), the sectional curvature of a fixed plane is non-decreasing. Let X = X̄ + U^∗, Y = Ȳ + V^∗ be tangent vectors. Then κ_t(X,Y) := K_g_t(C_t^-1X,C_t^-1Y) satisfies κ_t(X,Y) = κ_0(X,Y) + (t^3/4)||[PU,PV]||_Q^2 + z_t(X,Y), where z_t is a non-negative term related to the fundamental tensors of the foliation (see <cit.>). In what follows, we study the behavior of Cheeger deformations on the Ricci curvature. Fix a point p ∈ M and consider {e_1,…,e_k,e_k+1,…,e_n} a g-orthonormal basis for T_pM such that e_k+1,…,e_n are horizontal and, for i≤ k, e_i = λ_i^-1/2v^∗_i, where v_1,…,v_k is a set of Q-orthonormal eigenvectors of P with eigenvalues λ_1,…,λ_k, taken in non-decreasing order. Observe that {C_t^-1/2e_i}_i=1^n is a g_t-orthonormal basis for T_pM. From Proposition <ref>, C_t^-1/2e_i = (1+tλ_i)^1/2e_i for i ≤ k and C_t^-1/2e_i = e_i for i > k. Define Ric^ℋ(X) := ∑_i=k+1^nκ_0(X,e_i). We show that Ric_g^ℋ and the curvature of each orbit (seen as a normal homogeneous space) determine the Ricci curvature. From (<ref>) we have Ric_g_t(C_t^-1X) = ∑_i=1^n R_g_t(C_t^-1/2e_i,C_t^-1X,C_t^-1X,C_t^-1/2e_i) = ∑_i=1^nκ_t(C_t^1/2e_i,X) = ∑_i=1^nκ_0(C_t^1/2e_i,X) + ∑_i=1^n z_t(C_t^1/2e_i,X) + (t^3/4)∑_i=1^k||[PC_t^1/2λ_i^-1/2v_i,PX]||_Q^2 = Ric_g^ℋ(X) + ∑_i=1^n z_t(C_t^1/2e_i,X) + ∑_i=1^k (1/(1+tλ_i))(κ_0(λ_i^-1/2v^∗_i,X) + (λ_i t^3/4)||[v_i,PU]||_Q^2). In particular, Ric_g_t(X) = Ric_g^ℋ(C_tX) + ∑_i=1^n z_t(C_t^1/2e_i,C_tX) + ∑_i=1^k (1/(1+tλ_i))(κ_0(λ_i^-1/2v^∗_i,C_tX) + (λ_i t/4)||[v_i,tP(1+tP)^-1U]||_Q^2). On the other hand, using Proposition <ref> we get: lim_t→∞ Ric_g^ℋ(C_tX) = Ric_g^ℋ(X̄), lim_t→∞∑_i=1^k (λ_i/(1+tλ_i))κ_0(v^∗_i,C_tX) = 0, lim_t→∞∑_i=1^k (tλ_i/(4+4tλ_i))||[v_i,tP(1+tP)^-1U]||_Q^2 = ∑_i=1^k (1/4)||[v_i,U]||_Q^2. Consider G/G_p as a normal homogeneous space where G is endowed with the metric defined by Q. Then (<ref>) is exactly the Ricci curvature of G/G_p. In particular, there is a constant K>0 such that ∑_i=1^k (1/4)||[v_i,U]||_Q^2 ≥ K||U||^2_Q, provided G/G_p has finite fundamental group. With (<ref>)-(<ref>) at hand, a compactness argument shows that g_t has positive Ricci curvature, provided its orbits have finite fundamental group and Ric_g^ℋ(X)≥ 0 for all non-zero X∈ H. Let (M,g) be a compact Riemannian G-manifold, where G is compact. Suppose the orbits of M have finite fundamental group and Ric_g^ℋ(X) > 0 for all non-zero X ∈ℋ. Then g_t has positive Ricci curvature for some t>0. Define F(t,X)=Ric_g_t(X). We argue by contradiction, supposing that there is no t>0 such that g_t has positive Ricci curvature. Therefore, for each time t_n=n∈ N, there is a point p_n and a unit vector X_n=X̄_n+U_n^* such that F(t_n,X_n)≤ 0. Passing to a subsequence, if necessary, we conclude that the limit lim X_n=X satisfies 0≥lim_n→∞F(t_n,X_n)=Ric_g^ℋ(X̄)+∑_i=1^k (1/4)||[v_i,U]||_Q^2>0, a contradiction. §.§ The geometry of cross-diagrams We now apply Theorem <ref> to G-G-bundles. Let M ← P → M' be a G-G-bundle.
Proposition <ref> guarantees that, given a G-invariant metric g_M on M, there is a G× G-invariant metric g_P on P and (given g_P) a unique metric g_M' on M' that makes π' a submersion. Denote by L_π and L_π' the horizontal lifts of π and π', respectively, and set V^π=ker dπ, V^π'=ker dπ'. We recall the classical O'Neill formula for both π and π' (see O'Neill <cit.> or Gromoll–Walschap <cit.> for details). Let X,Y∈(V^π)^⊥ and X',Y'∈(V^π')^⊥. We have K_g_M(X,Y)= K_g_P(ℒ_πX,ℒ_πY) + (3/4)||[ℒ_πX,ℒ_πY]^𝒱^π||^2_g_P, K_g_M'(X',Y')=K_g_P(ℒ_π'X',ℒ_π'Y') + (3/4)||[ℒ_π'X',ℒ_π'Y']^𝒱^π'||^2_g_P. Recall from the proof of Claim <ref> that, given p∈ P, L_p|_H_π(p): H_π(p)→ H”_p and dπ'|_H”_p: H”_p→ H'_π'(p) are isometries (we keep the notation of section <ref>, denoting by H, H' and H” the spaces orthogonal to the G-orbits in M, M' and to the G× G-orbit on P). Since H” is orthogonal to both V^π and V^π', we get K_g_M'(dπ'ℒ_πX,dπ'ℒ_πY) =K_g_M(X,Y) + (3/4){||[ℒ_πX,ℒ_πY]^𝒱^π'||^2_g_P - ||[ℒ_πX,ℒ_πY]^𝒱^π||^2_g_P}. We claim we can change the metric on P without affecting the curvatures K_g_M'(dπ'ℒ_πX,dπ'ℒ_πY), K_g_M(X,Y), but with ||[ℒ_πX,ℒ_πY]^𝒱^π||^2_g_P arbitrarily small. Precisely: Let M← P→ M' be a G-G-bundle with P compact. Let g_P be a G× G-invariant metric on P. Then, given ϵ > 0, there exists t > 0 such that the metric g_P_t, obtained by a finite Cheeger deformation on the G× G-manifold P, satisfies: for each pair X,Y∈ H”, K_g_M'_t(dπ'X,dπ'Y) - K_g_M_t(dπ X,dπ Y)≥ -ϵ||X∧ Y||_g_P^2, where g_M_t, g_M'_t are the resulting submersion metrics on M and M', respectively. Observe that, if g_P is G× G-invariant, so is g_P_t. In fact, the product metric g_P× t^-1Q on P× G is G× G× G-invariant, where G× G× G acts as (r,s,q)(p,g)=(rpq^-1,sgq^-1). Action (<ref>) is given by the q-coordinate, while s and r descend to the π-principal and ⋆-actions on the quotient of P× G by q (which is identified with P via (<ref>)). Therefore, the resulting metric g_P_t is invariant both under the π-principal action and the ⋆-action. It is worth remarking that the resulting metric on M', g_M'_t, is a Cheeger deformation as well. To prove Proposition <ref>, note that the vector [X,Y]^𝒱^π does not change with the Cheeger deformation, since the Cheeger deformation does not affect the horizontal (or vertical) space. Therefore, Proposition <ref> follows from Lemma <ref> below. We set some notation first. Given p∈ P, let 𝒫_p be the orbit tensor associated to the π-principal action. Given a G× G-invariant metric g_P on P, there exists an orthonormal basis of 𝒫_p-eigenvectors {v_1,...,v_k} for 𝒱_p^π. We denote by λ_i the eigenvalue associated to v_i. Since the principal action is free and P is compact, there is a positive number λ := min_p∈ P, iλ_i. Given ϵ > 0, there exists t > 0 such that g_P_t satisfies, for every V∈ V^π, g_P_t(V,V) ≤ϵ||V||^2_g_P. Given p ∈ P, each unit V∈ V^π_p can be written uniquely as V = ∑_i g_P(V,v_i)v_i. In particular, g_P_t(V,V) = g_P((1 + t𝒫)^-1V,V) = ∑_i,j g_P(V,v_i)g_P(V,v_j)g_P((1+ tλ_i)^-1v_i,v_j). Therefore, g_P_t(V,V) = ∑_i g_P(V,v_i)^2/(1 + tλ_i). Since λ_i≥λ for each i, we obtain g_P_t(V,V) = ∑_i g_P(V,v_i)^2/(1 + tλ_i) ≤ (∑_i g_P(V,v_i)^2)·(1/(1+tλ)) = 1/(1+tλ). Since we would like 1/(1+tλ)≤ϵ, we must have t ≥ (1-ϵ)/(λϵ). Finally, we recall that the map (X,Y)↦||[X,Y]^𝒱^π||^2_g_P is tensorial in X,Y∈ (V^π)^⊥ (see, for instance, the definition of the A-tensor in <cit.> or <cit.>).
Therefore, assuming P compact, there is a constant C>0 such that ||[X,Y]^𝒱^π||^2_g_P≤ C||X||_g_P^2||Y||^2_g_P. Observing that g_P_t induces the same metric g_M for every t, we conclude that, for any given ϵ>0 and any G-invariant metric g_M, there is a metric g_M' such that K_g_M'(dπ'X,dπ'Y) = K_g_M(dπ X,dπ Y) + (3/4){||[X,Y]^𝒱^π'||^2_g_P - ||[X,Y]^𝒱^π||^2_g_P} ≥ K_g_M(dπ X,dπ Y) - ϵ||X∧ Y||_g_P^2. Taking {e_1, …, e_k} an orthonormal basis for ℋ_p” and X∈ H_π'(p), we have: Ric^ℋ_g_M'(X)≥ Ric^ℋ_g_M(dπℒ_π'X)-ϵ||X||_g_M'^2. Recalling that M'← P→ M is a G-G-bundle as well, we conclude: Let M'← P→ M be a G-G-bundle and suppose P compact. Then M has a G-invariant metric of positive horizontal Ricci curvature if and only if M' does. Positive Ricci curvature on the exotic projective spaces Σ P C^2m-1, Σ P H^2m-1 follows from Theorem <ref>, since vectors orthogonal to the orbits of the U(m)- and Sp(m)-actions lift to vectors orthogonal to the O(n)-orbits on Σ^2n-1. In fact, positive horizontal Ricci curvature on Σ^2n-1 (provided by Theorem <ref>) ensures positive horizontal Ricci curvature on Σ P C^2m-1 and Σ P H^2m-1. The conclusion follows from the fact that the U(m)-, respectively, Sp(m)-orbits have finite fundamental group.
http://arxiv.org/abs/1708.07541v1
{ "authors": [ "Llohann D. Sperança", "Leonardo F. Cavenaghi" ], "categories": [ "math.DG" ], "primary_category": "math.DG", "published": "20170824201539", "title": "On the geometry of some Equivariantly related manifolds" }
Surface temperatures in New York City: Geospatial data enables the accurate prediction of radiative heat transfer Masoud Ghandehari, Thorsten Emig, Milad Aghamohamadnia ========================================================================== § INTRODUCTION Cities are home to the majority of the world's population and thus significantly determine global energy consumption, waste, and pollution. The dynamics of the urban energy budget, especially the thermal exchange between the densely built infrastructure and the surrounding environment, are not well understood. This is largely because the component of the energy budget associated with energy storage has been unattainable. The significance of this gap was highlighted in the early 1990s through classic contributions by a number of researchers working on the derivation of the urban energy budget, which included work on the urban heat island, radiative heat transfer and stored energy <cit.>. This body of work was subsequently expanded to incorporate urban-scale climate models that included the application of satellite remote sensing, resulting in a better understanding of the thermal dynamic responses of urban environments <cit.>. Nonetheless, the quantitative analysis of the thermal storage component is still elusive. This is because of the large number of unknowns in the equations of heat transfer for the urban space. Time-resolved analysis of urban surface temperature is perhaps the most effective avenue for closing this knowledge gap. Advancing the understanding of the energy budget will lead to improvements in several areas: models of urban meteorology and air quality, models that forecast energy demand and consumption, technological innovations in building materials, heating and cooling technologies, as well as climate control systems and urban design, all of which seek to enable energy efficiency at the building level and at city scale, while improving human health and the quality of the environment. When considering state-of-the-art approaches to measuring urban surface temperatures, there are a number of challenges: * Temperatures of vertical surfaces, which constitute the main portion of urban building surfaces, are inaccessible by satellite and/or aerial remote sensing. This information is essential for the accurate prediction of the urban storage flux. * In the case of ground-based remote sensing, the unknown surface emissivity plays a crucial role in the accurate determination of surface temperatures. Comprehensive databases of surface emissivity values for buildings in cities are not readily available. * The radiation measured by longwave sensors is a combination of the graybody radiation of the target and the radiation reflected (diffuse and specular) from other surfaces. Therefore, measured values do not always represent the intrinsic radiation of the target. Separation of the reflected portion is required for an accurate assessment of the surface temperature. * The atmospheric constituents have a significant effect on the interpretation of the measured values and on the accuracy of numerical models when using actual values of sky radiance. There are a number of approaches to compensate for the atmospheric effects. The research presented in this article seeks to address some of these challenges. Here, we present results of studies done in New York City mapping the surface radiation from nearly 100 blocks of Manhattan's West Side. This includes measurements using a hyperspectral imaging (HSI) instrument and a numerical model for calculating the measured radiation.
The model results are subsequently compared with the measured values. This work benefits from a legacy of applications of spectroscopic imaging in earth sciences and remote sensing, including surface radiography and plume detection <cit.>. In the majority of those applications, imaging systems are deployed in a "downward-looking" configuration, mounted on moving platforms such as aircraft and satellites. In contrast, when considering urban energy research, stationary ground-based imaging offers the advantage of persistence and a desirable field of view. Oblique-view urban imaging has shown promise, for example for mapping the persistent leakage of refrigerant gases from a large number of structures in New York City <cit.>. § RESULTS §.§ Surface radiance Measurements were carried out using a longwave hyperspectral instrument. Buildings were imaged along an eight-kilometer portion of NYC's West Side (Figures 1 and 2). This was followed by a numerical simulation of the longwave radiosity of the scene (Figure 3). Measurements were carried out by installing the instrument on a rooftop vantage point at Stevens Institute of Technology in Hoboken, New Jersey. This provided an unobstructed view of Manhattan's West Side, with views including both low- and high-rise structures. Images were collected at 128 spectral bands, and for one week at cadences ranging from 10 seconds to 3 minutes. The 1.1 mrad angular resolution, when applied to target distances from 1 to 5 km, resulted in a spatial resolution ranging from 1.1-5.5 meters per pixel. This spatial scale can enable the incorporation of useful attributes at the building level (including fuel type, composition, occupancy, etc.), not used at this stage, but an option when the platform is at a higher level of maturity. The imaging system was manufactured by the Aerospace Corporation. The focal plane has sensors with spectral response ranging from 7.6–13.2 μm; this corresponds to peaks of blackbody spectra at temperatures between 230 K and 395 K, making it useful for the estimation of urban solid surface temperatures. This spectral response also includes a region in which many polyatomic molecules (including gaseous emissions) have well-defined spectral features. Those features, together with the 40 nm spectral resolution and the Noise Equivalent Spectral Radiance (NESR) of ~1 μFlick (μW cm^-2 sr^-1 μm^-1), also allow gaseous compounds to be identified with high selectivity <cit.>. §.§ Derivation of Surface Temperature and Emissivity from Hyperspectral Imagery The spectral emission from a body is a function of wavelength, the body's surface emissivity, and its surface temperature. While the emissivity depends on the wavelength, the temperature is unique for an object at a given instant in time. A challenge in the application of thermographic imaging for mapping surface temperatures in heterogeneous terrains (such as buildings in a city) is that the surface emissivity is often unknown. That emissivity may range from ~0.3 (aluminum and "low-e" glass) to higher values of ~0.9 (concrete, brick, and polymeric composites). At wavelengths between 7-14 microns, a 1% change in emissivity results in approximately a 0.5 K temperature difference. Therefore, the possible full range of emissivity values for building façades can correspond to a temperature difference as large as 30 K. An additional challenge is that the measured radiance at each pixel also includes the effects of atmospheric scattering, absorption or emission by ambient gases along the path of observation.
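The sensitivity of retrieved temperature to the assumed emissivity can be illustrated by inverting Planck's law for brightness temperature. The following is a minimal Python sketch, not the instrument's processing pipeline, and the function names are ours; it compares retrievals differing by 1% in assumed emissivity for a 300 K surface at 10 μm.

import numpy as np

H, C, K = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # SI constants

def planck(lam, T):
    # Spectral radiance B(lambda, T) in W m^-2 sr^-1 m^-1.
    return (2 * H * C**2 / lam**5) / np.expm1(H * C / (lam * K * T))

def brightness_temperature(lam, L):
    # Invert Planck's law for temperature at wavelength lam.
    return (H * C / (lam * K)) / np.log1p(2 * H * C**2 / (lam**5 * L))

lam, T_true = 10e-6, 300.0            # 10 um band, 300 K surface
for eps in (1.00, 0.99):              # 1% emissivity difference
    L = eps * planck(lam, T_true)     # graybody radiance reaching the sensor
    print(eps, brightness_temperature(lam, L))
# The two retrievals differ by roughly half a kelvin, of the same order
# as the sensitivity quoted in the text.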
Therefore, when considering applications of thermography for studying urban thermodynamics, e.g., building energy and the urban heat island, significant errors may result if the emissivity effect is ignored. Hyperspectral imagery has been applied to the separation of the effect of emissivity versus surface temperature when considering measured radiance. These approaches have largely been applied to downward-looking (aerial and satellite) remote sensing. Some techniques use calibrated curves of emissivity derived from laboratory experiments, in combination with instrumented measurements of atmospheric parameters (e.g., using the ASTER satellite; Gillespie, Pivovarnik); others achieve the separation as a byproduct of the process for the calculation of the atmospheric effects. Overall, the calculation for the compensation of the effect of the atmosphere is done following two distinct approaches: 1) using measured values of atmospheric constituents followed by modeling, most commonly with the Moderate Resolution Atmospheric Transmission (MODTRAN) model <cit.>, and 2) using radiance values measured by the hyperspectral sensor, without the use of auxiliary data, a technique commonly known as "In-Scene Atmospheric Compensation" (ISAC). In this study we use the ISAC algorithm proposed by Young et al. <cit.> for the calculation of the atmospheric effects, which also includes the separation of temperature versus emissivity. We also compared the ISAC approach with results obtained from the MODTRAN model, which incorporates measured ambient concentrations of gases. Some background and a brief summary of the approach and results are given below. In preparation for the application of the ISAC algorithm, the following preprocessing was done <cit.>: * Conversion of raw telemetry data to engineering data * Radiometric and geometric calibrations * Bad pixel replacements * Spectral smile removal In contrast with downward-looking imaging, the application of ISAC to oblique-view, ground-based imaging should be considered with two caveats. 1) The first is the significance of the unequal distance from the sensor to the object, and the corresponding effect on the automated atmospheric compensation, which is inherent in the ISAC technique. In applications involving aerial or satellite platforms, the object-to-sensor distance does not vary greatly; however, in the oblique configuration, the distance can vary significantly. In our case (considering the scene in Figure <ref>) it ranges from 1 to 2 km. On the positive side, the smaller working distance in ground-based sensing corresponds to a relatively smaller atmospheric effect. 2) The second point that needs to be considered is that ISAC relies on approximately 10% of the scene being occupied by targets with emissivity close to 1. In downward-looking remote sensing, these are often bodies of water, which satisfy this condition. However, in oblique-view imaging, bodies of water (the Hudson River in our case) are not ideal, because the ripple effect observed at the oblique angle diminishes the high-emissivity advantage of water when applied to the ISAC algorithm. As a compromise, the large portion of high-emissivity pixels corresponding to building materials composed of calcium (e.g., limestone), silica (e.g., bricks), or calcium silicates (e.g., concrete), with emissivity as high as 0.95, will to a great extent serve the blackbody requirement of the algorithm.
The error caused by this assumption was indirectly evaluated by spot-checking the surface temperature calculations at selected pixels using the MODTRAN model <cit.>, incorporating local measurements of atmospheric constituents <cit.>. The calculations were done by first masking the building region of the scene (Figure <ref>) for the application of the ISAC algorithm <cit.>, followed by the temperature-emissivity separation (Figure <ref>). Masking was done using K-Means clustering with two clusters. One was the sky, cloud and water pixels, a cluster with lower temperature, i.e., lower radiance. The other was the buildings, a hotter cluster with higher radiance. The two cluster mean spectra associated with the brightness temperature of the scene and the masked image are shown in Figure <ref>. Detailed calculation of atmospheric effects at selected pixel groups was carried out using the MODTRAN model <cit.>. The model incorporates the radiometric effects from the sky (e.g., reflection) as well as the prevailing atmospheric effects in the oblique view of the scene (Meler 2011). The model incorporated twenty-eight input parameters. Twenty-five of these were the atmospheric concentrations of various gases, twenty-two of which were taken from conditions typical of a mid-latitude summer air column (see Table <ref>), while the (H_2O, O_3, CO_2) concentrations were measured ambient values obtained from local weather stations (Table <ref>). The remaining three parameters were air temperature, background surface temperature, and emissivity. Air temperature was obtained from local weather station records. This approach follows the atmospheric correction procedures used for airborne HSI instruments such as ATLAS or HysPIRI, where radiosonde launches are used to determine the gas concentrations. In our case, the calculation of the background surface temperature requires the variable path length from the sensor to the target location on the building surface. This was obtained using a 3D digital surface model (DSM) of the city derived from the NYC building data <cit.>. Other than the above atmospheric effects, the radiance recorded by the sensor also includes the sky radiance and scattered radiation along the observation path (known as down-welling radiation in airborne platforms). The calculation of this downwelling radiation is challenging in the case of oblique-view remote sensing. Nonetheless, certain assumptions can simplify the calculations. One is the use of the similarity between the night-time radiation received by the objects in the scene and the radiation received by the sensor from the sky. The building surfaces in the scene are vertically aligned and receive the sky radiance over a range of 90 degrees (from vertical to horizontal). The sensor swath width and height are 94 and 6 degrees, respectively. The field of view looking from the instrument, from Hoboken, NJ, toward Manhattan is occupied by approximately 50% sky, 40% buildings and 10% river. The same can be said if the sensor were to be located in Manhattan looking toward Hoboken. Therefore, it can be assumed that the average of all pixels received in the 6 × 94 degree window is representative of what an object receives from its surroundings as seen by the sensor. This "in-scene" measurement was used to approximate the downwelling radiance. The resulting values of surface temperature derived by MODTRAN were subsequently compared with the ISAC results for selected pixels in a high-rise area and a low-rise part of the city (Figure 3).
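The two-cluster masking step described above is straightforward to reproduce. The following is a minimal sketch, assuming the calibrated cube is stored as an (H, W, bands) array of radiances; the variable and function names are ours, introduced for illustration.

import numpy as np
from sklearn.cluster import KMeans

def building_mask(cube):
    # cube: (H, W, bands) radiance array; returns a boolean building mask.
    h, w, bands = cube.shape
    spectra = cube.reshape(-1, bands)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(spectra)
    # The cluster with higher mean radiance is taken to be the (hotter)
    # buildings; the lower-radiance cluster is sky, clouds and water.
    means = [spectra[labels == k].mean() for k in (0, 1)]
    hot = int(np.argmax(means))
    return (labels == hot).reshape(h, w)

In a workflow like the one described above, the ISAC retrieval would then be applied only to the pixels where the mask is true.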
Results of the MODTRAN-ISAC comparison are quite good for this preliminary stage of work (Table 3). This comparison will need to be automated and applied to the entire scene in order to arrive at statistics of the difference between the two approaches. §.§ Comparison to Radiosity Model Predictions In order to enable studies of the radiative heat transfer between urban structures (here buildings and streets), we have employed the radiosity method to compute the heat radiation emitted and reflected from all surfaces, including multiple reflections (see Methods). Our model of the urban geometry is derived by surface triangulation from a geospatial dataset (see Methods). For each triangle of the surface mesh, we define its emissivity ϵ, wall thickness d, thermal conductivity κ, and the temperature T^int on the inside of the surface wall. This information, together with the long-wavelength radiant flux L from the sky, uniquely determines the outside surface temperatures in equilibrium for all triangular surface elements. Equilibrium refers to the assumption that there is no incoming radiant flux that varies over time, like daytime solar radiation. Hence, we expect that our model can predict surface temperatures in the evening and during the early morning before sunrise, or in strongly cloud-covered conditions. Without prior knowledge of the building envelopes' composition and interior temperatures, we have assumed typical values of ϵ_wall, roof=0.95, κ_wall, roof=1.05 W/(K m), T^int_wall, roof=293.15 K for walls and roofs, and ϵ_street=0.93, κ_street=1.25 W/(K m), T^int_street=283.15 K for streets, and d=0.2 m for all surfaces. The long-wavelength radiant flux from the sky was estimated to be L=300 W/m^2. These estimates are made as a baseline to establish a platform by which the radiant flux of the city can be analyzed at both the building level and the city level simultaneously. Additional information on the building envelope materials would naturally improve the results. It is expected that with the increasing availability of urban data, building information will also become more available in the near future, contributing to the model acuity. We compared the measured and model results for two block groups in Manhattan (Figure 3), one largely composed of high-rise buildings (area HR) and the other composed of low-rise buildings (area LR). Area HR consists of three blocks bounded by 7th to 8th Avenue and W 31st to 35th Street, and area LR consists of two blocks bounded by 10th to 11th Avenue and W 20th to 22nd Street. Figure <ref> shows an isometric bird's-eye view of the blocks, and Figure 4 is the virtual view of the same two block groups from the instrument location. The linear dimensions (in feet) of the two areas are indicated on the 3D views. A total of 31,339 triangular surface elements have been used to model the two areas. A 3D view of the resulting surface temperatures is shown in Figure <ref> for the two areas, with color-coded temperature, and the same scale as in Figure <ref>(a). The 3D computed surface temperatures for the high-rise and low-rise areas are projected onto the two-dimensional plane using the same view angle as the hyperspectral imager. The projection also has a pixel resolution that corresponds to the imager (1.1 mrad per pixel in horizontal and vertical directions). Figure <ref> shows the pixel matrix of the computed temperatures. Since the model describes only urban surfaces, the sky temperature is set to the average measured value.
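A simplified sketch of this projection step is given below. The camera position, the boresight azimuth, and the nearest-surface (z-buffer style) visibility rule are our assumptions for illustration; the 1.1 mrad angular resolution matches the instrument described above.

import numpy as np

IFOV = 1.1e-3  # rad per pixel

def project(points, temps, origin, az0, rows, cols):
    # Map triangle centroids (N, 3) with temperatures (N,) to a pixel image.
    d = points - origin
    dist = np.linalg.norm(d, axis=1)
    az = np.arctan2(d[:, 1], d[:, 0]) - az0       # horizontal angle
    el = np.arcsin(d[:, 2] / dist)                # elevation angle
    c = np.round(az / IFOV).astype(int) + cols // 2
    r = rows // 2 - np.round(el / IFOV).astype(int)
    img = np.full((rows, cols), np.nan)
    depth = np.full((rows, cols), np.inf)
    ok = (r >= 0) & (r < rows) & (c >= 0) & (c < cols)
    for ri, ci, di, ti in zip(r[ok], c[ok], dist[ok], temps[ok]):
        if di < depth[ri, ci]:                    # keep the nearest surface
            depth[ri, ci], img[ri, ci] = di, ti
    return img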
The HR and LR areas contain a large fraction of the building surfaces visible from the observation position. There are also surfaces of buildings between the two modeled areas that partially mask the buildings visible in Figure <ref>(a). Computed and measured surface temperatures can be compared by studying the absolute value of the difference between the temperatures derived from measurements and the modeled values. For the selected scene pixels visible by the imager, the agreement between the measured and computed values is very satisfying, as shown in Figure <ref>(b), with the lower bound of the difference at 1 K (for the majority of pixels) and larger deviations of ~3 K for the lower floors, which have a greater number of multipath reflections. A more detailed comparison between computed temperatures and those derived from measurements is shown in Figure <ref>. Three vertical elevations (camera pixel rows, cf. Figure <ref>) are considered for comparison. Panel (a) corresponds to an elevation that cuts only the tallest building (between pixel columns 14 and 39). Measured and computed temperatures agree nicely, reproducing also the locally higher temperatures at one edge of the building, plausibly due to multiple reflections between facing walls (around pixel column 20). Panel (b) displays a cut through the second tallest building (between pixel columns 60 and 90). This building receives heat radiation from the tallest adjacent high-rise building, producing the temperature gradient across its surface. Both the measured absolute temperature values and the slope of the gradient are nicely reproduced by the computed temperatures. Finally, panel (c) displays a cut at a near-ground elevation. While the temperatures show substantial spatial variations due to the combined effect of many low-rise buildings, the overall range of the temperature profile shows reasonable agreement between measured and computed data. The discrepancy for the high rise between pixel columns 14 and 39 is caused by a masking building. We conclude that, when compared to values derived from measurement, the model captures the main features of the radiative heat transfer in complex urban geometries, even when exact material parameters like emissivity and thermal conductivity are unknown but replaced by typical values for urban materials. § DISCUSSION In the introduction we highlighted the rationale for mapping the surface temperature in cities as an avenue by which the calculation of the urban energy budget can be completed. The platform presented is a start in that direction, with a number of additional and important steps to consider: * Incorporation of the emissivity values derived from the measured radiance, or possibly values of emissivity from urban databases if available, can reduce errors caused by the assumptions made for the emissivity in the radiosity model. * In the case of using remote sensing for the derivation of the surface emissivity, higher resolution imagery will significantly improve the results. This can be achieved either by a smaller instantaneous field of view (IFOV) or a shorter distance to the target. The current spatial resolution of 1 to 3 meters introduces errors in certain parts of the scene when mixed pixel values are present (i.e., when a pixel represents more than one material, like windows vs. facade). It should be added that we did carry out a preliminary sensitivity test to study the effect of emissivity values of 0.85, 0.9, and 0.95.
The difference between the modeled and measured values (similar to that shown in Figure 4b) remained below 3 K for all three emissivity values. There were, however, some differences in the distribution of temperature across the building facade. * The final step is the separation of the reflected from the intrinsic radiation of the target, now possible using the presented radiosity model. A high-resolution radiative transfer model of a city can subsequently be obtained, which, when processed in a time-resolved mode, would lead to a more accurate derivation of the stored energy. § METHODS §.§ Geospatial Dataset Footprints of buildings in the form of two-dimensional polygons and the maximum building roof height above ground elevation were obtained from the shapefile (Name: "building_1015") provided by the City of New York, Department of Information Technology and Telecommunications (https://data.cityofnewyork.us/Housing-Development/Building-Footprints/nqwf-w8eh). From these data polygons, the envelopes of connected buildings were constructed by removing internal walls between the buildings. The wall polygons obtained were divided along the vertical direction into smaller rectangles and finally triangulated together with the polygons for the roof and street surfaces to obtain a mesh of triangular surface elements. The radiosity method for computing radiative heat transfers has been applied to this mesh. §.§ Model for Radiative Heat Transfer and Surface Temperatures The model has been developed and employed previously for patterned surfaces <cit.>. Urban geometry (building and street surfaces) is represented by a mesh of small surface "patches" given by N mutually joining triangles P_j, j=1,…,N, defined over a planar base plane (xy-plane). The triangles are oriented so that their surface normals n_j point to the outside of the buildings or towards the sky or streets. All surface normals are either normal or parallel to the base plane. Each triangle is further characterized by an emissivity ϵ_j, surface thickness d_j, and thermal conductivity κ_j, as described in the previous section. On the inside of the surface, a local equilibrium interior temperature T^int_j is imposed for each triangle. We assume that the surface receives a homogeneous radiant flux L from the sky. The equilibrium temperatures T_j on the outside surfaces of the triangles are determined by equating the internal and external net flux densities for each triangle. The internal net flux is obtained from the stationary heat conduction equation q^int_j=-κ∂_n T_j integrated across the surface thickness d_j, yielding q^int_j=(T_j-T^int_j)κ_j/d_j. The external net flux q^ext_j is obtained as the sum of the incoming fluxes from the sky (L) and those scattered from all other visible triangles, minus the heat flux σϵ_j T_j^4 radiated by the surface j, where σ is the Stefan-Boltzmann constant. For the case of a single planar surface (j=N=1), the condition q^ext_1=q^int_1 yields (T_flat-T^int)κ/d = ϵ (L-σ T_flat^4), which determines the outside surface temperature T_flat of the flat surface as a function of known parameters. For an urban geometry one has to consider multiple reflections between surface patches that contribute to the net external fluxes. To describe this effect, it is assumed that the surface patches are gray diffusive emitters, i.e., the emissivity is frequency independent and the radiation density is constant across the surface patches and emitted independently of direction.
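The single-surface balance above can be solved directly for T_flat. The following is a minimal sketch of our own, using Newton's method and the typical wall parameters quoted in the Results section.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def flat_surface_temperature(eps, kappa, d, T_int, L, T0=290.0, tol=1e-10):
    # Solve (T - T_int) * kappa / d = eps * (L - SIGMA * T^4) for T.
    T = T0
    for _ in range(50):
        f = (T - T_int) * kappa / d - eps * (L - SIGMA * T**4)
        fp = kappa / d + 4 * eps * SIGMA * T**3
        T_new = T - f / fp
        if abs(T_new - T) < tol:
            return T_new
        T = T_new
    return T

# Wall parameters from the Results section: eps=0.95, kappa=1.05 W/(K m),
# d=0.2 m, T_int=293.15 K, sky flux L=300 W/m^2. The solution lies below
# T_int, since the sky flux is smaller than the blackbody emission at 293 K.
print(flat_surface_temperature(0.95, 1.05, 0.2, 293.15, 300.0))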
This gray-body assumption is justified since the thermal wavelengths (microns) are small compared to urban structures and hence to the size of the surface patches. We apply the radiosity method <cit.> to obtain the external fluxes q^ext_j. For a given surface patch P_j, the outgoing radiant flux is given by the sum of the emitted thermal radiation and the reflected incoming radiation, J_j = σϵ_j T_j^4 + (1-ϵ_j)E_j, where we used that the reflectivity equals 1-α_j for an opaque surface, where α_j=ϵ_j is the absorptivity. How much energy two surface patches exchange via radiative heat transfer depends on their size, distance and relative orientation, which are encoded in the so-called view factor F_ij between patches i and j. F_ij is a purely geometric quantity and does not depend on the wavelength due to the assumption of diffusive surfaces above. It is defined by the surface integrals F_ij = ∫_A_i∫_A_j (cosθ_i cosθ_j)/(π A_i |r_ij|^2) dA_i dA_j, where θ_i is the angle between the surface patch's normal vector n_i and the distance vector r_ij which connects a point on patch i to a point on patch j, and A_i is the surface area of patch i. The view factor matrix obeys the important reciprocity relation A_j F_ji=A_i F_ij and the additivity rule ∑_j F_ij = 1. With this geometric quantity, the radiative flux received by surface patch j from all other surface patches can be expressed as E_j = ∑_i F_ji J_i, and one can solve Eq. (<ref>) for the vector of outgoing fluxes, yielding J =[1 - (1 - ϵ)F]^-1 J_0, where we combined the fluxes J_j from all patches into a vector J and the radiation σϵ_j T_j^4 into a vector J_0 to use a matrix notation. Here 1 is the identity matrix and ϵ is the diagonal matrix with elements ϵ_j. To obtain the surface temperatures T_j we need to compute the net heat transfer to surface patch j, which is given by the incident radiation E_j minus the outgoing flux J_j, leading to the net flux q_j^ext = ∑_i F_ji J_i - J_j. In vector notation this net flux becomes q^ext = (F - 1)[1 - (1 - ϵ)F]^-1 J_0. In the stationary state, the surface patch temperatures are then determined by the condition that the net external flux equals the net internal flux, q^ext = q^int, where q^int denotes the vector with elements (T_j-T_j^int)κ_j/d_j due to heat conduction across the surface. This condition uniquely fixes the temperatures T_j when all other parameters, including the flux L from the sky, are known. In the following, technically, we include the sky as an additional surface so that we now have N+1 surface patches. The corresponding additional matrix elements for the view factor matrix F follow from the reciprocity and additivity rules, and we include the downward radiation L as the (N+1)^th component in J_0. §.§ View Factors The view factors F_ij between each pair of triangles of the surface mesh are computed using a modified version of the C program View3D, which is freely available under the GNU General Public License at https://github.com/jasondegraw/View3D. §.§ Solving the radiosity equations The numerical solution of the equations of the radiosity method follows these steps. * The triangular surface elements are grouped into three different classes: horizontal street patches (s), horizontal roof patches (r) and wall patches (w) that are perpendicular to the base plane and connect the patches in classes s and r. * Computing the view factors F_ij. This needs to be done only for the patch class combinations (w,s), (w,r) and (w,w), with the restriction i<j for (w,w), since the view factors for i>j follow from reciprocity.
The patches of classes s and r cannot see each other, so the view factor submatrix for these classes vanishes. * Constructing the total view factor matrix F for all triangles of classes w, s and r and the single enclosing surface describing the sky. This is done by using reciprocity to obtain the matrix elements for the patch class combinations (s,w) and (r,w). To obtain the view factor for the transfer from a surface patch i towards the sky we use the sum rule ∑_j F_ij=1, i.e., F_i,sky= 1- ∑_j∈{s,r,w} F_ij. The view factor for the transfer from the sky to a patch i follows from reciprocity as F_sky,i= (A_i/A)F_i,sky, where A is the total planar area of the urban area under consideration. * The inverse matrix of Eq. (<ref>) can be computed as a truncated geometric series, since the emissivities are sufficiently close to unity and the view factors satisfy F_ij<1, with most of them in fact much smaller than unity. Hence the inverse kernel is given by K^-1≡[1 - (1 - ϵ)F]^-1=∑_n=0^n_c M^n with M=(1 - ϵ)F. We find that n_c=6 is a sufficiently accurate approximation for the parameters used below. * Finally, we compute the surface patch temperatures T_j by an iterative solution of the equilibrium condition q^ext = q^int [see Eq. (<ref>)] for given surface emissivity ϵ_j, downward radiation L, interior temperatures T^int_j and effective thermal conductivity κ_j/d_j. The iteration steps are as follows: (i) Choose initial patch temperatures T_j^(ν=0), e.g., the internal temperature. Convergence to the same result is obtained for a wide range of initial choices. (ii) Compute the external flux q^ext(ν=0) = (F - 1)K^-1 J^(ν=0)_0 with the (N+1)-dimensional initial vector J^(ν=0)_0=[L, σϵ_1 (T_1^(ν=0))^4, …, σϵ_N (T_N^(ν=0))^4]. (iii) Compute the updated patch temperatures T_j^(ν=1) from the equation q_j^ext(ν=0) = (T_j^(ν=1) - T^int_j)κ_j/d_j for j=1,…,N. (iv) Continue with step (ii) to start the next iteration step, i.e., q^ext(ν=1) = (F - 1)K^-1 J^(ν=1)_0 with the vector J^(ν=1)_0={L, σϵ_1 [(T_1^(ν=0)+T_1^(ν=1))/2]^4, …, σϵ_N [(T_N^(ν=0)+T_N^(ν=1))/2]^4}. In (iv) and all following iteration steps it is useful to use the average of the last two iterations for the patch temperatures, as indicated here, to obtain rapid convergence. Typically, for the models and parameters used below, after about 20 iterations a stable solution for the patch temperatures had been reached (within a relative accuracy of 10^-4). § DATA AVAILABILITY The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request. § ACKNOWLEDGEMENTS Contributions of Jorge Gonzales, Prathap Ramamurthy, Brian Vant-Haul, David Sailor, Andreas Karpf, Gregory Dobler, Steve Koonin, David Messinger, and Julie Pullen are gratefully acknowledged. § AUTHOR CONTRIBUTIONS M.G. designed the experimental campaign, M.A. and M.G. conducted the measurements and carried out the post-processing, T.E. carried out the model-based computations and analyzed the results. All authors wrote and reviewed the manuscript. § ADDITIONAL INFORMATION Competing Interests: The authors declare that they have no competing interests.
http://arxiv.org/abs/1708.08089v1
{ "authors": [ "Masoud Ghandehari", "Thorsten Emig", "Milad Aghamohamadnia" ], "categories": [ "physics.ao-ph" ], "primary_category": "physics.ao-ph", "published": "20170827134908", "title": "Surface temperatures in New York City: Geospatial data enables the accurate prediction of radiative heat transfer" }
Batch-Based Activity Recognition from Egocentric Photo-Streams Alejandro Cartas^1,2, Mariella Dimiccoli^1,2, Petia Radeva^1,2 ^1 University of Barcelona, Mathematics and Computer Science Department, 08007 Barcelona, Spain ({alejandro.cartas, petia.ivanova}@ub.edu) ^2 Computer Vision Center, Universitat Autónoma de Barcelona, 08193 Cerdanyola del Vallès, Spain ([email protected]) ============================================================== Activity recognition from long, unstructured egocentric photo-streams has several applications in assistive technology, such as health monitoring and frailty detection, to name a few. However, one of its main technical challenges is dealing with the low frame rate of wearable photo-cameras, which causes abrupt appearance changes between consecutive frames. In consequence, important discriminatory low-level features from motion, such as optical flow, cannot be estimated. In this paper, we present a batch-driven approach for training a deep learning architecture that strongly relies on Long Short-Term Memory (LSTM) units to tackle this problem. We propose two different implementations of the same approach that process a photo-stream sequence using batches of fixed size, with the goal of capturing the temporal evolution of high-level features. The main difference between these implementations is that one explicitly models consecutive batches by overlapping them. Experimental results over a public dataset acquired by three users demonstrate the validity of the proposed architectures to exploit the temporal evolution of convolutional features over time without relying on event boundaries. § INTRODUCTION Automatic human behavior understanding has for a long time been one of the main goals of artificial intelligence practitioners <cit.>. Being a fundamental step towards human behavior understanding and having several application areas like healthcare, ambient intelligence, and video surveillance, activity recognition has become one of the most widely studied problems in computer vision <cit.>. With the widespread adoption of wearable sensors in recent years <cit.>, there has been growing interest in recognizing activities from images and videos captured by a wearable camera <cit.>. Since wearable cameras do not require any user intervention, they allow genuine images and videos to be captured in a naturalistic setting. Additionally, the first-person point of view is especially well-suited to capture interactions with objects and people, hence tracking the activities of the wearer (see Fig. <ref>). However, in comparison with third-person videos, the camera's free motion, the unconstrained nature of the videos, and the non-visibility of the main actor impose additional challenges on the activity recognition problem. In a lifelogging scenario, where typically the frame rate of the camera is very low (2-3 frames per minute), the lack of temporal coherence and the abrupt changes of the field of view further complicate the activity recognition task. As stated in <cit.>, visual lifelogs offer considerable potential to infer behavior patterns through activity recognition and enable several applications in the field of technology-driven assistive healthcare, such as preventing non-communicable diseases associated with unhealthy trends and risky profiles.
Despite that, to the best of our knowledge, research on activity recognition in a lifelogging scenario has received comparatively little attention in the literature <cit.> with respect to the egocentric video setting <cit.>. Activity recognition from egocentric videos is mostly focused on recognizing either short-term actions such as take bread or put ketchup, spanning around a few hundred frames, or more long-term activities such as walking or running, which usually last for several minutes. Not surprisingly, one of the most widely exploited features in the video setting was the ego-motion. Given the low frame rate, activity recognition from egocentric photo-streams has focused on recognizing high-level activities that may last from a few minutes to several hours, such as cooking or working, which have proved to be well characterized by ego-motion <cit.> in the video scenario. Castro et al. <cit.> showed that it is possible to improve the recognition performance of a fine-tuned CNN by adding a fusion ensemble method that puts together the output of the CNN with time meta-data and contextual features through a random forest. However, the validity of the proposed approach was restricted to data belonging to a single user, since the time meta-data and the contextual features cannot generalize to multiple users. In an attempt to improve the generalization capability of this model, Cartas et al. <cit.> proposed to use the output of a fully connected layer as additional features to be used in a fusion ensemble model. However, both works operated at image level even if images were manually labeled in batches. This implies that the annotators applied temporal information when labeling certain images, and therefore the labeling of single images could have been different had temporal information not been taken into account. Oliveira et al. <cit.> exploited the relationship between objects and activities for activity recognition purposes. However, all the above mentioned approaches treated photo-streams as an unstructured collection of unrelated images. Encouraged by a previous study <cit.> showing that, besides drastic changes in appearance, the temporal coherence of concepts is preserved in egocentric photo-streams at event level, we aim to investigate how to take advantage of temporal information to improve the recognition performance. Our proposed approach is similar to <cit.>, where the activity recognition problem from third-person cameras is cast as a video classification problem, whose goal is to learn a global description of the video while maintaining a low computational cost. In <cit.>, the video is down-sampled to a frame-rate of 1 fpm, and explicit motion information in the form of optical flow images computed over adjacent frames is added to compensate for the loss of implicit motion information. A Long Short-Term Memory (LSTM) recurrent neural network operating on frame-level CNN activations is used to discover long-range temporal relationships and to learn how to integrate information over time. Contrary to <cit.>, in our lifelogging scenario, image sequences have been originally recorded with a low frame rate, so that motion information is not available. Furthermore, since a single video for us corresponds to the set of all images of the day, the number of labels may be arbitrarily large and activity boundaries are unknown.
Therefore, we propose a batch-based deep learning training approach aiming to cope with both the lack of knowledge about event boundaries and the non-negligible length of photo-streams (up to 2,000 images). Our contributions can be summarized as follows: * We propose two end-to-end batch-based implementations that exploit the temporal coherence of concepts in photo-streams and outperform state-of-the-art end-to-end architectures. * We demonstrate that it is possible to capture the temporal evolution of features over time in photo-streams even without knowing event boundaries. * We show that both implementations improve the classification performance, but the first one is slightly better on day sequences that do present clear temporal patterns. The rest of the paper is organized as follows: section <ref> details the proposed approach, whereas the experimental setting, validations and state-of-the-art comparisons are described and discussed in section <ref>. Concluding remarks are reported in section <ref>.

§ PROPOSED APPROACH

Our hypothesis is that the variations between successive images in a photo-stream encode additional information which could be useful in making more accurate activity predictions. While this hypothesis has been validated in <cit.> for conventional videos whose frames share the same label, there is no evidence that it is applicable to the photo-stream scenario. Typically, a photo-stream has several labels corresponding to different activities performed by the user during a day. Due to the sparseness of the observations, adjacent frames may have distinct labels even if, in general, when the user is performing a long-term activity, several consecutive frames share the same label (see Figure <ref>). One possible approach would be to first split the photo-stream or video into events, either manually or by using a state-of-the-art approach such as <cit.>, and then to classify each event separately. However, we propose two different CNN-LSTM implementations, based on the idea of batch-based training, able to cope with the analysis of whole photo-streams without the need for event segmentation. In the first implementation (see Fig. <ref>), the output from the CNN layer is given as input to a single LSTM layer. While the architecture itself is not new, the way it is trained and what it is supposed to learn differ from <cit.>. During training, we split the photo-stream into overlapping segments of fixed length and we feed them, together with the corresponding activity labels at frame level, to the network. Since each segment has a fixed length, its images may not share the same label. Therefore, we expect the network to learn not only the temporal evolution of features over time within a same event, but also to predict event changes, thus leading to more accurate predictions when event boundaries are unknown. In the second implementation (see Fig. <ref>), we explicitly model the temporal relation between two adjacent overlapping batches. After a CNN architecture, we added a fully-connected layer, an LSTM unit, and one last fully-connected layer. For frames belonging to the first batch of a sequence (frames 1-5 in Fig. <ref>(b)), the input of the LSTM layer is the output of the fully-connected layers. For frames belonging to two temporally overlapping batches (frames 4-5 in Fig. <ref>(b)), the input of the LSTM layer is the output of the LSTM layer from the previous batch. The outputs of the first fully-connected layer and the LSTM unit have the same size.
In this way, the input of the LSTM layer has the same size as its output, which in turn is used in subsequent passes. In other words, after the initial feedforward pass of the first sequence batch, the LSTM outputs of the last n frames are stored. In subsequent passes, they are used as the input for the first n LSTMs. For example, Fig. <ref> shows this configuration composed of 5 timesteps with an overlap of 2 outputs/inputs. This architecture is supposed to learn more complex long-range temporal dependencies without the need of considering each one-day photo-stream as a single sequence. Indeed, since a photo-stream can be made of up to two thousand frames, even for an LSTM it would be unfeasible to learn such long-range dependencies <cit.>. Using both implementations, we finally obtain the frame-level predictions.

§ EXPERIMENTS

We describe the dataset in section <ref> and detail the network training in section <ref>. We then present the experimental results on activity recognition in section <ref>.

§.§ Dataset

We employed the NTCIR-12 dataset <cit.> in our experiments. This dataset contains 89,593 egocentric images collected in 79 days by three different persons. The data collection was done in a period of almost a month per person. During this time, each user wore a chest-mounted camera that took two pictures per minute. Continuing previous work <cit.>, we used an extended subset of 44,902 images from the NTCIR-12 dataset, around 15,000 images per person. These images were annotated using 21 activity categories and correspond to all three users and 78 days at different times. The annotation process was done in batches of consecutive frames, meaning that the context of a continuous activity across frames was implicitly taken into account by the annotators. We split the annotated image subset into training, validation, and test sets. These splits contain full day sequences and maintain the inherent class imbalance, as illustrated in Fig. <ref>. We accomplished this by doing the following. First, the day sequences were grouped in bins of similar numbers of images by using the first-fit decreasing algorithm. Second, all possible combinations of test splits from the bins were calculated by using the Twiddle algorithm <cit.>. Then, two category distributions were computed for each test split combination and its remaining bins. Next, for each pair of distributions the sum of the Bhattacharyya distances between them and the whole dataset category distribution was obtained. Finally, the best test split is the one with the shortest distance. The validation and training splits were calculated on the remaining bins by following the same steps. For the proposed recurrent models, the annotated pictures of a day were considered as one sequence. In total, the training and validation sets consisted of 59 and 7 sequences, respectively. For example, thirteen day sequences from two users are shown in Fig. <ref>.

§.§ Training

All the models were implemented using the Keras framework <cit.>. In addition, all the models have the same data augmentation process at the frame level. Namely, we randomly applied horizontal flips, translation and rotation shifts, and zoom operations. To avoid overfitting, we added dropout layers <cit.> and weight normalization to all models. The VGG CNN architecture is used to process individual images. To aggregate photo-stream-level information, we leverage an LSTM to consider sequences of CNN activations.
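As a concrete illustration of the first implementation, the following minimal sketch (Keras 2 functional API; the input resolution, dropout rate and exact number of frozen layers are our own illustrative choices, not taken from the paper) wraps a VGG-16 feature extractor in TimeDistributed layers and stacks the 256-unit LSTM and the 21-way softmax described in the text on top:

```python
# Minimal sketch of the CNN + LSTM model: per-frame VGG-16 features,
# a 256-unit LSTM over the batch of frames, and frame-level softmax
# predictions over the 21 activity categories.
from keras.applications import VGG16
from keras.layers import Input, TimeDistributed, Flatten, Dense, LSTM, Dropout
from keras.models import Model

TIMESTEPS, NUM_CLASSES = 5, 21   # batch of 5 frames, 21 categories

cnn = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
for layer in cnn.layers[:15]:    # freeze the first four convolutional blocks
    layer.trainable = False

frames = Input(shape=(TIMESTEPS, 224, 224, 3))
x = TimeDistributed(cnn)(frames)                        # per-frame conv features
x = TimeDistributed(Flatten())(x)
x = TimeDistributed(Dense(4096, activation='relu'))(x)  # first fully-connected layer
x = Dropout(0.5)(x)
x = LSTM(256, return_sequences=True)(x)                 # temporal evolution over frames
outputs = TimeDistributed(Dense(NUM_CLASSES, activation='softmax'))(x)

model = Model(frames, outputs)   # many-to-many, frame-level predictions
model.compile(optimizer='sgd', loss='categorical_crossentropy')
```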
The training and configuration details for all models are described as follows.

VGG-16 CNN. We fine-tuned a VGG-16 network <cit.> as our base model. Only the last fully-connected layer was changed, to have a 21-category output. The fine-tuning was done in two phases using the splits described in the previous section. The goal of the first phase was to initialize the weights at the top layers, since only the bottom layers of the CNN were initialized using the weights of the ImageNet classification task. Therefore, during the first phase, only the fully-connected layers were backpropagated. The optimization method used was Stochastic Gradient Descent (SGD) for 10 epochs, with a learning rate α=1×10^-5, a batch size of 1, a momentum μ=0.9, and a weight decay equal to 5×10^-6. In the second phase, the last two convolutional layers were also fine-tuned and the initial weights were obtained from the best epoch of the first phase. Moreover, the SGD ran for another 10 epochs with the same parameters except the learning rate α=4×10^-5. The best validation results were obtained at the eighth epoch.

VGG-16 CNN + LSTM. For this architecture, we added an LSTM layer of 256 units followed by a fully-connected layer after the first fully-connected layer of the VGG-16 network. Furthermore, in our experiments, we used three LSTM configurations with a time step of five, ten, and fifteen frames. In order to train all the configurations, we froze the weights of the first four blocks of the convolutional layers. During training, we used SGD as the optimization algorithm. For the timestep 5 configuration, we trained it for 5 epochs with a learning rate α=2.5×10^-5, a momentum μ=0.9, and a weight decay equal to 5×10^-6. For the timestep 10 configuration, we trained it for 4 epochs with a learning rate α=1×10^-4, a momentum μ=0.9, and a weight decay equal to 5×10^-6. The timestep 15 configuration was trained for 2 epochs with a learning rate α=1×10^-4, a momentum μ=0.9, and a weight decay equal to 5×10^-6. The initial weights of the optimization process were the ones obtained for the base model. The training was performed in batches of 5, 10, and 15 frames, respectively. These batches were sampled using a sliding window of frames from each sequence. For instance, Fig. <ref> shows an unrolled version of this model and two consecutive batches of 5 frames. Besides being a crucial characteristic of the proposed approach, this form of training can be understood as a kind of data augmentation over the sequence.

VGG-16 CNN + Piggyback LSTM. For this architecture, we added a fully-connected layer and an LSTM layer after the convolutional architecture of the VGG-16 network. Both layers have an output vector length of 256 and are followed by a dense layer with a softmax activation. The feedback from overlapping frames between batches was implemented using a filter layer. Therefore, the network had two additional inputs: one input for the previous batch and another one used as a mask. We used three different configurations in the reported experiments. The first configuration had a batch size of 5 frames and 2 overlapping frames, the second one had a batch size of 10 frames and 3 overlapping frames, and the last one had a batch size of 15 frames and 4 overlapping frames. Our results were achieved by dividing the training into two phases. The purpose of the first training phase was to learn the high-level features from adjacent frames, while the purpose of the second one was to learn temporal patterns from them throughout the sequence.
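The two sampling schemes just described, a stride-one sliding window for the CNN + LSTM model and consecutive batches with m overlapping frames for the Piggyback model, can be sketched as follows (a minimal illustration; `sequence` stands for the list of frames of one day, and all names are ours):

```python
# Minimal sketch of the two batch-sampling schemes used during training.
def sliding_window_batches(sequence, n):
    """Every window of n consecutive frames (stride 1), as used to train
    the CNN + LSTM model; acts as data augmentation over the sequence."""
    return [sequence[i:i + n] for i in range(len(sequence) - n + 1)]

def overlapping_batches(sequence, n, m):
    """Consecutive batches of n frames in which each batch reuses the
    last m frames of its predecessor (stride n - m), as used in the
    second training phase of the Piggyback model."""
    stride = n - m
    return [sequence[i:i + n] for i in range(0, len(sequence) - n + 1, stride)]
```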
During the first phase, the day sequences were considered as consecutive batches without overlapping. Accordingly, this phase followed the same training procedure as the previously described architecture. In the second phase, we froze all the convolutional layers and the first fully-connected layer. The data augmentation process for the day sequences consisted of the following. Given a batch size n and a number of overlapping frames m, the sequential training batches with m overlapping frames were created from the day sequences starting at the first n frames. This created more training examples than the previous architecture and considered all the frames of a sequence. The first training step was to randomly shuffle the day sequences at each epoch. Then, all the day sequences were processed one by one. The ordered training batches from a day sequence were consecutively fed forward to the network. The SGD algorithm was also used for this second phase. The learning rates for the configurations were α=2.5×10^-5, α=1×10^-4, and α=1×10^-4, respectively. Moreover, all configurations shared the same momentum μ=0.9 and weight decay equal to 5×10^-6. An early stopping criterion was used for both training phases.

§.§ Results and Discussion

The evaluation of the models and their configurations was done over 12 day sequences from all users, i.e. 6,225 images. In contrast with previous works <cit.>, we are interested in many-to-many sequence classification and we did not apply any kind of averaging over a processed batch. Since the dataset is unbalanced, we used other metrics besides accuracy to measure the classification performance. Table <ref> shows the obtained results for the activity recognition task. These results demonstrate that processing batches of sequential frames improves accuracy with respect to the pure CNN baseline in most cases. A larger time step is generally preferred, since it allows the temporal evolution of features over time to be better captured. By using the first proposed architecture, we achieve an improvement of more than 4% with respect to the VGG-16 baseline even considering a very small batch size. The results of the second proposed architecture also improved all the performance metrics with respect to the pure CNN baseline, but only for the largest timestep configuration. In comparison with the former architecture, the overall performance improvement was smaller. This might be explained as a consequence of not having clear temporal activity patterns throughout the whole day sequences, as shown in Fig. <ref>. Moreover, a comparison of the models over three day sequences is illustrated in Fig. <ref>. Although the proposed architectures improved the overall accuracy, they still failed at classifying highly correlated categories like Eating together and Drinking together, as seen in the confusion matrices in Fig. <ref>.

§ CONCLUSIONS

We presented a batch-based learning approach for activity recognition from egocentric photo-stream sequences. In order to learn temporal activity patterns between frames, both proposed implementations of this approach use an LSTM unit on top of a convolutional neural network to process a day sequence of frames using windows of fixed size. Specifically, our first implementation uses a sliding window of consecutive frames to generate training batches.
Moreover, our second implementation is able to handle information from previous batches of a sequence by reprocessing a fixed number of overlapping frames. Although this paper has demonstrated that it is possible to exploit temporal coherence of concepts without knowing event boundaries, we consider that clustering a day sequence into different scene subsequences could further improve the activity recognition. Additionally, we think that the second proposed implementation could improve the activity recognition performance on video data. Both ideas will be addressed in future work.

§ ACKNOWLEDGMENTS

A.C. was supported by a doctoral fellowship from the Mexican Council of Science and Technology (CONACYT) (grant no. 366596). This work was partially funded by TIN2015-66951-C2, SGR 1219, CERCA, ICREA Academia'2014 and 20141510 (Marató TV3). The funders had no role in the study design, data collection, analysis, and preparation of the manuscript. M.D. is grateful to the NVIDIA donation program for its support with a GPU card.

abebe2016robust G. Abebe, A. Cavallaro, and X. Parra. Robust multi-dimensional motion features for first-person vision activity recognition. Computer Vision and Image Understanding, 149:229–248, 2016.bambach2015lending S. Bambach, S. Lee, D. J. Crandall, and C. Yu. Lending a hand: Detecting hands and recognizing activities in complex egocentric interactions. In Proceedings of the IEEE International Conference on Computer Vision, pages 1949–1957, 2015.behera2012egocentric A. Behera, D. C. Hogg, and A. G. Cohn. Egocentric activity monitoring and recovery. In Asian Conference on Computer Vision, pages 519–532. Springer, 2012.bolanos2017toward M. Bolanos, M. Dimiccoli, and P. Radeva. Toward storytelling from visual lifelogging: An overview. IEEE Transactions on Human-Machine Systems, 47(1):77–90, 2017.byrne2010everyday D. Byrne, A. R. Doherty, C. G. Snoek, G. J. Jones, and A. F. Smeaton. Everyday concept detection in visual lifelogs: validation, relationships and trends. Multimedia Tools and Applications, 49(1):119–144, 2010.cartas2017recognizing A. Cartas, J. Marín, P. Radeva, and M. Dimiccoli. Recognizing activities of daily living from egocentric images. In Proceedings of the Iberian Conference on Pattern Recognition and Image Analysis (IbPRIA), pages 87–95, Cham, June 2017. Springer.castro2015predicting D. Castro, S. Hickson, V. Bettadapura, E. Thomaz, G. Abowd, H. Christensen, and I. Essa. Predicting daily activities from egocentric images using deep learning. In Proceedings of the 2015 ACM International Symposium on Wearable Computers, pages 75–82. ACM, 2015.Chase1970 P. J. Chase. Algorithm 382: Combinations of m out of n objects [g6]. Commun. ACM, 13(6):368–, June 1970.chollet2015keras F. Chollet et al. Keras. <https://github.com/fchollet/keras>, 2015.dimiccoli2016sr M. Dimiccoli, M. Bolaños, E. Talavera, M. Aghaei, S. G. Nikolov, and P. Radeva. Sr-clustering: Semantic regularized clustering for egocentric photo streams segmentation. Computer Vision and Image Understanding, 155:55–69, 2016.donahue2015 J. Donahue, L. Anne Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell. Long-term recurrent convolutional networks for visual recognition and description. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2625–2634, June 2015.fathi2011understanding A. Fathi, A. Farhadi, and J. M. Rehg. Understanding egocentric activities. In 2011 International Conference on Computer Vision, pages 407–414.
IEEE, 2011.gers2000learning F. A. Gers, J. Schmidhuber, and F. Cummins. Learning to forget: Continual prediction with lstm. Neural computation, 12(10):2451–2471, 2000.guler2016brief S. D. Guler, M. Gannon, and K. Sicchio. A brief history of wearables. In Crafting Wearables: Blending Technology with Fashion, pages 3–10. Apress, Berkeley, CA, 2016.gurrin2016NTCIR C. Gurrin, H. Joho, F. Hopfgartner, L. Zhou, and R. Albatal. Overview of ntcir-12 lifelog task. In Proceedings of the 12th NTCIR Conference on Evaluation of Information Access Technologies, pages 354–360, 2016.li2015delving Y. Li, Z. Ye, and J. M. Rehg. Delving into egocentric actions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 287–295, 2015.Ma_2016_CVPR M. Ma, H. Fan, and K. M. Kitani. Going deeper into first-person activity recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1894–1903, June 2016.mccandless2013object T. McCandless and K. Grauman. Object-centric spatio-temporal pyramids for egocentric activity recognition. In BMVC, volume 2, page 3, 2013.ng2015 J. Y.-H. Ng, M. Hausknecht, S. Vijayanarasimhan, O. Vinyals, R. Monga, and G. Toderici. Beyond short snippets: Deep networks for video classification. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4694–4702, 2015.Nguyen2016 T.-H.-C. Nguyen, J.-C. Nebel, and F. Florez-Revuelta. Recognition of activities of daily living with egocentric vision: A review. Sensors (Basel), 16(1):72, Jan 2016. sensors-16-00072[PII].oliveira2017leveraging G. Oliveira-Barra, M. Dimiccoli, and P. Radeva. Leveraging activity indexing for egocentric image retrieval. In Proceedings of the Iberian Conference on Pattern Recognition and Image Analysis (IbPRIA), pages 295–303, Cham, June 2017. Springer.pantic2006human M. Pantic, A. Pentland, A. Nijholt, and T. Huang. Human computing and machine understanding of human behavior: A survey. In Proceedings of the 8th international conference on Multimodal interfaces, pages 239–248. ACM, 2006.pirsiavash2012detecting H. Pirsiavash and D. Ramanan. Detecting activities of daily living in first-person camera views. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 2847–2854. IEEE, 2012.poleg2016compact Y. Poleg, A. Ephrat, S. Peleg, and C. Arora. Compact cnn for indexing egocentric videos. In Applications of Computer Vision (WACV), 2016 IEEE Winter Conference on, pages 1–9. IEEE, 2016.poppe2010survey R. Poppe. A survey on vision-based human action recognition. Image and vision computing, 28(6):976–990, 2010.ryoo2016first M. Ryoo and L. Matthies. First-person activity recognition: Feature, temporal structure, and prediction. International Journal of Computer Vision, 119(3):307–328, 2016.ryoo2013first M. S. Ryoo and L. Matthies. First-person activity recognition: What are they doing to me? In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2730–2737, 2013.sazonov2014wearable E. Sazonov and M. R. Neuman. Wearable Sensors: Fundamentals, implementation and applications. Elsevier, 2014.simonyan2014two K. Simonyan and A. Zisserman. Two-stream convolutional networks for action recognition in videos. In Advances in Neural Information Processing Systems, pages 568–576, 2014.Simonyan14c K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.srivastava14a N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. 
Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929–1958, 2014.talavera2015r E. Talavera, M. Dimiccoli, M. Bolanos, M. Aghaei, and P. Radeva. R-clustering for egocentric video segmentation. In Iberian Conference on Pattern Recognition and Image Analysis, pages 327–336. Springer, 2015.weinland2011survey D. Weinland, R. Ronfard, and E. Boyer. A survey of vision-based methods for action representation, segmentation and recognition. Computer vision and image understanding, 115(2):224–241, 2011.yue2015beyond J. Yue-Hei Ng, M. Hausknecht, S. Vijayanarasimhan, O. Vinyals, R. Monga, and G. Toderici. Beyond short snippets: Deep networks for video classification. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4694–4702, 2015.
http://arxiv.org/abs/1708.07889v1
{ "authors": [ "Alejandro Cartas", "Mariella Dimiccoli", "Petia Radeva" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20170825211242", "title": "Batch-Based Activity Recognition from Egocentric Photo-Streams" }
Department of Physics, College of Science, Swansea University, Singleton Park, Swansea SA2 8PP, United Kingdom

In 2+1 dimensions the global U(2N) symmetry associated with massless Dirac fermions is broken to U(N)⊗U(N) by a parity-invariant mass. I will show how to adapt the domain wall formulation to recover the U(2N)-invariant limit in interacting fermion models as the domain wall separation is increased. In particular, I will focus on the issue of potential dynamical mass generation in the Thirring model, postulated to take place for N less than some critical N_c. I will present results of simulations of the model using both HMC (N=2) and RHMC (N=1) algorithms, and show that the outcome is very different from previous numerical studies of the model made with staggered fermions, where the corresponding pattern of symmetry breaking is distinct.

Numerical study of the 2+1d Thirring model with U(2N)-invariant fermions

Simon Hands^1 (^1 Acknowledges financial support from STFC and the Leverhulme Trust)
====================================================================================

§ INTRODUCTION

Relativistic fermions moving in 2+1 spacetime dimensions form the basis of some fascinating quantum field theories, whose principal applications mainly fall in the domain of the condensed matter physics of layered systems. For example, fermion degrees of freedom described by a Dirac equation have been invoked and studied in models of the underdoped regime of d-wave superconductors <cit.>, where they arise as excitations at the nodes of the gap function Δ(k⃗); in models of spin-liquid behaviour in Heisenberg antiferromagnets <cit.>, where the infrared behaviour of compact QED_3 remains an open issue; as surface states of three-dimensional topological insulators; and of course as low-energy electronic excitations in graphene <cit.>. What all these models share in common is their use of four-component reducible spinor representations for the Fermi fields. For non-interacting fermions the action in Euclidean metric is

S=∫ d^3x ψ̅(γ_μ∂_μ+m)ψ; μ=0,1,2;

a key point is that the mass term proportional to m is hermitian and invariant under the parity inversion x_μ↦-x_μ. For m=0, S has a global U(2) invariance generated by

ψ↦ e^iαψ, ψ̅↦ψ̅e^-iα; ψ↦ e^αγ_3γ_5ψ, ψ̅↦ψ̅e^-αγ_3γ_5; ψ↦ e^iαγ_3ψ, ψ̅↦ψ̅e^iαγ_3; ψ↦ e^iαγ_5ψ, ψ̅↦ψ̅e^iαγ_5.

For m≠0 the γ_3 and γ_5 rotations (<ref>) are no longer symmetries, and the general pattern of breaking is thus U(2N)→U(N)⊗U(N), where we generalise to N degenerate flavors. Because there is no chiral anomaly in 2+1d, it is possible to perform a change of variables in the path integral to identify two further "twisted" mass terms which, though antihermitian, are physically equivalent to the mψ̅ψ of (<ref>):

im_3ψ̅γ_3ψ; im_5ψ̅γ_5ψ.

The "Haldane" mass term m_35ψ̅γ_3γ_5ψ is not equivalent because it changes sign under parity, and will not be considered further.

§ THE THIRRING MODEL IN 2+1D

A model of particular interest is the Thirring model, which has a contact interaction between conserved fermion currents. Its Lagrangian density reads

L=ψ̅_i(∂/ +m)ψ_i+(g^2/2N)(ψ̅_iγ_μψ_i)^2; i=1,…,N.

Equivalently, its dynamics is captured by a bosonised action via the introduction of an auxiliary vector field A_μ:

L_A=ψ̅_i(∂/ +i(g/√N)A_μγ_μ+m)ψ_i+(1/2)A_μ^2.

The Thirring model is arguably the simplest interacting theory of fermions requiring a computational approach. The coupling g^2 has dimension 2-d, where d is the dimension of spacetime, so a naive expansion in powers of g^2 is non-renormalisable for d>2.
However, things look different after resummation. First introduce an additional Stückelberg scalar φ so the bosonic term becomes (1/2)(A_μ-∂_μφ)^2, to identify a hidden local symmetry <cit.>

ψ↦ e^iαψ; A_μ↦ A_μ+∂_μα; φ↦φ+α.

This point of view strongly suggests the identification of A_μ as an abelian gauge field; the original Thirring model (<ref>) corresponds to a unitary gauge φ=0. In Feynman gauge, to leading order in 1/N the resummed vector propagator is now of the form ⟨ A_μ(k)A_ν(-k)⟩∝δ_μν/k^d-2. An expansion in powers of 1/N may now be developed by analogy with QED_3, and is exactly renormalisable for 2<d<4 <cit.>. The outstanding theoretical issue is whether, for g^2 sufficiently large and N sufficiently small, there is a symmetry-breaking transition leading to formation of a bilinear condensate ⟨ψ̅ψ⟩≠0 accompanied by dynamical fermion mass generation. It is of particular interest to identify the critical N_c above which symmetry breaking does not occur even in the strong coupling limit. The transitions at g_c^2(N<N_c) then potentially define a series of distinct quantum critical points (QCPs). One early prediction, using strong-coupling Schwinger-Dyson equations in the ladder approximation, found N_c≃4.32 <cit.>.

§ THE THIRRING MODEL WITH STAGGERED FERMIONS

The Thirring model has been studied by numerical simulations using staggered fermions, with action <cit.>

S_stag=(1/2)∑_xμ i[χ̅_x^iη_μ x(1+iA_μ x)χ_x+μ̂^i - h.c.]+m∑_xiχ̅_x^iχ_x^i+(N/4g^2)A_μ x^2.

Eq. (<ref>) is not unique, but has the feature that the linear coupling of the auxiliary precludes higher-point interactions between fermions once it is integrated over. The action (<ref>) has a global U(N)⊗U(N) symmetry broken to U(N) by either explicit or spontaneous mass generation. In a weakly coupled long-wavelength limit a U(2N_f) symmetry is recovered with N_f=2N <cit.>; however, the putative QCP is not weakly coupled, so this conventional wisdom must be questioned. The action (<ref>) has been studied for N_f∈[2,6] <cit.> and QCPs have been identified and characterised. Other studies with N_f=2 have found compatible results <cit.>. A more recent study which plausibly identifies the strong-coupling limit reported N_fc=6.6(1), δ(N_fc)≈7 <cit.>. The results are summarised in Fig. <ref>. The "chiral" symmetry is indeed broken for small N_f and large g^2, and the exponent δ defined by the critical scaling ⟨χ̅χ⟩|_g_c^2∝ m^1/δ is very sensitive to N_f. Other exponents can be estimated using hyperscaling. These results are in qualitative but not quantitative agreement with the Schwinger-Dyson approach, which predicts δ(N_fc)=1. A non-covariant form of (<ref>) has been used to model the semimetal-insulator transition in graphene, finding N_fc≈5 <cit.>, suggesting that a Mott insulating phase is possible for the N_f=2 appropriate for monolayer graphene <cit.>. The staggered Thirring model thus exhibits a non-trivial phase diagram, with a sequence of QCPs with an N-dependence quite distinct from those of the theoretically much better-understood Gross-Neveu (GN) model. However, recent simulations of N_f=2 with a fermion bag algorithm, which permits study directly in the massless limit, have found compatible exponents for the QCP <cit.>:

ν = 0.85(1); η=0.65(1); η_ψ=0.37(1);
ν = 0.849(8); η=0.633(8); η_ψ=0.373(3);

The large-N GN values are ν=η=1. These results are troubling: from the perspective of the large-N expansion using bosonised actions the models should be distinct, whereas (<ref>) suggest rather that they lie in the same RG basin of attraction.
Indeed, when written purely in terms of four-point interactions between staggered fields spread over elementary cubes, the only difference between the models is an extra body-diagonal coupling in the GN case <cit.>. In this study we examine the possibility that staggered fermions do not reproduce the expected physics because of a failure to capture the correct continuum global symmetries near a QCP. A similar insight has been offered by the Jena group <cit.>.

§ DOMAIN WALL FERMIONS IN 2+1D

The physical idea of domain wall fermions (DWF) is that fermions Ψ,Ψ̅ are allowed to propagate along an extra fictitious dimension of extent L_s with open boundary conditions. In 2+1+1d this propagation is governed by an operator ∼∂_3γ_3. As L_s→∞, zero modes of D_DWF localised on the domain walls at either end become ± eigenmodes of γ_3, and physical fields in the target 2+1d space are identified via

ψ(x)=P_-Ψ(x,1)+P_+Ψ(x,L_s); ψ̅(x)=Ψ̅(x,L_s)P_-+Ψ̅(x,1)P_+,

with P_±=(1/2)(1±γ_3). The walls are then coupled with a term proportional to the explicit mass gap m. However, the emergence of the U(2N) symmetry outlined in Sec. <ref> is not manifest, because while the wall modes are eigenmodes of γ_3, the continuum symmetry (<ref>,<ref>) demands equivalence under rotations generated by both γ_3 and γ_5. Another way of seeing this is that the twisted mass terms (<ref>) should yield identical physics, e.g. the strength of the corresponding bilinear condensate, as L_s→∞. This requirement is apparently non-trivial, since while m and m_3 couple Ψ and Ψ̅ fields on opposite walls, m_5 couples fields on the same wall. The recovery of U(2N) symmetry as L_s→∞ was demonstrated numerically in a study of quenched non-compact QED_3 on 24^3× L_s systems, for a range of couplings β <cit.>. First define the principal residual Δ via the imaginary part of the twisted condensate:

[i⟨Ψ̅(1)γ_3Ψ(L_s)⟩]= -[i⟨Ψ̅(L_s)γ_3Ψ(1)⟩]≡Δ(L_s).

The difference between the various condensates and their value in the large-L_s limit is then specified in terms of secondary residuals ε_i(L_s) via

⟨ψ̅ψ⟩_L_s = i⟨ψ̅γ_3ψ⟩_L_s→∞+2Δ(L_s) +2ε_h(L_s); i⟨ψ̅γ_3ψ⟩_L_s = i⟨ψ̅γ_3ψ⟩_L_s→∞ +2ε_3(L_s); i⟨ψ̅γ_5ψ⟩_L_s = i⟨ψ̅γ_3ψ⟩_L_s→∞ +2ε_5(L_s).

Empirically, as shown in Fig. <ref>, the residuals Δ and ε_i decay exponentially with L_s, with a clear hierarchy Δ≫ε_h≫ε_3≡ε_5. This is strong evidence for the ultimate recovery of U(2N) (the convergence rate is strongly dependent on both β and system volume), and moreover suggests the optimal simulation strategy is to focus on the twisted condensate i⟨ψ̅γ_3ψ⟩, for which finite-L_s corrections are minimal. The equivalence of γ_3 and γ_5 condensates at finite L_s and their superior convergence to the large-L_s limit was shown analytically in <cit.>, where the convergence to fermions obeying 2+1d Ginsparg-Wilson relations was also demonstrated. Exponential improvement of convergence was shown in the large-N limit of the GN model in <cit.>. The benefits of a twisted mass term for improved recovery of U(2N) symmetry were also observed in a study of non-compact QED_3 using Wilson fermions in <cit.>.

§ THE THIRRING MODEL WITH DOMAIN WALL FERMIONS

Even after settling on DWF, we still encounter some remaining formulation issues. Just as for the staggered model, we have chosen a linear interaction between the fermion current and a 2+1d vector auxiliary field A_μ defined on the lattice links. The simplest approach to formulating the Thirring model is to restrict the interaction to the physical fields (<ref>) defined on the domain walls at s=1,L_s.
This follows the treatment of the GN model with DWF developed in <cit.>. It has the technical advantage that the Pauli-Villars fields required to cancel bulk mode contributions to the fermion determinant do not couple to A_μ, and hence can be safely excluded from the simulation, which brings significant cost savings. In what follows this approach will be referred to as the Surface model. However, following the discussion below eq. (<ref>) suggesting the strong similarity of A_μ with an abelian gauge field, we also consider a Bulk formulation in which Ψ, Ψ̅ interact with a "static" field (i.e. ∂_3 A_μ=0) throughout the bulk:

S=Ψ̅ DΨ=Ψ̅D_WΨ+Ψ̅D_3Ψ+m_iS_i,

with

D_W=γ_μ D_μ-(D̂^2+M); D_3=γ_3∂_3-∂̂_3^2;

and m_iS_i is the explicit mass term defined only on the walls. Here

D_μ xy = (1/2)[(1+iA_μ x)δ_x+μ̂,y-(1-iA_μ x-μ̂)δ_x-μ̂,y], D̂^2_xy = (1/2)∑_μ[(1+iA_μ x)δ_x+μ̂,y+(1-iA_μ x-μ̂)δ_x-μ̂,y-2δ_xy],

and M is the domain wall height, here set equal to 1. ∂_3, ∂̂_3^2 are defined similarly using finite differences in the 3 direction, respecting the open boundary conditions, and with no coupling to the auxiliary. These definitions imply the following properties:

[∂_3,D_μ]=[∂_3,D̂^2]=0; [∂_3,∂̂_3^2]≠0.

The action (<ref>) may be simulated using the HMC algorithm for N even; however, the failure of the third commutator in (<ref>) to vanish everywhere is an obstruction to proving D is positive definite, so N=1 is simulated using the RHMC algorithm with functional measure (D^† D)^1/2.

§ NUMERICAL RESULTS

We have studied both surface and bulk formulations of the Thirring model in the coupling range ag^-2∈[0.1,1.0], first with N=2 <cit.> and now N=1. The RHMC algorithm used 25 partial fractions to estimate fractional powers of D^† D. Most results are obtained on 12^3×16 (N=2) or 12^3×8 (N=1) systems. An exploration of volume and finite-L_s effects for N=2 was presented in <cit.>. Summary results for the bilinear condensate i⟨ψ̅γ_3ψ⟩ with m_3a=0.01 are shown in Fig. <ref>. The bulk model shows significantly enhanced pairing for ag^-2≲0.5 compared to the surface model, and as might be anticipated pairing is greater for N=1 than N=2. This trend continues until a maximum at ag^-2≈0.2. In previous work with staggered fermions this has been identified with the effective location of the continuum strong coupling limit <cit.>, since for stronger lattice couplings there is a breakdown of reflection positivity <cit.>. We are thus confident that the range of couplings explored includes the strong coupling limit. Fig. <ref> shows the auxiliary action over the same coupling range, compared with the free field value d/2. The difference between surface and bulk models is apparently very striking, but it should be borne in mind that this is really a comparison of UV properties of two different regularisations of ostensibly the same continuum field theory. Once again, there is tentative evidence for a change of behaviour of the N=1 bulk model at ag^-2≈0.5. Fig. <ref> shows the ratio i⟨ψ̅γ_3ψ⟩/m_3χ_π obtained for N=2, with the transverse susceptibility defined

χ_π=N∑_x⟨ψ̅γ_5ψ(0)ψ̅γ_5ψ(x)⟩.

For a theory where the ψ,ψ̅ fields respect U(2N) symmetry, the 2+1d generalisation of the axial Ward identity predicts the ratio to be unity. Fig. 5 shows that this requirement is far from being met, and that further work is needed to understand and calibrate the identification of the physical fields via relations such as (<ref>). Again, the disparity between bulk and surface models is striking, with neither being obviously preferred.
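For reference, the unit-ratio expectation invoked above can be stated compactly. A schematic form (our own sketch, using the normalisation of χ_π given in the text and assuming the twisted mass is the only source of U(2N) breaking) is:

```latex
% 2+1d analogue of the axial Ward identity: invariance of the path
% integral under the gamma_5 rotation (there is no anomaly in 2+1d)
% relates the twisted condensate to the integrated transverse correlator,
i\langle\bar\psi\gamma_3\psi\rangle = m_3\,\chi_\pi ,
\qquad
\chi_\pi = N\sum_x \langle\,\bar\psi\gamma_5\psi(0)\,\bar\psi\gamma_5\psi(x)\,\rangle ,
% so the ratio plotted in the figure should equal one whenever the
% lattice fields realise the continuum symmetry exactly.
```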
Finally we consider whether U(2N) is spontaneously broken at strong coupling. In <cit.> the bilinear condensate for N=2 was examined as a function of bare mass m across a range of couplings, and in every case a linear scaling ⟨ψ̅ψ⟩∝ m was found, indicative of U(2N) symmetry being manifest as m→0. We now extend this study to N=1. Figs. <ref>,<ref> show ⟨ψ̅ψ(m)⟩ for ma=0.01,…,0.05; with a trivial rescaling implemented by choosing the abscissa as m/⟨ψ̅ψ(am=0.01)⟩, data from the entire range of couplings studied collapses onto a near-linear curve, for both bulk (left) and surface (right) models. There is no evidence for any singular behaviour associated with a symmetry-breaking phase transition, and it seems safe to conclude lim_m→0⟨ψ̅ψ⟩=0. Assuming this picture persists in the large volume and L_s→∞ limits, this provides strong evidence that the critical flavor number in the U(2N)-symmetric Thirring model is constrained by N_c<1.

§ SUMMARY AND OUTLOOK

It has been shown that it is feasible to use DWF to study U(2N)-symmetric fermions in 2+1d, and that use of a twisted mass term ∼ im_3ψ̅γ_3ψ optimises the recovery of the symmetry as L_s→∞. A study of the Thirring model at strong fermion self-couplings then shows that DWF capture a very different physics to that described by staggered fermions, which are governed by a different global symmetry away from the weak-coupling long-wavelength limit. While it is still not possible to settle on the preferred formulation of the strongly-coupled Thirring model, both the bulk and surface versions presented here are in agreement that the critical flavor number N_c<1. Fortunately this is compatible with the results obtained using a distinct U(2N)-symmetric approach involving the SLAC derivative <cit.>, also presented at this conference <cit.>. Recent studies of U(2N)-symmetric QED_3, an asymptotically-free theory, have also concluded N_c<1 <cit.>. It is noteworthy that a disparity between DWF and staggered fermions in a very different physical context, namely near a conformal fixed point in a 3+1d non-abelian gauge theory, has also been reported <cit.>. In future work it will obviously be necessary to study the effects of L_s→∞, V→∞, and of varying the domain wall height M. It will also be valuable to study the locality properties of the corresponding 2+1d overlap operator, furnishing a non-trivial test of the DWF approach in a new physical context. Another question to ponder is: what does chiral symmetry breaking actually look like for 2+1+1d DWF? We are currently investigating this issue by quenching the Thirring model. Finally, functional renormalisation group studies indicate that in the hunt for a QCP it may be interesting to include a U(2N)-invariant Haldane interaction -g̃^2(ψ̅γ_3γ_5ψ)^2 <cit.>; the control offered by DWF will make this a straightforward exercise.
http://arxiv.org/abs/1708.07686v1
{ "authors": [ "Simon Hands" ], "categories": [ "hep-lat" ], "primary_category": "hep-lat", "published": "20170825105604", "title": "Numerical study of the $2+1d$ Thirring model with U($2N$)-invariant fermions" }
http://arxiv.org/abs/1708.07436v2
{ "authors": [ "Thông T. Nguyên", "Siu Cheung Hui" ], "categories": [ "cs.LG", "cs.CR", "cs.DB" ], "primary_category": "cs.LG", "published": "20170824142544", "title": "Differentially Private Regression for Discrete-Time Survival Analysis" }
§ INTRODUCTION

The Sun/Moon photometer CE318-T, installed at the Southern site of the Cherenkov Telescope Array (CTA) in Chile, is part of the auxiliary scientific instrumentation developed for the atmosphere monitoring and calibration of the CTA <cit.>. The photometer can measure the day-time and night-time evolution of the integral optical depth of the atmosphere with very high precision in 9 photometric passbands (with central wavelengths at 340, 380, 440, 500, 675, 870, 937, 1020 and 1640 nm) and with a high cadence of one measurement every three minutes. This allows an understanding of the detailed characteristics of the atmospheric extinction and the underlying particulates, mainly aerosols. The usual disadvantage of passive devices measuring AOD at night is the low incoming light flux from stars. Using the Moon as a light source, with a brightness 10^5 times that of the brightest star Sirius, enables the photometer to reach a high signal-to-noise ratio and therefore also a precision unreachable for star photometers. The only disadvantage is that the photometer does not measure continuously at night, but requires at least 40 % illumination of the Moon and its altitude above 10 degrees in order to obtain reasonable precision of all measurements. The photometer is a completely stand-alone device, equipped with built-in batteries, a solar panel, GPS synchronization and a control unit which enables autonomous measurements and data acquisition. It was installed at the Southern site of the CTA in June 2016 and collected data until September 2016. In this work we present the analysis of the last two months of observations. Installed at a high altitude of 2154 metres above sea level, near Cerro Paranal, a place with very stable atmospheric conditions, the photometer can also be used for the characterization of its performance and for testing new methods of AOD retrieval, cloud-screening and calibration. This is very important in the case of Moon photometry, which is still not completely optimized. In this work, we discuss our calibration method for nocturnal measurements and a modification of the usual cloud-screening for the purposes of Moon photometry.

§ CALIBRATION

The photometer is part of AERONET (AErosol RObotic NETwork)[https://aeronet.gsfc.nasa.gov/], a worldwide network of precisely calibrated instruments, which provides globally distributed observations of diurnal AODs. It was calibrated for diurnal measurements in AERONET in November 2016, which allows us to compare the results of our on-site calibration with those obtained by AERONET and to estimate systematic uncertainties of our AODs. However, photometry of the Moon and calibration of nocturnal measurements is still not satisfactorily understood, and AERONET does not yet provide calibration of the photometer for the purposes of nocturnal measurements. In order to obtain night-time AODs we had to develop our own methods based on recently published papers. For the purpose of diurnal calibration, the classical Langley method was used. The Langley method is based on a simple linear fit of equation (<ref>), which follows from the Lambert-Beer law:

ln V = ln (V_0 / R^2) - X τ

where V stands for the measured voltage, X for the airmass, τ for the optical depth, R for the Sun-Earth distance, and V_0 is the so-called extraterrestrial voltage, assumed constant in a given photometric passband.
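In practice the Langley calibration is a straight-line fit of ln V against airmass over one clear morning; a minimal sketch (our own illustration, with iterative sigma-clipping as used below and the RMS quality cut quoted in the text) is:

```python
# Minimal sketch of the Langley calibration: fit ln V = ln(V0/R^2) - X*tau
# over one morning; the intercept gives the extraterrestrial voltage V0
# and the (negative) slope the total optical depth tau.
import numpy as np

def langley_fit(airmass, voltage, r_sun=1.0, n_sigma=3.0, n_iter=3):
    x, y = np.asarray(airmass, float), np.log(np.asarray(voltage, float))
    keep = np.ones_like(x, dtype=bool)
    for _ in range(n_iter):                       # iterative sigma-clipping
        slope, intercept = np.polyfit(x[keep], y[keep], 1)
        resid = y - (slope * x + intercept)
        keep = np.abs(resid) < n_sigma * resid[keep].std()
    tau = -slope                                  # total optical depth
    v0 = np.exp(intercept) * r_sun**2             # extraterrestrial voltage
    rms = resid[keep].std()                       # keep only fits with rms <= 0.01
    return v0, tau, rms
```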
According to the suggestion made by Shaw <cit.>, only morning measurements in a short interval of airmasses from 2 to 5 were used, because of more stable atmospheric conditions in the morning than after noon. After a preliminary cloud-screening based on the AERONET V3 algorithm (see further details in section <ref>), sigma-clipping was applied in order to reject outliers and remaining measurements affected by clouds. Finally, only those fits satisfying an empirically estimated threshold on the RMS of the distribution of residuals, RMS≤ 0.01, were kept, and the final precision of the mean values of V_0 for each passband[The 937 nm passband is not included, because there is a strong contribution of water vapour to the absorption and the total optical depth is therefore highly variable.] is listed in Table <ref>, together with its comparison to those obtained from AERONET.

The case of calibration of nocturnal measurements is much more difficult, because the illumination of the Moon varies with time; the extraterrestrial voltage therefore depends strongly on the phase of the Moon, and the standard Langley method has to be modified. Barreto et al. <cit.> developed the so-called Lunar-Langley method, where the extraterrestrial voltage V_0 in each passband is modified as V_0 = I_0 κ, where κ is a calibration constant and I_0 is the extraterrestrial irradiance of the detector, which can be calculated using the ROLO lunar reflectance model <cit.>. The lunar reflectance in the ROLO model is given as a complicated empirically derived analytical function of geometrical variables, namely the absolute value of the phase angle, the selenographic longitude of the Sun, and the selenographic latitude and longitude of the observer. These variables have to be obtained with the use of some reliable source of precise ephemerides. In this work, we used JPL (Jet Propulsion Laboratory) Horizons ephemerides obtained via the CALLHORIZONS library.[Available at <https://pypi.python.org/pypi/CALLHORIZONS>]

The accuracy of the ROLO model plays a critical role in the calibration. The authors of the original paper quoted an accuracy of better than 1 %. However, during our analysis it turned out that after the Lunar-Langley calibration we obtained many negative AODs or apparent nocturnal cycles of AODs (see the left panel of Fig. <ref>), occurring when the calibration is wrong (e.g. <cit.>). This means that κ is not really a constant and the extraterrestrial voltage, although corrected by the ROLO model, still varies with time. This problem was independently found by Barreto et al. <cit.>, and they also recently proposed an original method to overcome it <cit.>. They kept κ constant and fitted the differences between extrapolated diurnal AODs and measured nocturnal AODs with a polynomial function of the Moon phase, modulated by airmass. Our approach was different. Since we have a lot of Langley plots from many clear nights with stable conditions, we were able to study possible dependencies of κ, and it turned out that κ strongly depends on the phase of the Moon, as shown in Fig. <ref>. We therefore fitted the dependence with a third-order polynomial to obtain a corrected κ for each Moon phase. An example of corrected nocturnal AODs is shown in the right panel of Fig. <ref>, and it can be seen that there are no residual nocturnal cycles. RMS values of the residuals of the fits of the κ dependence on the Moon phase are listed in Tab. <ref>.
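The correction itself is simple to apply; a minimal sketch (our own illustration; array names are hypothetical) fits the polynomial to per-night κ values and evaluates it at the Moon phase of each nocturnal measurement:

```python
# Minimal sketch of the phase-dependent kappa correction: fit a
# third-order polynomial to kappa values from clear-night Lunar-Langley
# plots, then use kappa(g) instead of a constant in V0 = I0 * kappa.
import numpy as np

def fit_kappa_vs_phase(phases, kappas, deg=3):
    """Return a callable polynomial kappa(g) fitted to Langley results."""
    return np.poly1d(np.polyfit(phases, kappas, deg))

# usage sketch:
# kappa_of_g = fit_kappa_vs_phase(night_phases, night_kappas)
# v0 = i0_rolo * kappa_of_g(g_obs)   # phase-corrected extraterrestrial voltage
```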
The persistent question is whether the ROLO model is accurate enough, and if not, why. This would be, of course, one of the possible explanations for the residual discrepancy. But as Tom Stone pointed out <cit.>, temperature variations of the sensor, or non-linearities of the detector at the low fluxes of nocturnal observations taken far from the full Moon phase, can also cause the observed phase dependence of κ. It is still an open question and a matter of intensive research.

§ CLOUD-SCREENING

A key part of the AOD retrieval is cloud-screening, because even optically thin clouds in the path of observation strongly affect the obtained value of AOD. In this work, we used a method based on a modified AERONET V3 algorithm <cit.> consisting of two steps - applying the triplet stability criterion and a smoothness check. The AERONET algorithm was developed for diurnal measurements only, and the thresholds were universally set on large data samples from many observing sites. In the next paragraphs we describe our readjustment of the method to our site and its modification for the purpose of nocturnal measurements.

The triplet stability criterion for diurnal observations according to the AERONET V3 algorithm states that a given measurement is affected by clouds if Δτ > max(0.01, 0.015 τ) in all 675, 870 and 1020 nm passbands, where Δτ is the difference between the maximal and minimal value of the triplet. However, it turned out that this general criterion is too loose in the case of our data, taken at a site with stable atmospheric conditions. We therefore decided to lower the 0.01 threshold to 0.005, based on empirical observation. An example of the triplet stability of our diurnal measurements is shown in the left panel of Fig. <ref> for one month of observation. In the case of lunar measurements, the situation becomes more difficult. Since the triplet stability depends on the incident flux, it also depends on the lunar phase, and the criterion has to be modified. To a first approximation, the dependence can be described by a simple polynomial of second order, as shown in the right panel of Fig. <ref>. Based on our empirical assessment, we modified the criterion to Δτ < max(0.005, 0.015 τ) P(g), where P(g) is a function of the Moon phase g given by equation

P(g) = 0.8g^2 - 0.2|g| + 1.

The second step of the cloud-screening, the smoothness check, assumes that the AODs do not vary too rapidly. According to the AERONET V3 algorithm this means that, in the case of diurnal observations, ΔAOD/ ΔJD[JD is the so-called Julian date.] is required to be less than 14.4 days^-1 for each pair of measurements in the 500 nm passband if there are no clouds. As shown in Fig. <ref>, in our case the general criterion is also too loose and we had to lower the threshold to the empirical value of 1 day^-1. This led to the rejection of most outliers, which we believed were caused by clouds. For the purpose of lunar measurements the criterion had to be modified, due to the fact that ΔAOD/ ΔJD is phase dependent, as shown in Fig. <ref>. We used the same function as (<ref>) to modulate the diurnal criterion: ΔAOD/ ΔJD < 1.4 P(g) days^-1.
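Both screening steps reduce to simple per-measurement tests; a minimal sketch (our own illustration, with the thresholds quoted above) is:

```python
# Minimal sketch of the phase-modulated cloud-screening criteria.
import numpy as np

def P(g):
    """Second-order phase-modulation polynomial, g = Moon phase."""
    return 0.8 * g**2 - 0.2 * np.abs(g) + 1.0

def triplet_cloud_free(tau_min, tau_max, tau, g=None):
    """Triplet stability criterion (g=None for diurnal data)."""
    limit = np.maximum(0.005, 0.015 * tau)
    if g is not None:
        limit = limit * P(g)            # loosen the cut away from full Moon
    return (tau_max - tau_min) < limit

def smooth_cloud_free(d_aod, d_jd, g=None):
    """Smoothness check on consecutive 500 nm AODs (units: per day)."""
    limit = 1.0 if g is None else 1.4 * P(g)
    return np.abs(d_aod / d_jd) < limit
```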
§ RESULTS

The methods described above were applied to data taken during two months of observation – August and September 2016. Final values of the AODs were obtained after subtraction of the molecular absorption computed with the use of a MODTRAN model <cit.>. Since the flux of the moonlight is too low at shorter wavelengths, the lowest two channels – 340 and 380 nm – were not used in the analysis. In addition, our MODTRAN model covers wavelengths only up to 900 nm, and therefore the 937, 1020 and 1640 nm channels were also excluded from the processing.

Diurnal AODs together with nocturnal ones for the 500 nm passband for one month of observation are shown in Fig. <ref>. Despite the fact that the AODs show rapid variation, the nocturnal measurements connect very well to the diurnal ones. In some cases the nocturnal AODs are higher than expected from the diurnal trend (for example days 53 and 54 from the beginning of the time interval). We think that this is most probably caused by imperfect calibration. Note that a calibration error of 1 % results in an uncertainty in the optical depth of ≈0.01 for optical airmass X=1, if other sources of uncertainties are neglected. The calibration accuracy naturally decreases with decreasing lunar irradiance and is therefore phase dependent, as shown in Table <ref>. Thus a mismatch of the order of ≈0.01 in AOD is expected for phases far from the full Moon. One can also notice that in most cases the AOD increases during the day. At this point, we are not sure whether these trends are real or not. This effect is known for example from Mauna Loa observations, where AODs increase in the afternoon due to the upslope winds. But we have to be careful, because such behavior might also be related to the unstable temperature of the detector.

The overall statistics of the AODs retrieved at the southern CTA site is shown in the right panel of Fig. <ref>, and the distribution of uncertainties is shown in Fig. <ref>. Uncertainties of diurnal AODs (u^D_AOD) and nocturnal AODs (u^N_AOD), respectively, were calculated from the following relations:

(u^D_AOD)^2 = 1/X^2u(V_0)^2/V_0^2 + u_sys^2, (u^N_AOD)^2 = 1/X^2(u(κ)^2/κ^2 + u(I_0)^2/I_0^2 + u(V)^2/V^2) + u_sys^2,

where u(V_0)/V_0 and u(κ)/κ stand for the relative accuracy of the calibration constants, u(I_0)/I_0 is the relative accuracy of the ROLO model (≈ 1 %) and u(V)/V is the relative precision of each measurement, derived from the triplet variability <cit.>. Systematic uncertainties (u_sys) were estimated from the differences between our calibration and the AERONET one.

This work was conducted in the context of the CTA CCF Work Package. We gratefully acknowledge financial support from the agencies and organizations listed here: http://www.cta-observatory.org/consortium_acknowledgments. This work was also supported by the Ministry of Education of the Czech Republic (MSMT LTT17006, LM2015046 and CZ.02.1.01/0.0/0.0/16_013/0001403).

ebr2017 Ebr, J. et al., Atmospheric calibration of the Cherenkov Telescope Array, these proceedings. barreto2016 Barreto, Á. et al., Atmos. Meas. Tech. 09 (2016) 02, 10.5194/amt-9-631-2016 shaw1983 Shaw G. E., Bulletin of the AMS 64 (1983) 01, 10.1175/1520-0477 barreto2013 Barreto, A. et al., Atmos. Meas. Tech. 06 (2013) 03, 10.5194/amt-6-585-2013 rolo2005 Kieffer, H. H. and Stone, T. C., AJ 129 (2005) 06 cachorro2004 Cachorro, V. E. et al., Geophysical Research Letters 31 (2004) 12, 10.1029/2004GL019651 barreto2017 Barreto, Á. et al., Atmos. Meas. Tech. Discuss., 2017 (2017), 10.5194/amt-2016-423 stone_talk2017 Stone, T., Talk at the Lunar Photometry Workshop, Izaña Atmos. Research Center, Tenerife (2017) smirnov2000 Smirnov A. et al., Remote Sensing of Env. 73 (2000) 03, 10.1016/S0034-4257(00)00109-7 holben_talk2013 Holben, B., Talk at the ICAP 5th working group meeting (2013) modtran Berk A, et al., In proceedings of SPIE 9088 (2014), 10.1117/12.2050433
kasten_young1986 Kasten, F. and Young, A. T., Applied Optics 28 (1989) 22, 10.1364/AO.28.004735 michalsky1994 Michalsky, J. J. et al., In Proceedings of ISCAAO (1994)
http://arxiv.org/abs/1708.07484v2
{ "authors": [ "Jakub Juryšek", "Michael Prouza" ], "categories": [ "astro-ph.HE", "astro-ph.IM" ], "primary_category": "astro-ph.HE", "published": "20170824163424", "title": "Sun/Moon photometer for the Cherenkov Telescope Array - first results" }
t1This work is supported in part by the TUBITAK grant 115F212. e1e-mail: [email protected] e2e-mail: [email protected] e3e-mail: [email protected] e4e-mail: [email protected] Department of Physics, İzmir Institute of Technology, Urla, İzmir, 35430 TURKEY TÜBİTAK National Metrology Institute, Gebze, Kocaeli, 41470 TURKEY Hidden Spin-3/2 Field in the Standard Modelt1 Durmuş Demir e1,addr1 Canan Karahan e2,addr1 Beste Korutlu e3,addr2 Ozan Sargın e4,addr1 Received: December 30, 2023 / Accepted: December 30, 2023 ================================================================================================================== Here we show that a massive spin-3/2 field can hide in the SM spectrum in a way revealing itself only virtually. We study collider signatures and loop effects of this field, and determine its role in Higgs inflation and its potential as Dark Matter. We show that this spin-3/2 field has a rich linear collider phenomenology and motivates consideration of a neutrino-Higgs collider. We also show that the study of Higgs inflation, dark matter and dark energy can reveal more about the neutrino and dark sector. 12.60.-i 95.35.+d

§ INTRODUCTION

The Standard Model (SM) of strong and electroweak interactions, spectrally completed by the discovery of its Higgs boson at the LHC <cit.>, seems to be the model of physics at the Fermi energies. It does so because various experiments have so far revealed no new particles beyond the SM spectrum. There is, however, at least the dark matter (DM), which requires new particles beyond the SM. Physically, therefore, we must use every opportunity to understand where those new particles can hide, if any. In the present work we study a massive spin-3/2 field hidden in the SM spectrum. This higher-spin field, described by the Rarita-Schwinger equations <cit.>, has to obey certain constraints to have the correct degrees of freedom when it is on the physical shell. At the renormalizable level, it can couple to the SM matter only via the neutrino portal (the composite SM singlet formed by the lepton doublet and the Higgs field). This interaction is such that it vanishes when the spin-3/2 field is on shell. In Sec. 2 below we give the model and basic constraints on the spin-3/2 field. In Sec. 3 we study collider signatures of the spin-3/2 field. We study there ν_L h →ν_L h and e^-e^+→ W^+W^- scatterings in detail. We give analytical computations and numerical predictions. We propose there a neutrino-Higgs collider and emphasize the importance of the linear collider in probing the spin-3/2 field. In Sec. 4 we turn to loop effects of the spin-3/2 field. We find that the spin-3/2 field adds logarithmic and quartic UV-sensitivities atop the logarithmic and quadratic ones in the SM. We convert the power-law UV-dependent terms into curvature terms as a result of the incorporation of gravity into the SM. Here we use the results of <cit.>, which show that gravity can be incorporated into the SM properly and naturally (i) if the requisite curved geometry is structured by interpreting the UV cutoff as a constant value assigned to the spacetime curvature, and (ii) if the SM is extended by a secluded new physics (NP) sector that does not have to interact with the SM. This mechanism eliminates the big hierarchy problem by metamorphosing the quadratic UV part of the Higgs boson mass into the Higgs-curvature coupling. In Sec. 5 we discuss the possibility of Higgs inflation via the large Higgs non-minimal coupling induced by the spin-3/2 field.
We find that Higgs inflation is possible in a wide range of parameters provided that the secluded NP sector is crowded enough. In Sec. 6 we discuss the DM. We show therein that the spin-3/2 field is a viable DM candidate. We also show that the singlet fields in the NP can form a non-interacting DM component. In Sec. 7 we conclude. There, we give a brief list of problems that can be studied as a furthering of the material presented in this work.

§ A LIGHT SPIN-3/2 FIELD

Introduced for the first time by Rarita and Schwinger <cit.>, ψ_μ propagates with

S^αβ(p) = (i/(p̸ - M)) Π^αβ(p),

to carry one spin-3/2 and two spin-1/2 components through the projector <cit.>

Π^αβ = -η^αβ + (γ^αγ^β)/3 + (γ^α p^β - γ^β p^α)/(3M) + (2 p^α p^β)/(3M^2),

that exhibits both spinor and vector characteristics. It is necessary to impose <cit.> p^μψ_μ(p)|_{p^2=M^2}=0, and γ^μψ_μ(p)|_{p^2=M^2}=0, to eliminate the two spin-1/2 components and to make ψ_μ satisfy the Dirac equation (p̸ - M)ψ_μ=0, as expected of an on-shell fermion. The constraints (<ref>) and (<ref>) imply that p^μψ_μ(p) and γ^μψ_μ(p) both vanish on the physical shell p^2=M^2. The latter is illustrated in Fig. <ref> taking ψ_μ on-shell. Characteristic of singlet fermions, the ψ_μ, at the renormalizable level, makes contact with the SM via

ℒ^(int)_3/2 = c^i_3/2 L̅^i H γ^μψ_μ + h.c.,

in which L^i = (ν_ℓL, ℓ_L)^T_i is the lepton doublet (i=1,2,3), and H = (1/√(2))(v + h + iφ^0, √(2)φ^-)^T is the Higgs doublet with vacuum expectation value v ≈ 246 GeV, Higgs boson h, and Goldstone bosons φ^-, φ^0 and φ^+ (forming the longitudinal components of the W^-, Z and W^+ bosons, respectively). In general, neutrinos are sensitive probes of singlet fermions. They can get masses through, for instance, the Yukawa interaction (<ref>), which leads to the Majorana mass matrix (m_ν)^i j_3/2 ∝ c^i_3/2 (v^2/M) c^⋆ j_3/2 after integrating out ψ_μ. This mass matrix can, however, not lead to the experimentally known neutrino mixings <cit.>. This means that flavor structures necessitate additional singlet fermions. Of such are the right-handed neutrinos ν_R^k of mass M_k (k=1,2,3,…), which interact with the SM through

ℒ^(int)_R = c_R^i k L̅^i H ν_R^k + h.c.

to generate the neutrino Majorana masses (m_ν)^i j_R ∝ c_R^i k (v^2/M_k) c_R^⋆ k j of more general flavor structure. This mass matrix must have enough degrees of freedom to fit the data <cit.>. Here we make a pivotal assumption. We assume that ψ_μ and ν_R^k can weigh as low as a TeV, and that c^i_3/2 and some of the c_R^i k can be 𝒪(1). We, however, require that the contributions to neutrino masses from ψ_μ and ν_R add up to reproduce the experimental result

(m_ν)^i j_3/2 + (m_ν)^i j_R ≈ (m_ν)^i j_exp

via cancellations among different terms. We therefore take c_3/2 ≲ 𝒪(1), M ≳ TeV and investigate the physics of ψ_μ. This cancellation requirement does not have to cause any excessive fine-tuning simply because ψ_μ and ν_R^k can have appropriate symmetries that correlate their couplings. One possible symmetry would be a rotation of γ^μψ_μ and ν_R^k into each other. We defer the study of possible symmetries to another work in progress <cit.>. The right-handed sector, which can involve many ν_R^k fields, is interesting by itself but hereon we focus on ψ_μ and take, for simplicity, c^i_3/2 real and family-universal (c^i_3/2=c_3/2 for ∀ i).

§ SPIN-3/2 FIELD AT COLLIDERS

It is only when it is off-shell that ψ_μ can reveal itself through the interaction (<ref>). This means that its effects are restricted to modifications in scattering rates of the SM particles.
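It is instructive to see numerically why the cancellation requirement of Sec. 2 is unavoidable. Below is a back-of-the-envelope Python sketch of the see-saw-like estimate (m_ν)_3/2 ∝ c_3/2^2 v^2/M, with the proportionality constant set to unity purely for illustration (only the scaling is fixed by the discussion above):

# Order-of-magnitude estimate of (m_nu)_{3/2} ~ c_{3/2}^2 v^2 / M; the O(1)
# prefactor is an assumption made purely for illustration.
v = 246.0        # Higgs vacuum expectation value in GeV
c32 = 1.0        # c_{3/2} ~ O(1), as assumed in the text
M = 1.0e3        # spin-3/2 mass of 1 TeV
m_nu_GeV = c32 ** 2 * v ** 2 / M
print(m_nu_GeV, "GeV =", m_nu_GeV * 1e9, "eV")
# ~60 GeV, roughly twelve orders of magnitude above a sub-eV neutrino mass,
# so the psi_mu and nu_R contributions must cancel to high accuracy.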
To this end, as follows from (<ref>), it participates in

* ν_L h →ν_L h (and also ν_Lν_L→ h h)

* e^+ e^- → W^+_L W^-_L (and also ν_Lν_L→ Z_L Z_L)

at the tree level. They are analyzed below in detail.

§.§ ν_L h →ν_L h Scattering

Shown in Fig. <ref> are the two box diagrams which enable ν_L h →ν_L h scattering in the SM. Added to this loop-suppressed SM piece is the ψ_μ piece depicted in Fig. <ref>. The two contributions add up to give the cross section

dσ(ν_L h →ν_L h)/dt = 𝒯_νh(s,t)/(16π (s-m_h^2)^2),

in which the squared matrix element

𝒯_νh(s,t) = 9(c_3/2/(3M))^4 ((s-m_h^2)^2 + st) - 16(c_3/2/(3M))^2 (2(s-m_h^2)^2 + (2s-m_h^2)t) 𝕃 + 2(s-m_h^2)(s+t-m_h^2) 𝕃^2

involves the loop factor

𝕃 = (g_W^2+g_Y^2)^2 M_Z^2 m_h^2 I(M_Z)/(192π^2) + g_W^4 M_W^2 m_h^2 I(M_W)/(96π^2),

in which g_W (g_Y) is the isospin (hypercharge) gauge coupling, and

I(μ) = ∫_0^1 dx ∫_0^{1-x} dy ∫_0^{1-x-y} dz ((s-m_h^2)(x+y+z-1)y - txz + m_h^2 y(y-1) + μ^2(x+y+z))^{-2}

is the box function. In Fig. <ref>, we plot the total cross section σ(ν_L h →ν_L h) as a function of the neutrino-Higgs center-of-mass energy for different M values. The first important thing about the plot is that there is no resonance formation around √(s)=M. This confirms the fact that ψ_μ, under the constraint (<ref>), cannot come to the physical shell with the couplings in (<ref>). In consequence, the main search strategy for ψ_μ is to look for deviations from the SM rates rather than resonance shapes. The second important thing about the plot is that, in general, as revealed by (<ref>), the larger M is, the smaller the ψ_μ contribution. The cross section starts around 10^-7 pb, and falls rapidly with √(s). (The SM piece, as a loop effect, is too tiny to be observable: σ(ν_L h →ν_L h) ≲ 10^-17 pb.) It is necessary to have some 10^4/fb integrated luminosity (100 times the target luminosity at the LHC) to observe a few events in a year. This means that ν_L ν_L → h h scattering can probe ψ_μ only at high luminosity but with a completely new scattering scheme. Fig. <ref> shows that neutrino-Higgs scattering can be a promising channel to probe ψ_μ (at high-luminosity, high-energy machines). The requisite experimental setup would involve crossing of Higgs factories with accelerator neutrinos. The setup, schematically depicted in Fig. <ref>, can be viewed as incorporating future Higgs (CEPC <cit.>, FCC-ee <cit.> and ILC <cit.>) and neutrino <cit.> factories. If ever realized, it could be a rather clean experiment with negligible SM background. This hypothetical “neutrino-Higgs collider”, depicted in Fig. <ref>, must have, as suggested by Fig. <ref>, some 10^4/fb integrated luminosity to be able to probe a TeV-scale ψ_μ. In general, the need for high luminosities is a disadvantage of this channel. (Feasibility study, technical design and possible realization of a “neutrino-Higgs collider” fall outside the scope of the present work.)

§.§ e^+ e^- → W_L^+ W_L^- Scattering

It is clear that ψ_μ directly couples to the Goldstone bosons φ^{+,-,0} via (<ref>). The Goldstones, though eaten up by the W and Z bosons in acquiring their masses, reveal themselves at high energies. In fact, the Goldstone equivalence theorem <cit.> states that scatterings at energy E involving longitudinal W^±_L bosons are equal to scatterings that involve φ^± up to terms 𝒪(M_W^2/E^2). This theorem, with a similar equivalence for the longitudinal Z boson, provides a different way of probing ψ_μ. In this regard, depicted in Fig. <ref> is the ψ_μ contribution to e^+ e^- → W_L^+ W_L^- scattering in light of the Goldstone equivalence.
The SM amplitude is given in <cit.>. The total differential cross section

dσ(e^+ e^- → W^+_L W^-_L)/dt = 𝒯_{W_L W_L}(s,t)/(16π s^2)

involves the squared matrix element

𝒯_{W_L W_L}(s,t) = (g_W^2/(s-M_Z^2) (-1 + M_Z^2/(4M_W^2) + (M_Z^2-M_W^2)/s) + g_W^2/(s-4M_Z^2) (1 + M_W^2/t) + c^2_3/2/(3M^2))^2 (-2sM_W^2 - 2(t-M_W^2)^2) + (c^4_3/2 s/(18M^2))(4 + t/(t-M^2))^2.

Plotted in Fig. <ref> is σ(e^+ e^- → W^+_L W^-_L) as a function of the e^+ e^- center-of-mass energy for different values of M. The cross section, which falls with √(s) without exhibiting a resonance shape, is seen to be large enough to be measurable at the ILC <cit.>. In general, the larger M is, the smaller the cross section, but even 1/fb of luminosity is sufficient for probing ψ_μ over a wide range of mass values. Collider searches for ψ_μ, as illustrated by the ν_L h →ν_L h and e^-e^+→ W^+W^- scatterings, can access spin-3/2 fields of several TeV mass. For instance, the ILC, depending on its precision, can confirm or exclude a ψ_μ of even 5 TeV mass with an integrated luminosity around 1/fb. Depending on the possibility and feasibility of a neutrino-neutrino collider (mainly accelerator neutrinos), it may be possible to study also ν_L ν_L → h h and ν_L ν_L → Z_L Z_L scatterings, which are expected to have similar sensitivities to M.

§ SPIN-3/2 FIELD IN LOOPS

As an inherently off-shell field, ψ_μ is expected to reveal itself mainly in loops. One possible loop effect would be the generation of neutrino masses, but chirality forbids it. Despite the couplings in (<ref>), therefore, neutrino masses do not get any contribution from the ψ_μ-h loop. One other loop effect of ψ_μ would be radiative corrections to the Higgs boson mass. This is not forbidden by any symmetry. The relevant Feynman diagram is depicted in Fig. <ref>. It adds to the Higgs boson squared-mass a logarithmic piece

(δ m_h^2)_log = (c_3/2^2/(12π^2)) M^2 log(G_F M^2),

relative to the logarithmic piece log(G_F Λ^2) in the SM, and a quartic piece

(δ m_h^2)_4 = (c_3/2^2/(48π^2)) Λ^4/M^2,

which have the potential to override the experimental result <cit.> depending on how large the UV cutoff Λ is compared to the Fermi scale G_F^{-1/2} = 293 GeV. The logarithmic contribution in (<ref>), which originates from the η^αβ part of (<ref>), gives rise to the little hierarchy problem in that the larger M is, the stronger the destabilization of the SM Higgs sector. Leaving aside the possibility of cancellations with similar contributions from the right-handed neutrinos ν_R^k in (<ref>), the little hierarchy problem can be prevented if M (more precisely M/c_3/2) lies in the TeV domain. The quartic contribution in (<ref>), which originates from the longitudinal p^α p^β term in (<ref>), gives rise to the notorious big hierarchy problem in that the larger Λ is, the stronger the destabilization of the SM Higgs sector. This power-law UV sensitivity exists already in the SM,

(δ m_h^2)_2 = (3Λ^2/(16π^2 |⟨ H ⟩|^2))(m_h^2 + 2M_W^2 + M_Z^2 - 4m_t^2),

at the quadratic level <cit.>, and violates the LHC bounds unless Λ ≲ 550 GeV. This bound obviously contradicts the LHC experiments since the latter continue to confirm the SM at multi-TeV energies. This experimental fact makes it obligatory to find a natural UV completion of the SM. One possibility is to require (δ m_h^2)_4 to cancel out (δ m_h^2)_2.
This requirement involves a severe fine-tuning (as with a scalar field <cit.>, a Stueckelberg vector <cit.> and spacetime curvature <cit.>) and cannot form a viable stabilization mechanism. Another possibility would be to switch, for instance, to the dimensional regularization scheme, wherein the quartic and quadratic UV-dependencies are known to disappear. This, however, is not a solution. The reason is that the SM, as a quantum field theory of the strong and electroweak interactions, needs gravity to be incorporated as the fourth known force. And the fundamental scale of gravity, M_Pl, inevitably sets an ineliminable physical UV cutoff (rendering Λ physical). This cutoff forces quantum field theories to exist in between physical UV and IR scales. The SM plus ψ_μ (plus right-handed neutrinos), for instance, ranges from G_F^{-1/2} at the IR up to Λ at the UV such that both scales are physical (not to be confused with the formal momentum cutoffs employed in the cutoff regularization). To stabilize the SM, it is necessary to metamorphose the destabilizing UV effects. This necessitates a physical agent. The most obvious candidate is gravity. That is to say, the UV-naturalness problems can be a clue to how quantized matter must gravitate. Indeed, quantized matter in a classical curved geometry suffers from inconsistencies. The situation can be improved by considering long-wavelength matter, obtained by integrating out high-frequency modes. This means that the theory to be carried into curved geometry for incorporating gravity is not the full action but the effective action (see the discussions in <cit.> and <cit.>). Thus, starting with the SM effective action in flat spacetime with its well-known logarithmic, quartic and quadratic UV-sensitivities, gravity can be incorporated in a way ensuring UV-naturalness. More precisely, gravity gets incorporated properly and naturally (i) if the requisite curved geometry is structured by interpreting Λ^2 as a constant value assigned to the spacetime curvature, and (ii) if the SM is extended by new physics (NP) that does not have to interact with the SM. The ψ_μ can well be an NP field. Incorporating gravity by identifying Λ^2 g_μν with the Ricci curvature R_μν(g), the fundamental scale of gravity gets generated as

M_Pl^2 ≈ ((n_b-n_f)/(2(8π)^2)) Λ^2,

where n_b (n_f) is the total number of bosons (fermions) in the SM plus the NP. The ψ_μ increases n_f by 4, the right-handed neutrinos by 2. There are various other fields in the NP, which contribute to n_b and n_f to ensure Λ ≲ M_Pl. Excepting ψ_μ, they do not need to interact with the SM fields. Induction of M_Pl ensures that the quadratic UV-contributions to the vacuum energy are channeled not to the cosmological constant but to the gravitational constant (see <cit.>, arriving at this result in a different context). This suppresses the cosmological constant down to the neutrino mass scale. The quartic UV contributions in (<ref>) and the quadratic contributions in (<ref>) (suppressing contributions from the right-handed neutrinos ν_R^k) change their roles with the inclusion of gravity. Indeed, the correction to the Higgs mass term [(δ m_h^2)_4+(δ m_h^2)_2] H^† H turns into

[3(m_h^2 + 2M_W^2 + M_Z^2 - 4m_t^2)/((8π)^2 |⟨ H ⟩|^2) + (c_3/2^2/(12(n_b-n_f))) M_Pl^2/M^2] R H^† H,

which is nothing but a direct coupling of the Higgs field to the scalar curvature R. This Higgs-curvature coupling is perfectly natural; it has no potential to destabilize the Higgs sector.
Incorporation of gravity as in <cit.> leads, therefore, to UV-naturalization of the SM with a nontrivial NP sector containing ψ_μ as its interacting member.

§ SPIN-3/2 FIELD AS ENABLER OF HIGGS INFLATION

The non-minimal Higgs-curvature coupling in (<ref>) reminds one at once of the possibility of Higgs inflation. Indeed, the Higgs field has been shown in <cit.> to lead to correct inflationary expansion provided that

(c_3/2^2/(12(n_b-n_f))) M_Pl^2/M^2 ≈ 1.7×10^4,

after dropping the small SM contribution in (<ref>). This relation puts constraints on M and Λ depending on how crowded the NP is. For a Planckian UV cutoff Λ ≈ M_Pl, the Planck scale in (<ref>) requires n_b - n_f ≈ 1300, and this leads to M/c_3/2 ≈ 6.3×10^13 GeV. This heavy ψ_μ, weighing not far from the see-saw and axion scales, acts as an enabler of Higgs inflation. (Of course, all this makes sense if the ψ_μ contribution in (<ref>) is neutralized by similar contributions from the right-handed neutrinos ν_R^k to alleviate the little hierarchy problem.) For an intermediate UV cutoff Λ ≪ M_Pl, n_b-n_f can be large enough to bring M down to lower scales. In fact, M gets lowered to M ∼ TeV for n_b-n_f ≃ 10^24, and this sets the UV cutoff Λ ∼ 3 TeV. This highly crowded NP illustrates how small M and Λ can be. Less crowded NP sectors lead to intermediate-scale M and Λ. It follows therefore that it is possible to realize Higgs inflation through the Higgs-curvature coupling (corresponding to the quartic UV-dependence the ψ_μ induces on the Higgs mass). It turns out that Higgs inflation is decided by how heavy ψ_μ is and how crowded the NP is. It is interesting that the ψ_μ hidden in the SM spectrum enables successful Higgs inflation if gravity is incorporated into the SM as in <cit.>.

§ SPIN-3/2 FIELD AS DARK MATTER

Dark matter (DM), forming one-fourth of the matter in the Universe, must be electrically neutral and long-lived. The negative searches <cit.> so far have added one more feature: the DM must have exceedingly suppressed interactions with the SM matter. It is not hard to see that the spin-3/2 fermion ψ_μ possesses all these properties. Indeed, the constraint (<ref>) ensures that scattering processes in which ψ_μ is on its mass shell must all be forbidden simply because its interactions in (<ref>) involve the vertex factor c_3/2 γ^μ. This means that decays of ψ_μ as in Fig. <ref>, as well as its co-annihilations with itself and with the SM fields, are all forbidden. Its density therefore does not change with time, and the observed DM relic density <cit.> must be its primordial density, which is determined by the short-distance physics the ψ_μ descends from. It is not possible to calculate the relic density without knowing the short-distance physics. Its mass and couplings, on the other hand, can be probed via the known SM scatterings, as studied in Sec. 3 above. In consequence, the ψ_μ, as an inherently off-shell fermion hidden in the SM spectrum, possesses all the features required of a DM candidate. Of course, the ψ_μ is not the only DM candidate in the setup. The crowded NP sector, needed to incorporate gravity in a way solving the hierarchy problem (see Sec. 4 above), involves various fields which do not interact with the SM matter. They are viable candidates for non-interacting DM as well as dark energy (see the detailed analysis in <cit.>). The non-interacting NP fields can therefore contribute to the total DM distribution in the Universe. It is, of course, not possible to search for them directly or indirectly.
In fact, they do not have to come to equilibrium with the SM matter. Interestingly, both ψ_μ and the secluded fields in the NP act as extra fields hidden in the SM spectrum. Unlike ψ_μ, which reveals itself virtually, the NP singlets remain completely intact. The main implication is that, in DM phenomenology, one must keep in mind that there can exist an unobservable, undetectable component of the DM <cit.>.

§ CONCLUSION AND OUTLOOK

In this work we have studied a massive spin-3/2 particle ψ_μ obeying the constraint (<ref>) and interacting with the SM via (<ref>). It hides in the SM spectrum as an inherently off-shell field. We first discussed its collider signatures by studying ν_L h →ν_L h and e^-e^+→ W^+W^- scatterings in detail in Sec. 3. Following this, we turned to its loop effects and determined how it contributes to the big and little hierarchy problems in the SM. Resolving the former by appropriately incorporating gravity, we showed that the Higgs field can inflate the Universe. Finally, we showed that ψ_μ is a viable DM candidate, which can be indirectly probed via the scattering processes we have analyzed. The material presented in this work can be extended in various ways. A partial list would include:

* Determining under what conditions right-handed neutrinos can lift the constraints on ψ_μ from the neutrino masses,

* Improving the analyses of ν_L h →ν_L h and e^-e^+→ W^+W^- scatterings by including loop contributions,

* Simulating e^-e^+→ W^+W^- at the ILC by taking into account planned detector acceptances and collider energies,

* Performing a feasibility study of the proposed neutrino-Higgs collider associated with ν_L h →ν_L h scattering,

* Exploring UV-naturalness by including right-handed neutrinos, and determining under what conditions the little hierarchy problem is softened,

* Including effects of the right-handed neutrinos in Higgs inflation, and determining the appropriate parameter space,

* Giving an in-depth analysis of the dark matter and dark energy by taking into account the spin-3/2 field, right-handed neutrinos and the secluded NP fields,

* Studying constraints on the masses of the NP fields from nucleosynthesis and other processes in the early Universe.

We will continue to study the spin-3/2 hidden field starting with some of these points.

Acknowledgements. This work is supported in part by the TUBITAK grant 115F212. We thank the conscientious referee for enlightening comments and suggestions.

higgs-mass G. Aad et al. [ATLAS and CMS Collaborations], “Combined Measurement of the Higgs Boson Mass in pp Collisions at √(s)=7 and 8 TeV with the ATLAS and CMS Experiments”, Phys. Rev. Lett. 114, 191803 (2015) [arXiv:1503.07589 [hep-ex]]. Rarita:1941mf W. Rarita and J. Schwinger, “On a Theory of Particles with Half-Integral Spin”, Phys. Rev. 60 (1941) 61. pilling V. Pascalutsa, “Correspondence of consistent and inconsistent spin-3/2 couplings via the equivalence theorem”, Phys. Lett. B 503, 85 (2001) [hep-ph/0008026]; T. Pilling, “Symmetry of massive Rarita-Schwinger fields”, Int. J. Mod. Phys. A 20, 2715 (2005) [hep-th/0404131]. gravity D. A. Demir, “A Mechanism of Ultraviolet Naturalness”, arXiv:1510.05570 [hep-ph]; D. A. Demir, “Curvature-Restored Gauge Invariance and Ultraviolet Naturalness,” Adv. High Energy Phys. 2016 (2016) 6727805 [arXiv:1605.00377 [hep-ph]]; gravity2 D. Demir, “Naturalizing Gravity of the Quantum Fields, and the Hierarchy Problem,” arXiv:1703.05733 [hep-ph]. neutrino-mass K. S. Babu, E. Ma and J. W. F.
Valle, “Underlying A(4) symmetry for the neutrino mass matrix and the quark mixing matrix,” Phys. Lett. B 552 (2003) 207 [hep-ph/0206292]; W. Grimus and L. Lavoura, “A Nonstandard CP transformation leading to maximal atmospheric neutrino mixing,” Phys. Lett. B 579 (2004) 113 [hep-ph/0305309]; E. Ma, “Neutrino theory: Mass, interactions, connections,” PoS CORFU 2015 (2016) 009. Ozan O. Sargin, “Symmetries of the Hidden Spin-3/2 Field and Right-Handed Neutrinos, and Implications for Neutrino Masses”, work in progress (2017). Ruan:2014xxa M. Ruan, “Higgs Measurement at e^+e^- Circular Colliders”, Nucl. Part. Phys. Proc. 273-275, 857 (2016) [arXiv:1411.5606 [hep-ex]]. Gomez-Ceballos:2013zzn M. Bicer et al. [TLEP Design Study Working Group Collaboration], “First Look at the Physics Case of TLEP”, JHEP 1401, 164 (2014) [arXiv:1308.6176 [hep-ex]]; D. d'Enterria, “Physics at the FCC-ee”, arXiv:1602.05043 [hep-ex]. Baer:2013cma H. Baer et al., “The International Linear Collider Technical Design Report - Volume 2: Physics”, arXiv:1306.6352 [hep-ph]; G. Moortgat-Pick et al., “Physics at the e+ e- Linear Collider”, Eur. Phys. J. C 75, no. 8, 371 (2015) [arXiv:1504.01726 [hep-ph]]; K. Fujii et al., “Physics Case for the International Linear Collider”, arXiv:1506.05992 [hep-ex]. Choubey:2011zzq S. Choubey et al. [IDS-NF Collaboration], “International Design Study for the Neutrino Factory, Interim Design Report”, arXiv:1112.2853 [hep-ex]; M. Bonesini, “Perspectives for Muon Colliders and Neutrino Factories”, Frascati Phys. Ser. 61, 11 (2016) [arXiv:1606.00765 [physics.acc-ph]]; D. M. Kaplan [MAP and MICE Collaborations], “Muon Colliders and Neutrino Factories”, EPJ Web Conf. 95, 03019 (2015) [arXiv:1412.3487 [physics.acc-ph]]. equivalence J. M. Cornwall, D. N. Levin and G. Tiktopoulos, “Derivation of Gauge Invariance from High-Energy Unitarity Bounds on the S Matrix,” Phys. Rev. D 10 (1974) 1145, Erratum: [Phys. Rev. D 11 (1975) 972]; M. S. Chanowitz and M. K. Gaillard, “The TeV Physics of Strongly Interacting W's and Z's,” Nucl. Phys. B 261 (1985) 379; M. E. Peskin and D. V. Schroeder, “An Introduction to Quantum Field Theory.” Veltman:1980mj M. J. G. Veltman, “The Infrared - Ultraviolet Connection”, Acta Phys. Polon. B 12, 437 (1981). normal-DM-search Z. H. Yu, J. M. Zheng, X. J. Bi, Z. Li, D. X. Yao and H. H. Zhang, “Constraining the interaction strength between dark matter and visible matter: II. scalar, vector and spin-3/2 dark matter”, Nucl. Phys. B 860, 115 (2012) [arXiv:1112.6052 [hep-ph]]; R. Ding and Y. Liao, “Spin 3/2 Particle as a Dark Matter Candidate: an Effective Field Theory Approach”, JHEP 1204, 054 (2012) [arXiv:1201.0506 [hep-ph]]; R. Ding, Y. Liao, J. Y. Liu and K. Wang, “Comprehensive Constraints on a Spin-3/2 Singlet Particle as a Dark Matter Candidate”, JCAP 1305, 028 (2013) [arXiv:1302.4034 [hep-ph]]; K. G. Savvidy and J. D. Vergados, “Direct dark matter detection: A spin 3/2 WIMP candidate”, Phys. Rev. D 87, no. 7, 075013 (2013) [arXiv:1211.3214 [hep-ph]]; S. Dutta, A. Goyal and S. Kumar, “Anomalous X-ray galactic signal from 7.1 keV spin-3/2 dark matter decay”, JCAP 1602, no. 02, 016 (2016) [arXiv:1509.02105 [hep-ph]]; M. O. Khojali, A. Goyal, M. Kumar and A. S. Cornell, “Minimal Spin-3/2 Dark Matter in a simple s-channel model”, arXiv:1608.08958 [hep-ph]; fine-tune-scalar M. Capdequi Peyranere, J. C. Montero and G. Moultaka, “Is natural fine tuning feasible in the Standard Model?”, Phys. Lett. B 260, 138 (1991); A. A. Andrianov, R. Rodenberg and N. V.
Romanenko, “Fine tuning in one Higgs and two Higgs standard model”, Nuovo Cim. A 108, 577 (1995) [hep-ph/9408301]; F. Bazzocchi, M. Fabbrichesi and P. Ullio, “Just so Higgs boson”, Phys. Rev. D 75, 056004 (2007) [hep-ph/0612280]; C. N. Karahan and B. Korutlu, “Effects of a Real Singlet Scalar on Veltman Condition”, Phys. Lett. B 732, 320 (2014) [arXiv:1404.0175 [hep-ph]]; besteyle D. A. Demir, C. N. Karahan and B. Korutlu, “Higgsed Stueckelberg Vector and Higgs Quadratic Divergence”, Phys. Lett. B 740, 46 (2015) [arXiv:1409.1033 [hep-ph]]; curvature-ft D. A. Demir, “Effects of Curvature-Higgs Coupling on Electroweak Fine-Tuning”, Phys. Lett. B 733, 237 (2014) [arXiv:1405.0300 [hep-ph]]; B. Korutlu, “Softly Fine-Tuned Standard Model and the Scale of Inflation”, Mod. Phys. Lett. A 30, no. 34, 1550179 (2015) [arXiv:1510.08606 [hep-ph]]. demir-ccp D. A. Demir, “Vacuum Energy as the Origin of the Gravitational Constant”, Found. Phys. 39, 1407 (2009) [arXiv:0910.2730 [hep-th]]; D. A. Demir, “Stress-Energy Connection and Cosmological Constant Problem”, Phys. Lett. B 701, 496 (2011) [arXiv:1102.2276 [hep-th]]. higgs-inf F. L. Bezrukov and M. Shaposhnikov, “The Standard Model Higgs boson as the inflaton”, Phys. Lett. B 659, 703 (2008) [arXiv:0710.3755 [hep-th]]. higgs-inf-2 F. Bezrukov, A. Magnin, M. Shaposhnikov and S. Sibiryakov, “Higgs inflation: consistency and generalisations,” JHEP 1101 (2011) 016 [arXiv:1008.5157 [hep-ph]]. plehn M. Klasen, M. Pohl and G. Sigl, “Indirect and direct search for dark matter,” Prog. Part. Nucl. Phys. 85 (2015) 1 [arXiv:1507.03800 [hep-ph]]; F. Kahlhoefer, “Review of LHC Dark Matter Searches,” Int. J. Mod. Phys. A 32 (2017) no. 13, 1730006 [arXiv:1702.02430 [hep-ph]]. leszek L. Roszkowski, E. M. Sessolo and S. Trojanowski, “WIMP dark matter candidates and searches - current issues and future prospects,” arXiv:1707.06277 [hep-ph]. planck P. A. R. Ade et al. [Planck Collaboration], “Planck 2015 results. XIII. Cosmological parameters,” Astron. Astrophys. 594 (2016) A13 [arXiv:1502.01589 [astro-ph.CO]].
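As a numerical companion to Sec. 4, the two power-law corrections to m_h^2 can be evaluated directly. The Python sketch below assumes rounded PDG-like electroweak inputs and a fine-tuning criterion |δm_h^2| ≤ m_h^2/2; with that (assumed) criterion, the SM quadratic piece reproduces the Λ ≲ 550 GeV bound quoted in the text:

import numpy as np

# Rounded electroweak inputs in GeV (illustrative assumptions, not the paper's values).
mh, MW, MZ, mt, vev = 125.0, 80.4, 91.2, 173.0, 246.0

def dmh2_quadratic(Lam):
    # SM piece: (delta m_h^2)_2 = 3 Lam^2 (m_h^2 + 2 M_W^2 + M_Z^2 - 4 m_t^2) / (16 pi^2 v^2)
    return 3.0 * Lam**2 * (mh**2 + 2*MW**2 + MZ**2 - 4*mt**2) / (16.0 * np.pi**2 * vev**2)

def dmh2_quartic(Lam, M, c32=1.0):
    # Spin-3/2 piece: (delta m_h^2)_4 = c_{3/2}^2 Lam^4 / (48 pi^2 M^2)
    return c32**2 * Lam**4 / (48.0 * np.pi**2 * M**2)

# Naturalness bound from |delta m_h^2| <= m_h^2 / 2 (an assumed criterion):
Lam_max = np.sqrt(0.5 * mh**2 / abs(dmh2_quadratic(1.0)))
print(f"Lambda_max ~ {Lam_max:.0f} GeV")                 # ~550 GeV, as quoted above

# Quartic destabilization for Lambda = M = 1 TeV and c_{3/2} = 1:
print(f"(delta m_h^2)_4 ~ {dmh2_quartic(1.0e3, 1.0e3):.2e} GeV^2")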
http://arxiv.org/abs/1708.07956v1
{ "authors": [ "Durmuş Demir", "Canan Karahan", "Beste Korutlu", "Ozan Sargın" ], "categories": [ "hep-ph" ], "primary_category": "hep-ph", "published": "20170826104442", "title": "Hidden Spin-3/2 Field in the Standard Model" }
§ INTRODUCTION

Despite the fact that dark matter should exist in the universe, its composition is still in question. Among the possible dark matter candidates, Weakly Interacting Massive Particles (WIMPs) are one of the leading hypothetical particle physics candidates for cold dark matter. A WIMP is a dark matter particle that interacts with known standard model particles via a force similar in strength to the weak force <cit.>. WIMPs can annihilate into standard model particles and produce photons. WIMPs can also have large dark matter decay lifetimes and would produce gamma rays similar in quantity and energy to the observed neutrinos, from 100 TeV to several PeV <cit.>. WIMP-like particles which decay may be responsible for the observation of an astrophysical neutrino excess by the IceCube detector <cit.>. Among the places in the Universe to look for signatures of dark matter, dwarf spheroidal galaxies (dSphs) are some of the best candidates for a dark matter search due to their high dark matter content and low amount of luminous material. The dwarf spheroidal galaxies considered in this analysis are companion galaxies of the Milky Way, in what is known as our Local Group. They are very low luminosity galaxies, with low diffuse Galactic gamma-ray foregrounds and little to no astrophysical gamma-ray production <cit.>. A total of 15 dSphs are considered in this analysis: Bootes I, Canes Venatici I, Canes Venatici II, Coma Berenices, Draco, Hercules, Leo I, Leo II, Leo IV, Segue 1, Sextans, Ursa Major I, Ursa Major II, Ursa Minor and Triangulum II. These dSphs were chosen for their favored declination angle for the HAWC observatory and their well-studied dark matter content with respect to other dSphs. The best DM decay lifetime limits are expected to come from the most massive objects in the universe, such as galaxies and galaxy clusters; thus we studied the M31 galaxy and the Virgo Cluster. Even though these are extended sources (∼3-6^∘), in this analysis we treated them as point sources. More detailed analyses incorporating their DM morphology are in progress, and they should provide better limits than those presented here.

§ HAWC OBSERVATORY

The High Altitude Water Cherenkov (HAWC) observatory detects high-energy gamma rays and is located at Sierra Negra, Mexico. The site is 4100 m above sea level, at latitude 18^∘59.7' N and longitude 97^∘18.6' W. HAWC is a survey instrument that is sensitive to gamma rays with energies from 500 GeV to a few hundred TeV <cit.>. HAWC consists of 300 water Cherenkov detectors (WCDs) covering a 22000 m^2 area. Each detector contains four photo-multiplier tubes <cit.>, and the trigger rate of HAWC is approximately 25 kHz. HAWC has a duty cycle >95% and a wide, unbiased field of view of ∼2 sr. It has been operating with a partial detector since August 2013 and with the full detector since March 2015. Here we present results from 507 days of operation with the full detector.

§ DARK MATTER GAMMA-RAY FLUX

§.§ Gamma-ray Flux from Dark Matter Annihilation and Decay

Expected gamma-ray fluxes from dark matter annihilation were calculated using both the astrophysical properties of the potential dark matter source and the particle properties of the initial and final-state particles, for different dark matter masses and different annihilation channels.
The differential gamma-ray flux integrated over the solid angle of the source is given by:

dF/dE_annihilation = (⟨σ_Av⟩/(8π M_χ^2)) (dN_γ/dE) J,

where ⟨σ_Av⟩ is the velocity-weighted dark matter annihilation cross-section, dN_γ/dE is the gamma-ray spectrum per dark matter annihilation, and M_χ is the dark matter particle mass. The J-factor (J) is defined as the dark matter density (ρ) squared, integrated along the line-of-sight distance x and over the solid angle of the observation region:

J = ∫_source dΩ ∫ dx ρ^2(r(θ,x)),

where the distance from the earth to a point within the source is given by r(θ,x) = √(R^2 - 2xRcos(θ) + x^2), R is the distance to the center of the source, and θ is the angle between the center of the source and the line of sight. The gamma-ray flux from dark matter decay is similar to the dark matter annihilation gamma-ray flux described above in Equation <ref>:

dF/dE_decay = (1/(4πτ M_χ)) (dN_γ/dE) D.

The decay process involves only one particle; thus the gamma-ray flux from dark matter decay depends on a single power of the dark matter density ρ instead of the square:

D = ∫_source dΩ ∫ dx ρ(r_gal(θ,x)).

§.§ Dark Matter Density Distributions

Density profiles describe how the density (ρ) of a spherical system varies with distance (r) from its center. We used the Navarro-Frenk-White (NFW) model for the dark matter density profiles. The NFW density profile is given by

ρ_NFW(r) = ρ_s/((r/r_s)^γ (1+(r/r_s)^α)^{(β-γ)/α}),

where ρ_s is the scale density, r_s is the scale radius of the galaxy, γ is the slope for r≪r_s, β is the slope for r≫r_s and α is the transition parameter from the inner slope to the outer slope. NFW profile parameters from <cit.> are used for all dSphs listed, except for Triangulum II. The J- and D-factors for 14 dSph sources are calculated using the CLUMPY software <cit.> for different realizations of the tabulated values and their respective uncertainties, for an angular window of θ_max from <cit.>. A similar procedure was applied for M31 <cit.> and the Virgo Cluster <cit.>. The mean values of the J- and D-factors are tabulated in Table <ref>. For Triangulum II, we use the J- and D-factors from <cit.>.

§.§ Systematics

Systematic uncertainties arise from a number of sources within the detector. These effects were carefully investigated in <cit.>. The overall systematic uncertainty on the HAWC data set is on the order of ±50% on the observed flux. The uncertainties on the expected dark matter annihilation and decay limits were calculated to account for these systematic uncertainties. There are additional systematic uncertainties on the expected dark matter flux due to the integration angle of the J- and D-factors. HAWC has an angular resolution between 1^∘ and 0.2^∘ for near-zenith angles. For better angular resolution, the integration angle gets smaller, which makes the J- and D-factors smaller; similarly, for worse angular resolution, the integration angle gets larger, which makes the J- and D-factors larger. However, there is a constraint on the dark matter distribution which limits the dark matter content of a source at angles larger than θ_max. We impose this physically motivated constraint on the J- and D-factor uncertainties. For the combined limit uncertainties, we used the uncertainties corresponding to Segue 1 (42% for annihilation cross-section limits and 38% for decay lifetime limits) since it is one of the dominant sources.
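For orientation, the line-of-sight integral defining J can be evaluated numerically. Below is a minimal Python sketch for an NFW profile with placeholder values of ρ_s, r_s, the distance R and the aperture θ_max; these numbers are illustrative only and are not the CLUMPY inputs used for the limits in this work. A tiny floor on r keeps the integrand finite along the central sightline (the angular integration renders the true J finite).

import numpy as np
from scipy.integrate import dblquad

KPC_CM = 3.086e21                    # cm per kpc
rho_s = 1.5                          # GeV cm^-3, placeholder scale density
r_s = 1.0 * KPC_CM                   # placeholder scale radius (1 kpc)
R = 30.0 * KPC_CM                    # placeholder distance to the source (30 kpc)
theta_max = np.radians(0.5)          # aperture of order the ~0.5 deg resolution

def rho_nfw(r, gamma=1.0, alpha=1.0, beta=3.0):
    x = r / r_s
    return rho_s / (x**gamma * (1.0 + x**alpha)**((beta - gamma) / alpha))

def j_factor():
    # J = int dOmega int dx rho^2(r(theta, x)), dOmega = 2 pi sin(theta) dtheta,
    # with r(theta, x) = sqrt(R^2 - 2 x R cos(theta) + x^2).
    def integrand(x, th):
        r = max(np.sqrt(R**2 - 2.0 * x * R * np.cos(th) + x**2), 1e-3 * r_s)
        return 2.0 * np.pi * np.sin(th) * rho_nfw(r)**2
    val, _ = dblquad(integrand, 0.0, theta_max, 0.0, 2.0 * R)
    return val                        # GeV^2 cm^-5

print(f"J ~ {j_factor():.2e} GeV^2 cm^-5")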
§ LIMITS ON THE DARK MATTER ANNIHILATION CROSS SECTION AND DECAY LIFETIME WITH HAWC DATA

We analyzed the individual and combined limits from 15 dwarf spheroidal galaxies within the HAWC field of view, the M31 galaxy and the Virgo Cluster using 507 days of HAWC data. Considering the angular resolution of the HAWC observatory (∼0.5 degrees) <cit.>, the limits were calculated assuming that the dSphs are point sources. No statistically significant gamma-ray flux has been found for a range of dark matter masses, 1 TeV – 100 TeV, and five dark matter annihilation channels. We therefore calculated 95% confidence level limits on the annihilation cross-section ⟨σ_Av⟩ and decay lifetime τ for the individual sources, using the source significance to determine the exclusion curves. A joint likelihood analysis was also completed by combining the statistics for all 15 dSphs in order to increase the sensitivity of the analysis. In this paper, we only present results for the ττ̅ channel. The rest of the results, together with the explanation of the limit calculations, can be found in <cit.>; flux upper limits for other possible studies were also presented in that paper. Triangulum II has a particularly large J-factor and transits near the zenith for HAWC. However, it was discovered recently <cit.> and still has large uncertainties in its DM profile. Because of this, we show the joint dwarf limit both with and without Triangulum II in Figures <ref> and <ref>. For the bb̅ channel, the Fermi-LAT limit is the most constraining up to ∼4 TeV, followed by the MAGIC Segue 1 limit up to ∼10 TeV. Beyond ∼10 TeV, the HAWC combined dSph limit is the most stringent limit for this channel. The HAWC combined limits with Triangulum II are strongest for the W^+W^- channel for M_χ≳. This result is consistent within uncertainties with the VERITAS Segue 1 limit. For the μ^+μ^- and τ^+τ^- channels, the HAWC combined dSph limits are the strongest above a few TeV. The M31 limits are also comparable with the combined limits with Triangulum II, whereas the Virgo Cluster is not as sensitive as the M31 galaxy (Figure <ref>). Dark matter models with thermal relic and Sommerfeld-enhanced cross-sections are compared. For the Sommerfeld enhancement, a weak-scale coupling of 1/35 and a very conservative dark matter velocity of 300 km/s were assumed. Only the W^+W^- annihilation channel is considered for the Sommerfeld enhancement, since this channel is assured to have dark matter coupled to gauge bosons <cit.>. At resonances, the HAWC limit rules out dark matter with a mass of ∼4 TeV, and the HAWC limit approaches the corresponding Sommerfeld-enhanced models to within 1 order of magnitude for a mass of ∼20 TeV. A slower dark matter velocity enhances the amplitude of the resonances, thus bringing the HAWC results closer to the Sommerfeld-enhanced thermal relic. Figure <ref> shows the 15 individual dSph limits, the combined limit, and the M31 and Virgo Cluster limits. As for the dark matter annihilation results, Segue 1, Coma Berenices, and Triangulum II are dominant, though for decays Bootes I and Draco also contribute to the combined limits. This is due to the fact that dark matter decay is related to ∫ρ (the total dark matter mass), compared to ∫ρ^2 for annihilation at the source. The strongest lifetime lower limit is obtained with the τ^+τ^- channel. The Virgo Cluster results are within a factor of 2–3 of the combined dSph limits.
Despite not providing good limits for annihilation, the Virgo Cluster yields good decay lifetime limits owing to the total DM mass in the cluster.

§ SUMMARY

We present 95% CL limits on the annihilation cross section and the decay lifetime for 15 dwarf spheroidal galaxies within the HAWC field of view, the M31 galaxy and the Virgo Cluster. A combined limit is also shown from a stacked analysis of all dwarf spheroidal galaxies. These are the limits also shown in <cit.>. The HAWC collaboration is improving its analysis tools to enhance the energy and angular resolution. In addition, with more data collected, HAWC is expected to be more sensitive at lower dark matter masses.

§ ACKNOWLEDGMENTS

We acknowledge the support from: the US National Science Foundation (NSF); the US Department of Energy Office of High-Energy Physics; the Laboratory Directed Research and Development (LDRD) program of Los Alamos National Laboratory; Consejo Nacional de Ciencia y Tecnología (CONACyT), Mexico (grants 260378, 232656, 55155, 105666, 122331, 132197, 167281, 167733); Red de Física de Altas Energías, Mexico; DGAPA-UNAM (grants IG100414-3, IN108713, IN121309, IN115409, IN111315); VIEP-BUAP (grant 161-EXC-2011); the University of Wisconsin Alumni Research Foundation; the Institute of Geophysics, Planetary Physics, and Signatures at Los Alamos National Laboratory; the Luc Binette Foundation UNAM Postdoctoral Fellowship program.
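To make the limit-setting logic concrete, the flux relations of the previous section can be inverted: an integrated flux upper limit F_UL maps onto an upper limit on ⟨σ_Av⟩ and a lower limit on τ. A Python sketch with placeholder inputs follows; the J- and D-factors, flux limit and spectrum integral below are hypothetical values chosen for illustration, not HAWC results.

import numpy as np

J = 1.0e19       # GeV^2 cm^-5, placeholder J-factor
D = 1.0e18       # GeV cm^-2, placeholder D-factor
M_chi = 1.0e4    # dark matter mass in GeV (10 TeV)
F_UL = 1.0e-13   # photons cm^-2 s^-1, placeholder integrated flux upper limit
N_gamma = 10.0   # photons per annihilation/decay above threshold (placeholder)

# Integrating dF/dE over energy gives F = <sigma_A v> J N_gamma / (8 pi M_chi^2), hence
sigmav_UL = 8.0 * np.pi * M_chi**2 * F_UL / (J * N_gamma)    # cm^3 s^-1
# and F = D N_gamma / (4 pi tau M_chi), hence
tau_LL = D * N_gamma / (4.0 * np.pi * M_chi * F_UL)          # seconds

print(f"<sigma v> upper limit ~ {sigmav_UL:.2e} cm^3/s")
print(f"lifetime lower limit  ~ {tau_LL:.2e} s")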
http://arxiv.org/abs/1708.07461v2
{ "authors": [ "Tolga Yapici", "Andrew J. Smith" ], "categories": [ "astro-ph.HE", "astro-ph.CO", "hep-ex", "hep-ph" ], "primary_category": "astro-ph.HE", "published": "20170824153941", "title": "Dark Matter Searches with HAWC" }
e1e-mail: [email protected] e2e-mail: [email protected] Department of Physics, Faculty of Science, Akdeniz University, 07058 Antalya, Turkey Scalar-Tensor Teleparallel Gravity With Boundary Term by Noether Symmetries Ganim Gecim e1,addr1, Yusuf Kucukakca e2,addr1 December 30, 2023 =========================================================================== In the framework of teleparallel gravity, the Friedmann-Robertson-Walker cosmological model is studied within a scalar tensor theory where the scalar field is non-minimally coupled to both the torsion scalar and the boundary term. Utilizing the Noether symmetry approach in such a theory, we obtain the explicit forms of the couplings and the potential as functions of the scalar field. We present some important cosmological solutions of the modified field equations using these functions obtained via the Noether symmetry approach. Finally, the interesting cosmological properties of these solutions are discussed in detail, and it is shown that they can describe a universe leading to the late-time accelerating expansion.

§ INTRODUCTION

Currently, one of the great problems of modern cosmology is to understand the late-time accelerated expansion of the universe. The accelerated expansion of the universe has been confirmed by several astrophysical observations, such as observations of Type Ia supernovae (SNe Ia) <cit.>, the cosmic microwave background radiation (CMB) <cit.>, and large-scale structure <cit.>. Despite all these observations, this cosmological effect is not compatible with the equations of Einstein's standard general theory of relativity. Therefore, the solutions proposed to explain the nature of the late-time accelerated expansion can be categorized into two classes in the literature. The first approach is to add an exotic fluid with negative pressure, so-called dark energy, to the matter part of the Einstein field equations. In this way, various dynamical dark energy models with different kinds of scalar fields, such as quintessence <cit.>, phantom <cit.>, quintom <cit.>, fermionic fields <cit.> and vector fields <cit.>, have been put forward to explain the accelerating expansion of the universe. The second one is to modify the geometric part of the Einstein field equations at high energy levels, so that the dark energy comes out of these modifications. Such modifications in the framework of General Relativity (GR) are known as f(R) gravity in the metric <cit.> and Palatini <cit.> formalisms, f(G) gravity <cit.> and f(R,G) gravity <cit.>, where f(R) and f(G) are arbitrary functions of the curvature scalar and the Gauss-Bonnet invariant, respectively. Teleparallel gravity (TG) is an alternative gravitational theory to GR. Although this theory gives the same field equations as GR, the geometric formulations of the two theories are different. On the one hand, GR is constructed from the curvature defined by the symmetric Levi-Civita connection, which yields vanishing torsion. Teleparallel gravity, on the other hand, considers a nonsymmetric Weitzenböck connection that has no curvature but only torsion. In other words, one can say that GR uses curvature to geometrize space-time, while the teleparallel equivalent of GR uses torsion to explain gravitational effects. It may be noted that, in order to define the Weitzenböck connection, TG uses tetrad fields as dynamical objects, whereas, in order to define the Levi-Civita connection, GR uses metric fields as dynamical variables.
One of the interesting modified gravity models is f(T) gravity, where f(T) is an arbitrary function of the torsion scalar T. Many features of the f(T) gravity model have been considered in the literature, especially in order to explain the accelerated expansion of the universe through torsion <cit.>. Furthermore, some interesting black hole solutions have been found and studied; see for example <cit.>. Of particular interest are gravitational models allowing for nonminimal couplings between the scalar field and the gravitational part. For example, one may consider a model of gravity with a scalar field non-minimally coupled to the curvature scalar in the form ξRϕ^2, where ξ is a coupling constant and R is the Ricci scalar <cit.>. Such nonminimal couplings in the framework of GR have been studied in different contexts in the literature. This coupling naturally appears when quantum corrections are considered, and it is essential for the renormalizability of the scalar field theory in curved space <cit.>. Moreover, this coupling has been used to explain both the early-time inflation and the late-time cosmic acceleration of the universe in the metric and Palatini formalisms <cit.>. On the other hand, an alternative gravitational scenario, the so-called teleparallel dark energy, has been presented in the framework of TG in Ref. <cit.>, where the authors considered the scalar field to be nonminimally coupled to the torsion in the form ξTϕ^2. Considering the nonminimal coupling between the scalar field and the torsion scalar opens a new window in analysing the cosmological evolution of the universe. It has been shown that this model has a richer structure, exhibiting quintessence-like or phantom-like behavior, or experiencing the phantom-divide crossing. Other studies of the model discuss parameter fits with cosmological observations <cit.>, phase-space analysis <cit.>, the Noether symmetry approach <cit.>, the growth of density perturbations <cit.>, the possibility of singularities <cit.> and spherically symmetric solutions <cit.>. We note that there are some interesting models including fermionic and tachyonic fields nonminimally coupled to the torsion that discuss the cosmic evolution of the universe <cit.>. Recently, Bahamonde and Wright <cit.> proposed a different model in the teleparallel gravity framework by introducing a scalar field non-minimally coupled to the torsion scalar T as well as to the boundary term B, corresponding to the divergence of the torsion vector, B=(2/e)∂_μ(eT^μ). The model reduces to both nonminimally coupled teleparallel gravity and nonminimally coupled general relativity in certain limits. There it was found that this model generically yields a late-time accelerating attractor solution without requiring any fine tuning of the parameters. The parameterized post-Newtonian approximation <cit.> as well as thermodynamic aspects <cit.> of this model have been studied. Using the Noether symmetry technique for this model, new wormhole solutions according to the Morris and Thorne paradigm have been derived <cit.>. In Ref. <cit.>, the behavior and stability of the scaling solutions are studied for scalar fields endowed with inverse power law potentials and with exponential potentials for this model. One of the most popular methods of finding exact solutions of nonlinear higher-order differential equations is to use the Noether symmetry approach.
A Noether symmetry is associated with the Lagrangian of the field equations, and it guarantees the existence of conserved quantities that allow one to reduce the dynamics thanks to the presence of cyclic variables <cit.>. Moreover, the existence of this symmetry leads to specific forms of the unknown functions that appear in the Lagrangian. The method has been used to obtain cosmological models in several alternative theories of gravity, for example, scalar tensor theory <cit.>, f(R) theory <cit.>, f(T) theory <cit.>, theories of gravity with a fermionic field <cit.> and others <cit.>. With the help of Noether symmetry, some analytical cosmological solutions for the scalar tensor TG were obtained in Ref. <cit.>. Having the above points in mind, the main goal of this paper is to explore the Noether symmetries of a scalar tensor theory of gravity in which the scalar field is non-minimally coupled to both the torsion scalar and the boundary term. We have determined physically interesting forms of the coupling functions and the potential from the existence of a Noether symmetry, and found exact solutions of the field equations in order to discuss the cosmic evolution of the universe via cosmological parameters. This paper is organized as follows. In Section <ref>, we give a basic formulation of the scalar tensor teleparallel gravity including a boundary term. In Section <ref>, we discuss the Noether symmetries of the flat Friedmann-Robertson-Walker space-time in the context of the considered model. We search for cosmological solutions by using the forms of the coupling functions and potential obtained through the Noether symmetry approach. Finally, in Section <ref>, we conclude with a brief summary.

§ TELEPARALLEL THEORY WITH A NON-MINIMAL COUPLING TO A BOUNDARY TERM

We will now consider the following action, which describes a scalar field non-minimally coupled to both the torsion scalar and the boundary term <cit.>:

S = ∫ d^4x e [(1/(2κ^2))(f(ϕ)T + g(ϕ)B) + (1/2)∂_μϕ∂^μϕ - V(ϕ)].

Here the boundary term is defined in terms of the divergence of the torsion vector, B=(2/e)∂_μ(eT^μ), e = det(e^a_μ) = √(-g) is the volume element of the metric, V(ϕ) is the scalar field potential, and f(ϕ) and g(ϕ) are generic functions describing the coupling between the scalar field and the torsion scalar and the boundary term, respectively. In fact, such a coupling of the scalar field to the torsion scalar and boundary term is not a new idea. For example, one can choose the coupling functions as f(ϕ)=1-ξϕ^2 and g(ϕ)=χϕ^2, where ξ and χ are coupling constants. For χ=0, one recovers the action of the so-called teleparallel dark energy model, in which the scalar field is coupled to the torsion scalar. For ξ=0, the action corresponds to gravity including the boundary term. When one sets ξ=-χ, one recovers an action in which the scalar field is non-minimally coupled to the Ricci scalar. The minimally coupled quintessence theories arise when we take ξ=χ=0. Additionally, the teleparallel scalar tensor theory without the boundary term is recovered if we set g(ϕ)=0 in the action (<ref>). We note that any possible coupling of the scalar field with the ordinary matter Lagrangian is disregarded, and we use κ^2=1. By varying the above action with respect to the tetrad field, one obtains the following gravitational field equations <cit.>:

2f(ϕ)[e^{-1}∂_μ(eS_a^μν) - e_a^λ T^ρ_μλ S_ρ^νμ - (1/4)e^ν_a T] + e^ν_a [(1/2)∂_μϕ∂^μϕ - V(ϕ)] - e^μ_a ∂^νϕ∂_μϕ + e^ν_a □g(ϕ) + 2[∂_μf(ϕ) + ∂_μg(ϕ)] e^ρ_a S_ρ^μν - e^μ_a ∇^ν∇_μ g(ϕ) = 0,
On the other hand, the variation of the action (<ref>) with respect to the scalar field ϕ gives rise to the Klein-Gordon equation governing the dynamics of the scalar fieldϕ+V'(ϕ)-f'(ϕ)T-g'(ϕ)B=0.where the prime indicates the derivative with respect to ϕ.Now, we consider the four-dimensional spatially flat Friedmann Robertson Walker (FRW) space-time with the metricds^2=dt^2-a^2(t)(dx^2+dy^2+dz^2),where a(t) is the scale factor of the universe. The corresponding tetrad components for the FRW metric aree^i_μ=(1,a(t),a(t),a(t)). With this definition of the tetrad field, the torsion scalar and the boundary term can be expressed in terms of the scale factor and its time derivatives as follows <cit.>T=-6ȧ^2/a^2, B=-6(ä/a+2ȧ^2/a^2),where dot denotes differentiation with respect to the time coordinate. Inserting the tetrad components for the flat FRW metric into the action (<ref>) and using the equations (<ref>) we find a point-like Lagrangian as follows,ℒ=-3faȧ^2+3g'a^2ȧϕ̇+a^3(ϕ̇^2/2-V(ϕ)).From the Euler-Lagrange equation for the scale factor a applied to the above Lagrangian, we obtain the following acceleration equation2Ḣ+3H^2=-p_ϕ/f.Here, H=ȧ/a is the Hubble parameter. The modified Friedmann equation is obtained by imposing that the energy function E_L associated with the Lagrangian (<ref>) vanishes, i.e.E_L=ȧ∂L/∂ȧ+ϕ̇∂L/∂ϕ̇=0 ⇒ H^2=ρ_ϕ/3f,In the equations (<ref>) and (<ref>), the energy density and the pressure of the scalar field ρ_ϕ and p_ϕ are respectively defined as followρ_ϕ=1/2ϕ̇^2+V+3g'Hϕ̇, p_ϕ=1/2ϕ̇^2-V+2f'Hϕ̇-g'ϕ̈-g”ϕ̇^2.These two expressions define an effective equation of state parameter ω_ϕ=p_ϕ/ρ_ϕ, which drives the behavior of the cosmological model. Finally, from the Euler-Lagrange equation for the scalar field ϕ by using the Lagrangian (<ref>), the modified Klein-Gordon equation takes the formϕ̈+3Hϕ̇=1/2(f'T+g'B)-V. It is clear that to obtain some cosmological solutions to the modified field equations, first of all one has to determines for a form of the potential function V(ϕ) and the coupling functions f(ϕ) and g(ϕ) of the scalar field. In the next section, we will fix this issue by demanding that the point-like Lagrangian of the action (<ref>) satisfies the Noether symmetry condition.§ NOETHER SYMMETRY APPROACH AND COSMOLOGICAL SOLUTIONS In theoretical physics, it is important to develop techniques to findthe solutions of non-linear equations system. Noether symmetry approach has become an important technique to solve such a system. This approach provides an systematic way to find conserved quantities for a given Lagrangian. At the same time, as a physical criterion, this approach also allows one to select the unknown functions in gravity models. Now, we seek Noether symmetries for the Lagrangian (<ref>). The Noether symmetry generator is a vector field on the tangent space 𝒯 𝒬=(a,ϕ,ȧ,ϕ̇) defined byX=α∂/∂ a+β∂/∂ϕ+ α̇∂/∂ȧ+β̇∂/∂ϕ̇,where α and β are both functions of the generalized coordinates a and ϕ. The Noether symmetry then implies that the Lie derivative of the Lagrangian with respect to this vector field vanishes, that is, L_Xℒ=0, which leadsL_Xℒ=Xℒ=α∂ℒ/∂ a+β∂ ℒ/∂ϕ+ α̇∂ℒ/∂ȧ+β̇∂ℒ/∂ϕ̇=0.In general, the Noether symmetry condition leads to an expression of second degree in the velocities (ȧ and ϕ̇) with coefficients being partial derivatives of α and β with respect to the variables a and ϕ. Thus, the resulting expression is identically equal to zero if and only if these coefficients are zero. This gives us a set of partial differential equations for α and β. 
For the Lagrangian (<ref>), the Noether symmetry condition (<ref>) yields the following system of partial differential equations:

f(α + 2a ∂α/∂a) + f'aβ - g'a^2 ∂β/∂a = 0,

g'(2α + a ∂α/∂a + a ∂β/∂ϕ) + g''aβ - 2f ∂α/∂ϕ + (a^2/3) ∂β/∂a = 0,

α + 2g' ∂α/∂ϕ + (2a/3) ∂β/∂ϕ = 0,

3Vα + aV'β = 0.

We solve this system of equations to find α, β, f(ϕ), g(ϕ) and V(ϕ). Since the system is difficult to solve, we first choose the potential proportional to the square of the scalar field and then use the separation of variables technique. Therefore, we obtain the following non-trivial solution of the set of differential equations (<ref>)-(<ref>):

α = -(2α_0/3) a^{n+1}, β = α_0 a^n ϕ,

f(ϕ) = -(3/8)ϕ^2 - (3c_1/2) ϕ^{2(n+3)/3},

g(ϕ) = (1/4)ϕ^2 + (3c_1/(2(n+3))) ϕ^{2(n+3)/3} + c_2,

V(ϕ) = λϕ^2,

where c_1, c_2, α_0, λ and n are integration constants and n≠-3. From the values of the symmetry generator coefficients (<ref>), the Noether symmetry generator is given by

X = -(2α_0/3) a^{n+1} ∂/∂a + α_0 a^n ϕ ∂/∂ϕ.

For the special case n=-3, the Noether symmetry equations (<ref>)-(<ref>) give the following solutions:

α = -(2α_0/3) a^{-2}, β = α_0 a^{-3} ϕ,

f(ϕ) = -(3/8)ϕ^2 + 3c_3/ϕ^2 - 3c_1/2,

g(ϕ) = (1/4)ϕ^2 + c_1 ln(ϕ) + c_2,

with the same potential function given by Eq. (<ref>). In this part we attempt to solve the basic cosmological equations of the scalar tensor teleparallel gravity model with a boundary term analytically. In order to integrate the dynamical system (<ref>), (<ref>) and (<ref>), we search for a cyclic variable associated with the Noether symmetry generator (<ref>). We therefore introduce two arbitrary functions z=z(a,ϕ) and u=u(a,ϕ). The transformed Lagrangian is cyclic in one of the new variables, so that the Lagrangian expressed in the new variables produces a reduced dynamical system which is generally solvable. Utilizing the relations i_X dz = 1 and i_X du = 0, where i_X is the interior product operator with X, we obtain the differential equations

α ∂z/∂a + β ∂z/∂ϕ = 1, α ∂u/∂a + β ∂u/∂ϕ = 0,

respectively. A general discussion of this issue can be found in <cit.>. Inserting the values of α and β given by (<ref>) into equations (<ref>) and (<ref>), we obtain the following solutions:

z = (3/(2nα_0)) a^{-n}, u = a^{3/2} ϕ,

where n≠0. For the case n=0, the coupling function f(ϕ) is proportional to the square of the scalar field, and taking c_1=-1/2 the coupling function g(ϕ) vanishes. Thus the action (<ref>) reduces to the teleparallel dark energy model that was analysed in depth using the Noether symmetry approach in our previous work <cit.>. Correspondingly, the scale factor and scalar field can be expressed as

a = ((2nα_0/3) z)^{-1/n}, ϕ = u ((2nα_0/3) z)^{3/(2n)}.

Under this transformation, and considering the coupling functions (<ref>) and the potential (<ref>), the Lagrangian (<ref>) in terms of the new variables takes the suitable form

ℒ = u̇^2/2 - 2c_1α_0 u^{(2n+3)/3} u̇ż - λu^2,

in which one can easily see that z is a cyclic variable. The new Lagrangian provides the following equations of motion:

-2c_1α_0 u^{(2n+3)/3} u̇ = I_0,

ü - 2c_1α_0 u^{(2n+3)/3} z̈ + 2λu = 0,

u̇^2/2 - 2c_1α_0 u^{(2n+3)/3} u̇ż + λu^2 = 0,

where I_0 is a constant of motion. Equation (<ref>) can easily be integrated to give

u(t) = (-(I_0(n+3)/(3α_0c_1)) t + b_1)^{3/(2(n+3))},

where b_1 is an arbitrary constant of integration and n≠-3. Firstly, we consider the case n=-3. For this case, the Noether symmetry approach yields the coupling functions and potential given by Eqs. (<ref>) and (<ref>) with the symmetry generator coefficients (<ref>). Now, we can easily solve Eqs. (<ref>)-(<ref>) for z and u. Utilizing the obtained solution for z and Eq.
Utilizing the obtained solution for z and Eq. (<ref>), we find the scale factor as a(t)^3 = a_0e^(-I_0t/(2c_1α_0)) + c_2, where a_0 and c_2 are constants built from combinations of the other constants. For c_2=0 this solution gives us the de Sitter universe. Secondly, in the general case, by inserting the solution (<ref>) into Eqs. (<ref>) and (<ref>) we obtain the following solution for z,

z(t) = [ (8λn(I_0(n+3)t - 3α_0c_1b_1)^2 - 9I_0^2(n+6)) / (24I_0^2α_0c_1n(n+6)) ] × (ℓt + b_1)^(-n/(n+3)) + b_2,

where b_2 is another constant of integration, we define ℓ = -I_0(n+3)/(3α_0c_1), and n≠-6 and I_0≠0. Therefore, the exact solutions for the scale factor and the scalar field can be written as

a(t) = [ (8λn(I_0(n+3)t - 3α_0c_1b_1)^2 - 9I_0^2(n+6)) / (36I_0^2c_1(n+6)) ]^(-1/n) × (ℓt + b_1)^(1/(n+3)),

ϕ(t) = [ (8λn(I_0(n+3)t - 3α_0c_1b_1)^2 - 9I_0^2(n+6)) / (36I_0^2c_1(n+6)) ]^(3/(2n)) × (ℓt + b_1)^(-3/(n(n+3))),

where we take b_2 to be zero without loss of generality. It is very difficult to analyze the solution (<ref>) in this form. Hence, we follow the procedure used in Ref. <cit.> by setting the present time t_0=1 (see also the detailed description of this procedure therein). We first assume that a(0)=0, which fixes the origin of time. With the use of the scale factor (<ref>), this condition gives b_1=0 for n>0, or 8λn(α_0c_1b_1)^2 - I_0^2(n+6)=0 with b_1≠0 for n<0. At this point we restrict ourselves to the case b_1=0 for n>0. The second condition is to set a_0 ≡ a(t_0=1)=1, which is standard; finally, the Hubble parameter is constrained by H(t_0=1) ≡ ℋ_0. Because of this choice of time unit, our ℋ_0 is not the same as the H_0 that appears in the standard FRW model. By means of these choices, we obtain the scale factor and the Hubble parameter as follows

a(t) = [ (c_1m - n[ℋ_0(n+3)-1]t^2) / (2(n+3)) ]^(-1/n) t^(1/(n+3)),

H(t) = ( m + (n+6)[ℋ_0(n+3)-1]t^2 ) / ( (n+3)[ mt - n(ℋ_0(n+3)-1)t^3 ] ),

where we define m = n[ℋ_0(n+3)+1]+6. The deceleration parameter, defined by q = -äa/ȧ^2, is useful for studying the current expansion of the universe: the universe expands in an accelerated manner for q<0, while q>0 means a decelerated expansion. For our model this parameter turns out to be

q = -1 - n(n+3)(n+6)[ℋ_0(n+3)-1]^2t^4 / [m + (n+6)[ℋ_0(n+3)-1]t^2]^2 + (n+3)[m^2 - 2m(2n+3)[ℋ_0(n+3)-1]t^2] / [m + (n+6)[ℋ_0(n+3)-1]t^2]^2.

On the other hand, using the energy density (<ref>) and pressure (<ref>) of the scalar field, the equation of state parameter takes the following form in the model

ω_ϕ = -1 - 2n(n+3)(n+6)[ℋ_0(n+3)-1]^2t^4 / (3[m + (n+6)[ℋ_0(n+3)-1]t^2]^2) + 2(n+3)[m^2 - 2m(2n+3)[ℋ_0(n+3)-1]t^2] / (3[m + (n+6)[ℋ_0(n+3)-1]t^2]^2).

The graphical analysis of the scale factor versus cosmic time t for different values of n is presented in Figure (<ref>). As one can see from this figure, the universe describes an expansionary phase, with the scale factor a monotonically increasing function of time. The deceleration parameter, depicted in Figure (<ref>), indicates the existence of a transition from a decelerating phase to an accelerating one. Figure (<ref>) shows the behavior of the equation of state parameter with respect to cosmic time t for different values of n. From this figure we observe that the crossing of the phantom divide line ω_ϕ=-1, from the quintessence phase ω_ϕ>-1 to the phantom phase ω_ϕ<-1, can be realised in our model described by the Noether symmetry solution. We also note that both cosmological parameters go to the value -1 in the large-time limit for small values of the parameter n.

Now, we return to the solution of the field equations (<ref>)-(<ref>) in the special case of the constant of motion I_0=0. It can be seen below that this case has an interesting solution.
If I_0=0, then from Eq. (<ref>) we get the solution u(t)=u_0, where u_0 is a constant. This solution satisfies Eq. (<ref>) if λ=0, which gives a scalar-tensor teleparallel model with a boundary term but without a scalar potential. From Eq. (<ref>) the variable z(t) is solved as z(t)=z_0t+z_1, where z_0 and z_1 are integration constants. Therefore, for I_0=0 the scale factor and the scalar field are obtained as follows

a(t) = [ (2nα_0/3)(z_0t+z_1) ]^(-1/n), ϕ(t) = u_0[ (2nα_0/3)(z_0t+z_1) ]^(3/(2n)).

From these considerations it is easy to realize that any power-law solution can be achieved according to the value of n. For example, a pressureless matter solution a(t) ∼ t^(2/3) is recovered for n=-3/2, and a radiation solution a(t) ∼ t^(1/2) for n=-2. The deceleration parameter takes the form

q = -1 - n,

which means that the acceleration of the expansion is controlled directly by n. The equation of state parameter becomes

ω_ϕ = -1 - 2n/3.

From Eqs. (<ref>) and (<ref>), if -1<n<0 then we have q<0 and -1<ω_ϕ<-1/3, which corresponds to a universe in the quintessence phase, so that the universe is both expanding and accelerating in this case. If n<-1, then we obtain q>0 and ω_ϕ>-1/3, which corresponds to a universe with decelerating expansion. On the other hand, if n>0, then we have q<0 and ω_ϕ<-1, which corresponds to a universe in the phantom phase, so that the universe is accelerating but shrinking in this case. We also note that the limit n→0 in Eq. (<ref>) corresponds to the limit ω_ϕ→-1, which is consistent with the ΛCDM model.

§ SUMMARY AND CONCLUSION

There exist several methods to investigate the integrability of a dynamical system. In this study, we chose to find the Noether symmetries of the point-like Lagrangian of a scalar-tensor teleparallel gravity theory in order to obtain the conserved quantities. For that gravitational Lagrangian, we considered a model including a scalar field which is nonminimally coupled to the torsion and to the boundary term, where the boundary term represents the divergence of the torsion vector. This model is important since it shows some interesting aspects in cosmology and in describing the late-time acceleration of the Universe. Furthermore, the model reduces to theories such as quintessence, teleparallel dark energy and a scalar field non-minimally coupled to the Ricci scalar in the suitable limits.

As mentioned above, the Noether symmetry approach is important because it can be considered as a physically motivated criterion, in that such symmetries are always related to conserved quantities. The existence of a Noether symmetry also restricts the forms of the unknown functions in a given Lagrangian (in particular the coupling functions and the scalar potential), and allows us to find a transformation, given by (<ref>), in which the scale factor and the scalar field are written in terms of new dynamical variables, one of which is cyclic. Under these transformations, the Lagrangian is reduced to a simpler form. In this study, the equations of motion of the considered model for the FRW space-time background have been obtained in the form of equations (<ref>)-(<ref>). By applying the Noether symmetry approach, we have found the explicit forms of the coupling functions f(ϕ) and g(ϕ) and of the scalar field potential V(ϕ), as in Eqs. (<ref>) and (<ref>) respectively. By introducing cyclic variables, we have found some exact cosmological solutions of the corresponding field equations using these forms obtained from the existence of the Noether symmetry. The properties of the cosmological parameters relevant to the solutions have been analyzed in detail.
The main and interesting feature of these solutions is that they describe an accelerating expansion of the universe. We have also observed that the behavior of the equation of state parameter shows that the crossing of the phantom divide line can be realized (see Fig. (<ref>)). Therefore, it is important to investigate Noether symmetries of teleparallel dark energy models with a boundary term in order to explain the late-time acceleration of the universe.

This work was supported by Akdeniz University, Scientific Research Projects Unit.

[riess99] A.G. Riess et al., Astrophys. J. 116, 1009-1038 (1998)
[Perll] S. Perlmutter et al., Astrophys. J. 517, 565-586 (1999)
[Sperl] D.N. Spergel et al., Astrophys. J. Suppl. 148, 213 (2003)
[netterfield02] C.B. Netterfield et al., Astrophys. J. 571, 664 (2002)
[bennet] C.N. Bennett et al., Astrophys. J. Suppl. 148, 1 (2003)
[tegmark04] M. Tegmark et al., Phys. Rev. D 69, 103501 (2004)
[Rat] B. Ratra and P.J.E. Peebles, Phys. Rev. D 37, 3406-3427 (1988)
[Zlatev] I. Zlatev, L.M. Wang and P.J. Steinhardt, Phys. Rev. Lett. 82, 896-899 (1999)
[Cad] R.R. Caldwell, Phys. Lett. B 545, 23-29 (2002)
[Guo] Z.K. Guo, Y.S. Piao, X.M. Zhang and Y.Z. Zhang, Phys. Lett. B 608, 177-182 (2005)
[Ar] C. Armendariz-Picon and P.B. Greene, Gen. Relativ. Gravit. 35, 1637 (2003)
[Rib] M.O. Ribas, F.P. Devecchi and G.M. Kremer, Europhys. Lett. 81, 19001 (2008)
[Kremer] R.C. De Souza and G.M. Kremer, Class. Quant. Grav. 25, 225006 (2008)
[kuc1] Y. Kucukakca, Eur. Phys. J. C 74, 3086 (2014)
[kuc2] G. Gecim, Y. Kucukakca and Y. Sucu, Adv. High Energy Phys. 2015, 567395 (2015)
[gec] G. Gecim and Y. Sucu, Adv. High Energy Phys. 2017, 2056131 (2017)
[r1] V.V. Kiselev, Class. Quant. Grav. 21, 3323 (2004)
[r2] C.G. Boehmer and T. Harko, Eur. Phys. J. C 50, 423-429 (2007)
[Sot] T.P. Sotiriou and V. Faraoni, Rev. Mod. Phys. 82, 451-497 (2010)
[fel] A. De Felice and S. Tsujikawa, Living Rev. Rel. 13, 3 (2010)
[noji] S. Nojiri and S.D. Odintsov, Phys. Rept. 505, 59-144 (2011)
[pala] E.E. Flanagan, Phys. Rev. Lett. 92, 071101 (2004)
[ama] M. Amarzguioui, O. Elgaroy, D.F. Mota and T. Multamaki, Astron. Astrophys. 454, 707-714 (2006)
[olm] G.J. Olmo, Int. J. Mod. Phys. D 20, 413-462 (2011)
[c1] Y. Kucukakca and U. Camci, Astrophys. Space Sci. 338, 211-216 (2012)
[c2] Y. Kucukakca, Astrophys. Space Sci. 361, 80 (2016)
[odi] S. Nojiri and S.D. Odintsov, Phys. Lett. B 631, 1-7 (2005)
[s1] M. Sharif and I. Fatima, J. Exp. Theor. Phys. 122, 104 (2016)
[c3] E. Elizalde, R. Myrzakulov, V.V. Obukhov and D. Saez-Gomez, Class. Quant. Grav. 27, 095007 (2010)
[c4] R. Myrzakulov, L. Sebastiani and S. Zerbini, Int. J. Mod. Phys. D 22, 1330017 (2013)
[c5] S. Capozziello, M. Laurentis and S.D. Odintsov, Mod. Phys. Lett. A 29, 1450164 (2014)
[c6] M. Laurentis, M. Paolella and S. Capozziello, Phys. Rev. D 91, 083531 (2015)
[c7] M.F. Shamir and F. Kanwal, Eur. Phys. J. C 77, 286 (2017)
[ben] G.R. Bengochea and R. Ferraro, Phys. Rev. D 79, 124019 (2009)
[myr] R. Myrzakulov, Eur. Phys. J. C 71, 1752 (2011)
[pali1] S. Basilakos, S. Capozziello, M. De Laurentis, A. Paliathanasis and M. Tsamparlis, Phys. Rev. D 88, 103526 (2013)
[pali2] A. Paliathanasis, J.D. Barrow and P.G.L. Leach, Phys. Rev. D 94, 023525 (2016)
[yi] Y.F. Cai, S. Capozziello, M. De Laurentis and E.N. Saridakis, Rept. Prog. Phys. 79, 106901 (2016)
[pali3] A. Paliathanasis, S. Basilakos, E.N. Saridakis, S. Capozziello, K. Atazadeh, F. Darabi and M. Tsamparlis, Phys. Rev. D 89, 104042 (2014)
[na] G.G.L. Nashed, Phys. Rev. D 88, 104034 (2013)
[m1] N.D. Birrell and P.C.W. Davies, Phys. Rev. D 22, 322 (1980)
[m2] C.G. Callan, Jr., S.R. Coleman and R. Jackiw, Annals Phys. 59, 42 (1970)
[kit] N.D. Birrell and P.C.W. Davies, Quantum Fields in Curved Space, Cambridge University Press, Cambridge (1984)
[q1] L.F. Abbott, Nucl. Phys. B 185, 233 (1981)
[q2] T. Futamase and K.i. Maeda, Phys. Rev. D 39, 399 (1989)
[q3] C. Pallis and N. Toumbas, JCAP 12, 002 (2011)
[q4] V. Sahni and S. Habib, Phys. Rev. Lett. 81, 1766 (1998)
[q5] V. Faraoni, Phys. Rev. D 62, 023504 (2000)
[q6] E. Elizalde, S. Nojiri and S. Odintsov, Phys. Rev. D 70, 043539 (2004)
[q7] P. Wang, P. Wu and H. Yu, Eur. Phys. J. C 72, 2245 (2012)
[geng1] C.-Q. Geng, C.-C. Lee, E.N. Saridakis and Y.-P. Wu, Phys. Lett. B 704, 384 (2011)
[Wei1] H. Wei, Phys. Lett. B 712, 430 (2012)
[ota1] G. Otalora, J. Cosmol. Astropart. Phys. 07, 044 (2013)
[sad] H.M. Sadjadi, Phys. Rev. D 87, 064028 (2013)
[geng4] J.T. Li, Y.-P. Wu and C.-Q. Geng, Phys. Rev. D 89, 044040 (2014)
[ss1] M.A. Skugoreva, E.N. Saridakis and A.V. Toporensky, Phys. Rev. D 91, 044023 (2015)
[ja] L. Jarv and A.V. Toporensky, Phys. Rev. D 93, 024051 (2016)
[ds] M.A. Skugoreva and A.V. Toporensky, Eur. Phys. J. C 76, 340 (2016)
[faz] B. Fazlpour, Gen. Rel. Grav. 48, 159 (2016)
[sad1] H.M. Sadjadi, JCAP 01, 031 (2017)
[t1] Y.P. Wu, Phys. Lett. B 762, 157 (2016)
[geng2] C.-Q. Geng, C.-C. Lee and E.N. Saridakis, J. Cosmol. Astropart. Phys. 01, 002 (2012)
[Gu] J.A. Gu, C.-C. Lee and C.-Q. Geng, Phys. Lett. B 718, 722 (2013)
[Xu1] C. Xu, E.N. Saridakis and G. Leon, J. Cosmol. Astropart. Phys. 07, 005 (2012)
[kucuk] Y. Kucukakca, Eur. Phys. J. C 73, 2327 (2013)
[srf] M. Sharif and I. Shafiqu, Phys. Rev. D 90, 084033 (2014)
[gng] C.-Q. Geng and Y.-P. Wu, JCAP 1304, 033 (2013)
[geng3] C.-Q. Geng, J.A. Gu and C.-C. Lee, Phys. Rev. D 88, 024030 (2013)
[ss] G. Kofinas, E. Papantonopoulos and E.N. Saridakis, Phys. Rev. D 91, 104034 (2015)
[Bani] A. Banijamali and B. Fazlpour, Astrophys. Space Sci. 342, 229-235 (2012)
[ota2] G. Otalora, Phys. Rev. D 88, 063505 (2013)
[faz2] B. Fazlpour and A. Banijamali, Adv. High Energy Phys. 2015, 283273 (2015)
[mot] H. Motavalli, A.R. Akbarieh and M. Nasiry, J. Exp. Theor. Phys. 123, 33 (2016)
[ba1] S. Bahamonde and M. Wright, Phys. Rev. D 92, 084034 (2015)
[sa] H.M. Sadjadi, Eur. Phys. J. C 77, 191 (2017)
[ba2] M. Zubair and S. Bahamonde, arXiv:1604.02996 [gr-qc]
[ba3] S. Bahamonde, U. Camci, S. Capozziello and M. Jamil, Phys. Rev. D 94, 084042 (2016)
[t2] M. Marciu, Int. J. Mod. Phys. D 26, 1750103 (2017)
[rit1] R. de Ritis, G. Marmo, G. Platania, C. Rubano, P. Scudellaro and C. Stornaiolo, Phys. Rev. D 42, 1091 (1990)
[rit2] S. Capozziello, R. de Ritis, C. Rubano and P. Scudellaro, Riv. Nuovo Cimento 19, 1 (1996)
[cap1] S. Capozziello and R. de Ritis, Phys. Lett. A 177, 1 (1993)
[cap2] S. Capozziello and G. Lambiase, Gen. Relativ. Gravit. 32, 673 (2000)
[san2] A.K. Sanyal, Phys. Lett. B 524, 177 (2002)
[camci] U. Camci and Y. Kucukakca, Phys. Rev. D 76, 084023 (2007)
[Basilakos] S. Basilakos, M. Tsamparlis and A. Paliathanasis, Phys. Rev. D 83, 103512 (2011)
[kuc3] Y. Kucukakca, U. Camci and I. Semiz, Gen. Relativ. Gravit. 44, 1893 (2012)
[muh] M. Sharif and S. Waheed, J. Cosmol. Astropart. Phys. 02, 043 (2013)
[Kremer4] R.C. de Souza, R. André and G.M. Kremer, Phys. Rev. D 87, 083510 (2013)
[pali] A. Paliathanasis, M. Tsamparlis, S. Basilakos and S. Capozziello, Phys. Rev. D 89, 063532 (2014)
[isil] U. Camci, A. Yildirim and I. Basaran Oz, Astropart. Phys. 76, 29 (2016)
[beli] J.A. Belinchon, T. Harko and M.K. Mak, Astrophys. Space Sci. 361, 52 (2016)
[beli1] J.A. Belinchon, T. Harko and M.K. Mak, Int. J. Mod. Phys. D 26, 1750073 (2017)
[o1] A. Paliathanasis, M. Tsamparlis, S. Basilakos and J.D. Barrow, Phys. Rev. D 91, 123535 (2015)
[papag] G. Papagiannopoulos, J.D. Barrow, S. Basilakos, A. Giacomini and A. Paliathanasis, Phys. Rev. D 95, 024021 (2017)
[Cap08] S. Capozziello and A. De Felice, J. Cosmol. Astropart. Phys. 08, 016 (2008)
[Vakili08] B. Vakili, Phys. Lett. B 664, 16 (2008)
[ros] M. Roshan and F. Shojai, Phys. Lett. B 668, 238 (2008)
[pal] A. Paliathanasis, M. Tsamparlis and S. Basilakos, Phys. Rev. D 84, 123514 (2011)
[hus] I. Hussain, M. Jamil and F.M. Mahomed, Astrophys. Space Sci. 337, 373 (2012)
[sha] M.F. Shamir, A. Jhangeer and A.A. Bhatti, Chin. Phys. Lett. 29, 080402 (2012)
[daraa] F. Darabi, K. Atazadeh and A. Rezaei-Aghdam, Eur. Phys. J. C 73, 2657 (2013)
[hy] M. Sharif and I. Nawazish, J. Exp. Theor. Phys. 120, 49 (2015)
[wei2] H. Wei, X.J. Guo and L.F. Wang, Phys. Lett. B 707, 298 (2012)
[ata] K. Atazadeh and F. Darabi, Eur. Phys. J. C 72, 2016 (2012)
[jam1] M. Jamil, D. Momeni and R. Myrzakulov, Eur. Phys. J. C 72, 2137 (2012)
[moh] H. Mohseni Sadjadi, Phys. Lett. B 718, 270 (2012)
[hann] H. Dong, J. Wang and X. Meng, Eur. Phys. J. C 73, 2543 (2013)
[aslam] A. Aslam, M. Jamil and R. Myrzakulov, Phys. Scr. 88, 025003 (2013)
[pal2] S. Basilakos, S. Capozziello, M. De Laurentis, A. Paliathanasis and M. Tsamparlis, Phys. Rev. D 88, 103526 (2013)
[pal3] A. Paliathanasis, S. Basilakos, E.N. Saridakis, S. Capozziello, K. Atazadeh, F. Darabi and M. Tsamparlis, Phys. Rev. D 89, 104042 (2014)
[k1] M. Sharif and R. Saleem, Astrophys. Space Sci. 357, 71 (2015)
[k2] M. Sharif and I. Nawazish, Gen. Relativ. Gravit. 49, 76 (2017)
[k3] M. Sharif and I. Nawazish, Eur. Phys. J. C 77, 189 (2017)
[k4] M.F. Shamir and M. Ahmad, Eur. Phys. J. C 77, 55 (2017)
[k5] M.F. Shamir and M. Ahmad, Mod. Phys. Lett. A 32, 1750086 (2017)
[k6] M.F. Shamir and F. Kanwal, Eur. Phys. J. C 77, 286 (2017)
[k7] N. Dimakis, A. Giacomini, S. Jamal, G. Leon and A. Paliathanasis, Phys. Rev. D 95, 064031 (2017)
[k8] A. Paliathanasis, Phys. Rev. D 95, 064062 (2017)
[k9] B. Tajahmad, Eur. Phys. J. C 77, 211 (2017)
[k10] B. Tajahmad, arXiv:1701.01620 [gr-qc] (2016)
[ba4] S. Bahamonde and S. Capozziello, Eur. Phys. J. C 77, 107 (2017)
[g1] C. Rubano, P. Scudellaro, E. Piedipalumbo, S. Capozziello and M. Capone, Phys. Rev. D 69, 103510 (2004)
[g2] M. Demianski, E. Piedipalumbo, C. Rubano and C. Tortora, Astron. Astrophys. 431, 27 (2005)
http://arxiv.org/abs/1708.07430v1
{ "authors": [ "Ganim Gecim", "Yusuf Kucukakca" ], "categories": [ "gr-qc" ], "primary_category": "gr-qc", "published": "20170824141324", "title": "Scalar-Tensor Teleparallel Gravity With Boundary Term by Noether Symmetries" }
http://arxiv.org/abs/1708.07714v1
{ "authors": [ "Michael Coughlin", "Tim Dietrich", "Kyohei Kawaguchi", "Stephen Smartt", "Christopher Stubbs", "Maximiliano Ujevic" ], "categories": [ "astro-ph.HE", "gr-qc" ], "primary_category": "astro-ph.HE", "published": "20170825125145", "title": "Towards rapid transient identification and characterization of kilonovae" }
[email protected] Institute of Astronomy, University of Cambridge, Madingley Rise, CB3 0HA, UKTensors are a natural way to express correlations among many physical variables, but storing tensors in a computer naively requires memory which scales exponentially in the rank of the tensor. This is not optimal, as the required memory is actually set not by the rank but by the mutual information amongst the variables in question. Representations such as the tensor tree perform near-optimally when the tree decomposition is chosen to reflect the correlation structure in question, but making such a choice is non-trivial and good heuristics remain highly context-specific. In this work I present two new algorithms for choosing efficient tree decompositions, independent of the physical context of the tensor. The first is a brute-force algorithm which produces optimal decompositions up to truncation error but is generally impractical for high-rank tensors, as the number of possible choices grows exponentially in rank. The second is a greedy algorithm, and while it is not optimal it performs extremely well in numerical experiments while having runtime which makes it practical even for tensors of very high rank.Efficient Tree Decomposition of High-Rank Tensors Adam S. Jermyn December 30, 2023 =================================================§ INTRODUCTION In recent years the language of tensors and tensor networks has become the standard for discussing correlations amongst physical variables <cit.>. Disparate concepts such as lattice partition functions <cit.>, differential equations <cit.>, high-energy field theories <cit.>, and quantum computing <cit.> are now being written and examined in this language, and this trend is only increasing.Alongside the conceptual shift towards tensors and tensor networks has come a numerical shift. Techniques such as the density matrix renormalization group (DMRG) and matrix product states (MPS) are now recognized as special cases of tensor network manipulations and representations <cit.>. Likewise there have been renewed efforts towards multidimensional numerical renormalization, and these lean heavily on fundamental tensor operations <cit.>.Along the way a few key operations have been identified as bottlenecks <cit.>. Among the more prominent are contraction of networks <cit.> and efficient representation of individual tensors <cit.>. This work is focused on the latter.There have been several attempts to devise efficient means of representing high-rank tensors <cit.>, and among the most successful for physical applications is the tree tensor <cit.>. Tree tensors represent a means of factoring a high-rank tensor into an acyclic network (tree) formed of rank-3 tensors <cit.>. The advantage of the tree representation is that it may be contracted efficiently: matrix elementsof tree networks may be computed with little memory overhead.Factoring alone does not produce a more compact representation. To reduce the storage requirements of a tensor tree, the dimension of the bonds between constituent tensors must be restricted, either globally or using an error cutoff <cit.>. In the latter case the network represents an approximation of the original tensor, and if the underlying correlations are weak or well-localized then the approximation may be made extremely good even with low-dimensional bonds.The main outstanding difficulty with using tensor trees for this purpose lies in picking a decomposition <cit.>: Which variables or indices ought to go at which points on the tree? 
Historically this has been answered heuristically, and said to depend on the context of the problem at hand <cit.>. For instance, if there is a physical reason to expect two sets of variables to be weakly coupled, they can be placed on opposing sides of the tree. More recently methods which automate the selection have been proposed <cit.> and found to perform well. The purpose of this work is to provide two new algorithms in this vein.

In Section <ref> I review the construction of tensor trees. I then outline a brute force algorithm in Section <ref> and a much faster greedy algorithm in Section <ref>. Both algorithms choose decompositions based purely on the correlation properties of the tensor, with no additional physical reasoning required. In Section <ref> I examine the results of numerical experiments and benchmarks of both algorithms. The brute-force algorithm provides an optimal result up to truncation error. Unfortunately it is too slow to be practical for high-rank tensors. The greedy algorithm is practical even at very high rank while producing results which are almost as good, and so is generally much more useful.

The algorithms and experiments discussed in this work are provided in an open-source Python package. Details of this software are given in Appendix <ref>.

§ TENSOR TREES

§.§ Singular Value Decomposition

The fundamental building block of a tensor tree is the singular value decomposition (SVD), or equivalently the Schmidt decomposition <cit.>. This decomposition allows us to write any matrix M of shape (n,m) in the form

M = U Σ V^†,

where U and V^† are (n,k) and (k,m) matrices with orthonormal columns and rows respectively, k = min(n,m), and Σ is a (k,k) diagonal matrix with non-negative real entries. The diagonal entries of Σ are known as the singular values of M. This decomposition is shown diagrammatically in Penrose notation <cit.> in Fig. <ref>. Briefly, squares represent tensors and lines represent indices. Where lines attached to different tensors connect, those indices are to be contracted.

If M has matrix rank less than k some diagonal entries of Σ vanish, and in this case the corresponding columns in U and rows in V^† may be eliminated. In this way the SVD allows us to produce a compressed representation of M. Similarly, if several diagonal entries of Σ are smaller than some threshold ϵ we may eliminate them and write

M ≈ U̅Σ̅V̅^†,

where the bar indicates that these matrices have had entries eliminated. This approximation is optimal in the sense that there is no better low rank linear approximation for a given threshold <cit.>. As a simple example, consider the matrix

M = 1/√(2)( [ 1 1; 1 1 ]).

This may be decomposed with U = V = (1,1)^T/√(2) and Σ = (√(2)). If we perturb this matrix to

M = 1/√(2)( [ 1 1; 1 1 ]) + δ/√(2)( [ 1 -1; -1 1 ]),

then the full decomposition has

U = V = 1/√(2)( [ 1 1; 1 -1 ]) and Σ = √(2)( [ 1 0; 0 δ ]).

If δ < ϵ we may choose to truncate this decomposition by eliminating the second row and column in Σ, producing the same decomposition as that of equation (<ref>).

As a notational convenience, after truncating Σ we define

Û ≡ U √(Σ) and V̂ ≡ V √(Σ),

and likewise for the approximate matrices. Note that the matrix square root is defined for Σ because it is diagonal. With this notation equation (<ref>) becomes

M = ÛV̂^†

and likewise for equation (<ref>). Fig. <ref> then becomes Fig. <ref>.

To apply the SVD to tensors with more than two indices we transform the tensor into a matrix and decompose that instead.
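Before moving on to the multi-index case, the two-index truncation above can be reproduced in a few lines of numpy. The following is a minimal sketch of the thresholding convention (the tolerance value here is purely illustrative):

```python
import numpy as np

delta = 1e-8
M = (np.array([[1.0, 1.0], [1.0, 1.0]])
     + delta * np.array([[1.0, -1.0], [-1.0, 1.0]])) / np.sqrt(2.0)

U, S, Vh = np.linalg.svd(M)
print(S)                     # ~ [1.414e+00, 1.414e-08]

eps = 1e-6                   # illustrative relative threshold
keep = S > eps * S[0]
Uhat = U[:, keep] * np.sqrt(S[keep])            # \hat{U} = U sqrt(Sigma)
Vhat_dag = np.sqrt(S[keep])[:, None] * Vh[keep] # \hat{V}^dagger
print(np.max(np.abs(M - Uhat @ Vhat_dag)))      # error of order delta
```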
More specifically, for some tensor T with indices i_1, i_2, ..., i_N we produce two groups of indices i_{j_1}, ..., i_{j_M} and i_{j_{M+1}}, ..., i_{j_N} and identify

M_{i_{j_1}, ..., i_{j_M}}, {i_{j_{M+1}}, ..., i_{j_N}} = T_{i_1, i_2, ..., i_N},

where the notation {...} means we merge (flatten) the listed indices into a single index. This identification is depicted graphically in Fig. <ref>. The matrix M may then be decomposed exactly according to equation (<ref>) or approximately according to equation (<ref>). In either case, after decomposition the original indices may be made separate once more, and the resulting diagram is as shown in Fig. <ref>.

§.§ Tree Decomposition

Tree decompositions are a natural extension of the singular value decomposition, in that they simply amount to an iterated SVD. Using the example of Fig. <ref>, a first application of the SVD produces the result in Fig. <ref>. Each of Û and V̂^† may then be decomposed further. For instance we may group indices i_1 and i_2 together so that

M_{i_1, i_2}, {i_3, i_i} = Û_{i_1, i_2, i_3, i_i},

where i_i is the internal line. The matrix M may then be decomposed with the SVD. The result is shown in Fig. <ref>, where A and B result from the decomposition of U.

This process may be iterated until there are no tensors in the graph with more than three indices. This choice was made because the cost of storing tensors scales exponentially in their rank and three is the minimal rank which allows a tree structure. [In some cases this may incur unnecessary overhead. For instance if the indices of a tensor of rank four are maximally correlated, attempting to decompose the tensor into two tensors of rank three doubles the storage required. However this cost is a constant overhead and so is typically acceptable.] Furthermore the choice to proceed until all tensors have three indices is the only choice which can represent tensors of any rank greater than two without invoking tensors of varying ranks, and this uniform structure is considerably simpler to manipulate than a more general one with variable tensor ranks.

At this point further decomposition may be warranted on the original indices, but further decomposition on indices which are internally contracted just introduces further intermediate tensors with no corresponding reduction in rank. This is because the SVD is already optimal, and so if an index results from performing the SVD it cannot be further compressed without raising the error threshold. Because the SVD just introduces intermediate tensors between sets of indices it cannot introduce cycles into the graph, and so the result of this iterative decomposition is a tree. What we mean by a tensor tree, then, is an acyclic tensor network (graph) composed only of tensors with three or fewer indices.

§.§ Choice Sensitivity

To understand why the tree decomposition is sensitive to the precise choice of tree, consider the tensor

T_ijkl = e^{ij}e^{kl}.

There are three possible tree decompositions of T, shown in Fig. <ref>. The corresponding singular values for each are plotted in Fig. <ref>. Because T factors precisely into a product of terms dependent on i and j and ones dependent on k and l, the first decomposition has only one non-zero singular value. In the numerical decomposition the second largest singular value is a factor of 10^16 smaller than the largest, which is within the floating point tolerance.
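This sensitivity is easy to demonstrate directly. The following sketch (our own illustration, with index dimension 4 to match the later examples) prints the normalized singular spectra of the three possible pairings:

```python
import numpy as np

d = 4
i, j, k, l = np.ogrid[0:d, 0:d, 0:d, 0:d]
T = np.exp(i * j) * np.exp(k * l)          # T_{ijkl} = e^{ij} e^{kl}

for perm, name in [((0, 1, 2, 3), '(ij)(kl)'),
                   ((0, 2, 1, 3), '(ik)(jl)'),
                   ((0, 3, 1, 2), '(il)(jk)')]:
    M = np.transpose(T, perm).reshape(d * d, d * d)
    s = np.linalg.svd(M, compute_uv=False)
    print(name, s / s[0])   # only (ij)(kl) has a single significant value
```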
By contrast the other two decompositions have many non-zero singular values, and may require several of these to be included to produce a good approximation. So for instance an approximation which is good to one part in 10^4 requires five singular values to be retained with these decompositions.

Picking a good decomposition only becomes more important for tensors with more indices. This is because each internal index must encode the correlations between all indices on either side of it. Separating a highly correlated pair of indices therefore requires increasing the rank of every intermediate index between them to accommodate their correlations.

§ ALGORITHMS

§.§ Brute Force

A straightforward way to optimize over a finite set of choices is to evaluate every choice and pick the best one. For tensors with only a few indices this is possible: we simply enumerate all possible tree decompositions, evaluate each one, and pick the one which requires storing the fewest elements for a given error threshold ϵ. This enumeration may be done quite efficiently <cit.>, but unfortunately the number of trees grows rapidly with the number of indices. The number of trees with N leaves and three links on each internal node is

S_N = (2N-5)!! = 2^{2-N}(2N-4)!/(N-2)!

<cit.>. For large N,

S_N ∼ 2^{-N}(2N)!/N!.

Applying Stirling's approximation we find

S_N ∼ 2^{-N}(2N/e)^{2N}(N/e)^{-N} = (2N/e)^N,

which grows exponentially with N. As such this method is only practical for tensors with a small number of indices.

Our implementation of this algorithm iterates over all trees. It identifies and performs a sequence of SVD operations to produce each tree. When the truncation error is negligible the order of SVD operations does not matter. In testing, however, there were rare cases in which the order of application of the SVD mattered. This may occur, for instance, when a singular value lies very near to the threshold and so can be pushed to one side or the other by truncation error. Still, in almost all cases the brute force algorithm identifies the optimal tree decomposition, and so we do not further examine all possible SVD orderings. As it stands the algorithm is practical only for relatively small tensors, and examining SVD orderings would only make this worse.

§.§ Greedy Choice

While considering every possible tree decomposition ensures that we find the best one, there are useful heuristics which can often do nearly as well. One such heuristic is to construct the tree by locally minimizing the number of terms left in the truncated SVD at each stage. More specifically, we consider all factorizations of the tensor T with indices i_1, i_2, ..., i_N of the form

T_{i_1, i_2, ..., i_N} ≈ A_{i_{j_1}, i_{j_2}, i_i} T'_{i_i, i_{j_3}, ..., i_{j_N}},

where there is summation over the repeated internal index i_i. Thus we are looking for the pair of indices to factor away from the rest which minimizes the rank of the resulting internal index. This requires checking N(N-1)/2 factorizations except in the special case where N = 4, where symmetry allows us to check just 3 cases. Once we have found the best pair to factor away from T we may iterate and search for the best index pair to factor out from T'. The process ends when it returns two tensors, each with three indices.

It is possible to further improve the performance of this algorithm by noting that most indices of T' are the same as those of T. If T is exactly represented by the contraction of A and T' then we do not need to check pairs of these again, as they represent precisely the same decomposition as before.
If T is only approximately represented then in principle the rank of these pairs may have changed, but in practice for a sufficiently small error threshold ϵ we expect this to be unlikely. As a result we only need to examine pairs involving the new index i_i, of which there are N-2. Iterating until there are just three indices on each tensor requires

1/2 N(N-1) + ∑_{j=1}^{N-4} (j+2) = N^2 - 2N - 2

pairs to be tested if N > 4. In the limit of large N this scales quadratically, and so is immensely preferable to the exponential scaling of the brute force algorithm. Note that for simplicity our implementation of this algorithm does not include this improvement, and has cubic runtime in N. Regardless, the number of comparisons is polynomial in N rather than exponential, which is the key advantage of this algorithm.

It is important to emphasize that this algorithm does not generally produce the optimal tree decomposition, though in specific instances it may. For instance if the tensor was formed from an outer product of rank-2 objects this algorithm would correctly identify the pairs of indices which are uncorrelated from all others, and the resulting tensor tree would be composed of the original rank-2 objects, augmented by links of dimension one to one another and possibly rescaled. Likewise if the tensor was formed by contracting a sequence of identical rank-3 tensors against one another, as in the N-point correlation function of the one-dimensional Ising model (see Section <ref>), the entanglement between any pair of indices and the remainder of the indices is guaranteed to be monotonic in the distance between the indices in the pair, and hence a greedy approach correctly identifies that adjacent indices ought to be split off from the rest. These scenarios are quite special, however, and do not necessarily generalize. Hence the greedy algorithm should be treated as producing approximations to the best choice of decomposition rather than the best decomposition outright.

§.§ Runtime Comparison

In both algorithms the runtime of the SVD typically scales exponentially in N. This is because the number of entries in the matrix scales as d^N, where d is the typical dimension of the indices. This may be avoided by employing approximate decompositions and rank estimations <cit.>. Nevertheless such approximations are not our focus and so we proceed with the naive exponential implementation.

Because the runtime is proportional to the cost of performing decompositions and/or rank estimations, for comparing these two algorithms we may just examine how many such operations must be performed. In the brute-force algorithm this scales as (2N/e)^N, while in the greedy algorithm it scales as N^2. For N > 1 the latter is smaller, and even for modest N the difference in runtime may be enormous. For example, when N=10 the brute force algorithm requires approximately (20/e)^{10} ≈ 5×10^8 comparisons while the greedy one requires just of order 10^2 or 10^3 comparisons depending on the implementation. This makes the latter preferable so long as it produces results of comparable quality.

§ NUMERICAL EXPERIMENTS

We have already discussed the relative runtime of our algorithms and so we proceed directly to examining their relative performance on a variety of tasks. For simplicity our implementation of the greedy algorithm does not take advantage of the fact that index pairs repeat between iterations.
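To make the greedy procedure concrete, the following is a minimal numpy sketch of the basic loop (our own illustration, not the package implementation; it uses the relative threshold of equation (<ref>) and makes no attempt at efficiency):

```python
import itertools
import numpy as np

def truncated_rank(matrix, eps2_rel=1e-6):
    # Number of singular values kept under the relative threshold
    # epsilon^2 = eps2_rel * sum_i Sigma_ii^2 used in the text.
    s = np.linalg.svd(matrix, compute_uv=False)
    cutoff2 = eps2_rel * np.sum(s ** 2)
    tail = np.cumsum(s[::-1] ** 2)   # summed squares of smallest values
    dropped = np.searchsorted(tail, cutoff2, side='right')
    return max(len(s) - dropped, 1)

def best_pair(tensor):
    # Pair of indices whose separation needs the smallest internal bond.
    best_rank, best_ij = None, None
    for i, j in itertools.combinations(range(tensor.ndim), 2):
        rest = [k for k in range(tensor.ndim) if k not in (i, j)]
        mat = np.transpose(tensor, [i, j] + rest)
        mat = mat.reshape(tensor.shape[i] * tensor.shape[j], -1)
        r = truncated_rank(mat)
        if best_rank is None or r < best_rank:
            best_rank, best_ij = r, (i, j)
    return best_rank, best_ij

def greedy_tree(tensor):
    # Split off the cheapest pair until only rank-3 tensors remain.
    pieces = []
    while tensor.ndim > 3:
        r, (i, j) = best_pair(tensor)
        rest = [k for k in range(tensor.ndim) if k not in (i, j)]
        di, dj = tensor.shape[i], tensor.shape[j]
        mat = np.transpose(tensor, [i, j] + rest).reshape(di * dj, -1)
        u, s, vh = np.linalg.svd(mat, full_matrices=False)
        u, s, vh = u[:, :r], s[:r], vh[:r]
        pieces.append((u * np.sqrt(s)).reshape(di, dj, r))
        tensor = (np.sqrt(s)[:, None] * vh).reshape(
            (r,) + tuple(tensor.shape[k] for k in rest))
    pieces.append(tensor)
    return pieces
```

This sketch always splits down to rank three; the variable-size variant discussed later would instead stop splitting whenever the next step fails to reduce the total number of stored entries.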
Furthermore we take the output from the brute force algorithm to be the standard of comparison for the greedy algorithm, as it is optimal in all but certain rare cases. [Because the brute force algorithm relies on the iterated application of the SVD, it may fail to produce optimal results if approximation error from an early SVD application suffices to change the effective ranks of subsequent applications. In such an instance this error could result in a sub-optimal tree being selected. With ϵ set as small as in our numerical experiments this ought to be rare, and indeed we only see it occurring twice. Both instances appear as the greedy algorithm outperforming the brute force one in Figure <ref>.] Timing was done on an Intel IvyBridge processor using the Intel MKL linear algebra backend. The linear algebra was parallelized across cores when this provided a speed improvement (i.e. for large tensors) but the search algorithms were run serially. This prevents competition between threads for linear algebra resources.

For the threshold we use

ϵ^2 = 10^{-6}∑_i Σ_ii^2.

This ensures that the relative L_2 error incurred with each eliminated singular value is no more than 10^{-6}, and means that the normalization of the tensor does not impact our choice of decomposition.

To begin we consider the example of equation (<ref>). We tested the case where all indices range from 0 to 3 inclusive and found that the greedy algorithm identified the optimal decomposition. This is not surprising, as the greedy algorithm reduces to the brute force algorithm when there are just four indices.

We further considered the case

T_ijklm = e^{ij}e^{kl}e^{m},

where once more all indices range from 0 to 3 inclusive. Again the greedy algorithm reproduces the result from the brute force algorithm, which is shown in Figure <ref>. Indeed the links between A and B and between B and C are one-dimensional in this example, reflecting the total factorization of equation (<ref>). This results in an overall compression from 4^5=1024 entries to just 36, and is an example of the best case for this sort of tensor compression.

The decomposition in Figure <ref> remains optimal even for more complex tensors like

T_ijklm = e^{ij}e^{kl}e^{m} + sin(ij)cos(kl)tanh(m)

because the required bond dimension to encode the correlations between m and the remaining indices is less than that required to encode those between either of i and j or k and l. The greedy algorithm once more performs optimally in this case.

To construct more challenging examples we next considered the family of tensors with N indices of dimension 2 where each entry is randomly and independently chosen as either 1 with probability p or 0 with probability 1-p. In the low-p limit tensors drawn from this distribution exhibit non-uniform correlations across the indices: most entries vanish, so the few non-zero entries form clusters, and hence localized correlations are typical. Figure <ref> shows the factor by which each algorithm was able to compress tensors generated in this fashion as a function of the number of indices N, as well as the time required to find this decomposition. For the cases where both algorithms have tractable runtimes they perform very similarly, with the greedy algorithm usually finding optimal or near-optimal solutions. For larger N the brute force algorithm becomes intractable but the greedy algorithm remains performant. This can be seen in the timing curves, where the difference begins at several orders of magnitude and rises rapidly in N even on a logarithmic scale.
Beyond N=8 it was not practical to evaluate this algorithm with enough samples for robust averaging, and so the brute force benchmark ends there. One feature of Figure <ref> worth noting is that this class of tensors becomes less compressible as N increases, and for some N the decompositions are actually less efficient than just directly storing the original tensor. This is not just a feature of the greedy algorithm, as the optimal result shows the same trend as far as it has been evaluated. To understand this, note that for a given pair of indices the tensor may be viewed as a collection of matrices, each of which possesses those two indices. In the large-N limit the number of such matrices increases rapidly. This increase in the mutual entropy between any pair of indices increases the SVD rank required to maintain the specified accuracy. When this rank approaches the original matrix rank the SVD can actually increase the number of entries being stored from nm up to (n+m)min(n,m), where n and m are the matrix dimensions. This results in an overall increase in the memory required to store the tensor.

This incompressibility is actually a common feature of random tensors. For instance Figure <ref> shows the same plot as in the top panel of Figure <ref> but for tensors whose entries are drawn randomly and independently from a normal distribution. For all N the compression ratio is quite poor, in stark contrast to the more structured examples. The two algorithms perform identically for N < 7, and for N=8,9 the brute force algorithm is somewhat better. At larger N the compression ratio of the greedy algorithm improves and plateaus in performance near a compression ratio of unity. We expect that unbiased random tensors ought not to be compressible, so this result is likely close to optimal.

A case with more physical content is that of a tensor encoding the N-point correlation function for a lattice system. For instance in the 1D classical Ising model with open ends, states are weighted as

P({s_i}) ∝ e^{-J ∑_{i=0}^{N-1} s_i s_{i+1}},

where each s_i is drawn from {0,1}. Interpreting each s_i as an index produces a tensor with N indices, shown in Figure <ref>. This object ought to be quite compressible because the correlations it encodes are local and so factor more readily.

Figure <ref> shows the tree decomposition produced by the brute force algorithm for the Ising N-point function with N=9 and J=1. Note that the structure is regular and local, with indices which are neighbours in the Ising model appearing near one another in the tensor tree. This reflects the local nature of correlations in the Ising model.

Figure <ref> shows the compression ratio of the Ising N-point function as a function of system size. The two algorithms produce identical results, so in these cases the greedy algorithm finds the optimal tree decomposition. Once more the brute force algorithm proved impossible to benchmark beyond moderate N, and so only the greedy algorithm is shown for larger values.

Interestingly, the compression ratio in Figure <ref> decreases rapidly as a function of N, in keeping with the intuition that Ising correlations are highly redundant. In particular, the decrease is exponential in N. To understand this, note that the tensor size is 2^N. For finite J there is a corresponding correlation length ξ(J). The size of the compressed tensor is set by the dimensions of the internal bonds in the tree.
This in turn is set by the number of degrees of freedom above the given error threshold, which is proportional to the entanglement entropy between the two sides of the bond, and so scales as 2^{ξ(J)}. [For a more detailed version of this argument see e.g. <cit.>.] As such the compression ratio scales like 2^{ξ(J) - N}, which is consistent with the data in Figure <ref>.

Figure <ref> shows the same compression ratio and timing data as a function of the site-site coupling J. The timing is largely independent of J, which indicates that it is mainly being set by the combinatorics of the problem and not by the underlying matrix transformations. This highlights the need for efficient means of choosing a decomposition.

Once more the two algorithms produce identical results for the compression ratio, with variation only near the transition in compression ratio at J=-2. The increase in compression ratio with J occurs because as J increases correlations become longer-ranged, and so become more difficult to encode in a local model such as a tree decomposition. This manifests as a sharp transition because near J=-2 a specific singular value crosses the threshold ϵ and becomes relevant.

As a further test consider the 2D Ising probability function

P({s_i}) ∝ e^{-J ∑_{⟨ij⟩} s_i s_j},

where ⟨ij⟩ indicates all nearest-neighbour pairs. Figure <ref> shows the compression ratio and timing for this function, again interpreted as a tensor indexed by {s_i}, as a function of J for the case of a 3×3 open-boundary square lattice. In addition to the brute force and greedy algorithms, a variant of the greedy algorithm which uses variable-size tensors was included. This variant leaves tensors of rank greater than three in the final decomposition if the next step of their decomposition would not produce net compression. In that sense it is also greedy in the space of possible final tensor ranks.

Once more the algorithms produce very similar results. In this case, though, the compression ratio is worst for intermediate J. This is matched with an increase in computation time, which makes sense because the SVD in this region is more time consuming. At extreme J the complexity falls because as J→±∞ the ferromagnetic and antiferromagnetic ground states come to dominate the probability distribution. This makes the tensors more compressible.

For most J the regular greedy algorithm outperformed the variable-size one. This reflects the fact that the latter may determine that the next decomposition step produces no net improvement over doing nothing at all, or indeed produces a worse compression ratio. For random tensors, or indeed for the tensor in equation (<ref>) near J=-1 and J=5, this is a correct determination, but in other cases it can lead to premature termination of the decomposition and worse overall performance.

It is worth noting that in this case both greedy algorithms actually outperform the brute force algorithm in small regions near J=-1, J=0 and J=7. These are among the cases where the order of the SVD matters because of the truncation error.

A key takeaway from these examples is that the structure of the tensor matters. For instance it is not surprising that models with local structure such as these should be good use cases for the greedy algorithm, because it is designed to take advantage of the existence of a hierarchy of correlations, identifying the most tightly-correlated indices and moving outward from there.

Finally, for comparison purposes consider Example 6.4 of <cit.>. Let

g(x,y) ≡ 1/(|y - αx| + 1)

for fixed α.
Furthermore, given N∈ℕ, let

ξ_{i_1,...,i_N} ≡ ∑_{μ=1}^N 2^{μ-N} i_μ - 1,

where each i_k ∈ {0,1}. With this they define the tensor

A_{i_1,...,i_2N} ≡ g(ξ_{i_1,...,i_N}, ξ_{i_{N+1},...,i_{2N}}).

This example is non-trivial because neither the structure of g nor the bitwise indexing of ξ permits any obvious factoring.

Table <ref> shows the effective dimension defined in <cit.> for both their adaptive agglomeration scheme and the greedy one introduced in this work. To match the parameters used in their tests the dimension was set to N=8, the tolerance to ϵ = 10^{-8}, and the test was performed for each α ∈ {0, 0.25, 0.5, 0.75, 1.0}. Furthermore, to account for possible differences in how the methods determine the SVD tolerance, the greedy algorithm was also run with a tolerance of 10^{-9}.

In three cases, namely α = 0, α = 0.25 and α = 1.0, the greedy algorithm outperformed the agglomeration algorithm. This was true both for ϵ=10^{-8} and ϵ=10^{-9}. In the remaining cases the agglomeration algorithm outperformed the greedy one. In no case was the performance of one substantially different from that of the other. Note also that both methods require a number of SVD or other decompositions which scales as N^2. Hence in both runtime and performance, at least on this test, these methods are comparable.

§ CONCLUSIONS

High-rank tensors are increasingly common in physical computation, and so there is now a demand for algorithms which can manipulate them efficiently. I have introduced and tested two new algorithms for identifying good tree decompositions of tensors. The first is a straightforward brute-force approach which, up to truncation error, produces the optimal result at the expense of an exponential number of trials. The second is a greedy algorithm which, while not guaranteed to produce the optimal outcome, performs very well in all cases examined while only requiring a quadratic number of comparisons. These fill a crucial niche among the building blocks for tensor computations.

§ ACKNOWLEDGEMENTS

I am grateful to the UK Marshall Commission for financial support as well as to Milo Lin and Ravishankar Sundararaman for helpful discussions regarding this work. This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.

§ SOFTWARE DETAILS

The software used in this work is available under a GPLv3 license at <github.com/adamjermyn/TensorDecomp>, and consists of a collection of Python scripts. For simplicity the implementation of the greedy algorithm does not currently take advantage of the possibility of reusing comparisons. The brute force algorithm enumerates all possible trees by first constructing a tree with one internal node and three indices and then iteratively inserting nodes attached to additional indices in place of edges. These were run on Python v3.6 with NumPy v1.12.1 <cit.> and the plots were created with Matplotlib v1.5.3 <cit.> and the ggplot style, though no specific features of these releases were used.
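The edge-insertion enumeration just described is short to write down. The following is a minimal sketch (our own illustration, not the package code), representing each unrooted tree by its set of edges over leaf labels 0..N-1 and fresh internal labels:

```python
def all_trees(n_leaves):
    # Unrooted trees whose internal nodes have degree three, built by
    # attaching each new leaf to the middle of an existing edge.
    assert n_leaves >= 3
    trees = [{(0, n_leaves), (1, n_leaves), (2, n_leaves)}]  # star on 0,1,2
    for leaf in range(3, n_leaves):
        node = n_leaves + leaf - 2     # fresh internal label per new leaf
        new_trees = []
        for tree in trees:
            for edge in tree:
                t = set(tree)
                t.remove(edge)
                t.update({(edge[0], node), (edge[1], node), (leaf, node)})
                new_trees.append(t)
        trees = new_trees
    return trees

# Reproduces the (2N-5)!! counting: 3 for N=4, 15 for N=5, 105 for N=6.
for n in (4, 5, 6):
    print(n, len(all_trees(n)))
```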
http://arxiv.org/abs/1708.07471v4
{ "authors": [ "Adam S. Jermyn" ], "categories": [ "physics.comp-ph", "cs.NA" ], "primary_category": "physics.comp-ph", "published": "20170824155649", "title": "Efficient Decomposition of High-Rank Tensors" }
CoCoWeb - A Convenient Web Interface for Confluence Tools

Julian Nagele, Aart Middeldorp

We present a useful web interface for tools that participate in the annual confluence competition.

§ INTRODUCTION

In recent years several tools have been developed to automatically prove confluence and related properties of a variety of rewrite formats. These tools compete annually in the confluence competition (CoCo) <cit.>. [<http://coco.nue.riec.tohoku.ac.jp>] Most of the tools can be downloaded, installed, and run on one's local machine, but this can be a painful process. [StarExec provides a VM image with their environment, which can be helpful in case a local setup is essential.] Few confluence tools (we are aware of <cit.>, <cit.>, and <cit.>) provide a convenient web interface to painlessly test the status of a system that is provided by the user.

Inspired by the latest web interface of <cit.>, in this note we present CoCoWeb, a web interface that provides a single entry point to all tools that participate in CoCo. CoCoWeb is available at <http://cl-informatik.uibk.ac.at/software/cocoweb>.

The typical use of CoCoWeb will be to test whether a given confluence problem is known to be confluent or not. This is useful when preparing or reviewing an article, preparing or correcting exams about term rewriting, and when contemplating whether to submit a challenging problem to the confluence problems database (Cops). [<http://cops.uibk.ac.at>] In particular, CoCoWeb is useful when looking for (killer) examples to illustrate a new technique. For instance, in <cit.> a rewrite system is presented that can be shown to be confluent with the technique introduced in that paper. The authors write "Note that we have tried to show confluence [...] by confluence checker [...] and [...], and both of them failed." Despite having an easy-to-use web interface, that tool was not tried. CoCoWeb could also be useful for the CoCo steering committee when integrating newly submitted problems into Cops. The tools run on the same hardware, which is compatible with a single node of StarExec <cit.> that is used for CoCo, allowing for a proper comparison of tools.

In the next section we present the web interface of CoCoWeb by means of a number of screenshots. Implementation details are presented in Section <ref>, and in Section <ref> we list some possibilities for extending the functionality of CoCoWeb in the future.

§ WEB INTERFACE

Figure <ref> shows a screenshot of the entry page of CoCoWeb. Problems can be entered in three different ways: (1) using the text box, (2) uploading a file, or (3) entering the number of a system in Cops. The tools that should be executed can be selected from the tools panel on the left.

Tools are grouped into categories, similar to the grouping in CoCo, except that we merged the certified categories with the corresponding uncertified categories. Multiple tools can be selected. This is illustrated in Figure <ref>. Here we selected the CoCo 2016 and 2015 versions of one tool and the CoCo 2015 version of another, and Cop 500 is chosen as the input problem.

The screenshot in Figure <ref> shows the output of CoCoWeb after clicking the submit button. The output of the selected tools is presented in separate tabs. The colors of these tabs reveal useful information: green means that the tool answered yes, red (not shown) means that the tool answered no, and a maybe answer or a timeout is shown in blue. By clicking on a tab, the color is made lighter and the output of the tool is presented.
The final line of the output is timing information provided by CoCoWeb. The final screenshot (in Figure <ref>) is concerned with the example in <cit.> that we referred to in the introduction.

§ IMPLEMENTATION

Most of CoCoWeb is built using PHP. User input in forms, i.e., rewrite systems and tool selections, is sent using the HTTP POST method. The dynamic parts of the website, namely folding and unfolding in the tool selection menu and the tabs used for viewing tool output, are implemented using JavaScript.

To lay out the tool selection menu we made extensive use of CSS3 selectors. For instance, the buttons to select tools are implemented as checkboxes with labels that are styled according to whether the checkbox is ticked or not. Drawing the edges of the tree menu is also done using CSS, relying mainly on one of its selectors.

The content of the tool menu, i.e., years, the grouping by categories, and the actual tools, is generated automatically from a directory tree that has the structure of the menu in CoCoWeb. There reside small configuration files that specify how the tools are to be run in case they are selected. Two environment variables are set in such a file: one specifies the directory that contains the tool binary, while the other gives the tool invocation, which in turn refers to placeholders for the timeout and the input rewrite system. Using such configuration files, tools are run by a script whose first and second arguments are the configuration of the tool and the input rewrite system, respectively. The script uses three different timeouts: one is the timeout passed to the tool itself if supported, while after the other two a termination signal and then a kill signal are sent to the tool in case it did not terminate of its own volition. Sketches of such a configuration file and of the script are given at the end of this note. When multiple tools are selected, CoCoWeb runs them sequentially, in order to avoid interference.

§ POSSIBLE EXTENSIONS

We conclude this note with some ideas for future extensions of the functionality of CoCoWeb.

* By analyzing the format of the input problem, it is often possible to determine the category to which the problem belongs. This information can then be used to restrict tool selection accordingly.
* Adding support for other competition categories like ground confluence and unique normal forms is an obvious extension. This, however, limits the possibilities to restrict tool selection mentioned in the previous item.
* Selected tools are executed sequentially, and so if many tools are selected, the 60 seconds time limit per tool may result in an unacceptably slow response. In that case a timer option is a welcome feature.
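For concreteness, the configuration files and the wrapper script described in the implementation section could look roughly as follows. These are hedged reconstructions rather than the verbatim files: the variable names TOOLDIR and COMMAND, the placeholders TIMEOUT and FILE, and the two hard timeout values are our own assumptions; only the 60 second soft limit is taken from the text.

```bash
# Hypothetical configuration file for one tool version.
TOOLDIR="bin/sometool-2012"
COMMAND="./sometool --timeout TIMEOUT FILE"
```

```bash
#!/bin/bash
# Hedged sketch of the wrapper script: $1 is the tool configuration
# file, $2 the input rewrite system.
source "$1"

TIMEOUT=60        # soft limit passed to the tool itself, if supported
TIMEOUT_TERM=65   # assumed: after this a termination signal is sent
TIMEOUT_KILL=70   # assumed: after this the tool is killed outright

TRS=$(readlink -f "$2")
cd "$TOOLDIR" || exit 1
CMD=${COMMAND//TIMEOUT/$TIMEOUT}
CMD=${CMD//FILE/$TRS}
timeout -s KILL "$TIMEOUT_KILL" timeout -s TERM "$TIMEOUT_TERM" $CMD
```

The nesting of the two timeout invocations mirrors the described behavior: a gentler signal first and a forced kill shortly after, so that a misbehaving tool cannot block the sequential execution of the remaining tools.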
http://arxiv.org/abs/1708.07876v1
{ "authors": [ "Julian Nagele", "Aart Middeldorp" ], "categories": [ "cs.LO" ], "primary_category": "cs.LO", "published": "20170825202734", "title": "CoCoWeb - A Convenient Web Interface for Confluence Tools" }
The evaluation of the systematic uncertainties for the finite MC samples in the presence of negative weights

Petr Mandrik
[email protected]

Presented at The XXIII International Workshop "High Energy Physics and Quantum Field Theory", NRC «Kurchatov Institute» - IHEP, Protvino

The analysis of results from HEP experiments often involves estimates of the composition of binned data samples, based on Monte Carlo simulations of various sources. Due to the finite statistics of the MC samples, they are subject to statistical fluctuations. This work proposes a method of incorporating the systematic uncertainties due to the finite statistics of MC samples with negative weights. Possible approximations are discussed and a comparison of the different methods is presented.

§ INTRODUCTION

Experimental results in high energy physics are often represented as a binned distribution (histogram) of observed events X = (X_1, X_2, ...), where X_i is the number of events in bin i. The usual method to estimate physical parameters such as particle masses or cross sections from this distribution is to perform a Bayesian or frequentist analysis based on a likelihood function. The likelihood function ℒ(X|m) connects the data with a theoretical model and represents how well the observations are described by the prediction m = (m_1, m_2, ...). This prediction may depend on several different parameters π = (π_1, π_2, ...), comprising both nuisance parameters and parameters of interest; for brevity we do not introduce separate notation for the two kinds. In addition, if some signal or background processes are known from Monte-Carlo simulations, then the likelihood function depends on template distributions t = (t^a_1, t^a_2, ..., t^b_1, t^b_2, ..., t^c_1, t^c_2, ...), where t^k_i is the number of events for process k in bin i, so that in the general case the predicted number of events in bin i is m_i = m_i(π, t_i) = m_i(π, t^a_i, t^b_i, t^c_i, ...) and

ℒ(X|m) = ℒ(X|π, t) = ∏_i P(X_i|π, t_i) = ∏_i P(X_i|π_1, π_2, ..., t^a_i, t^b_i, ...).

It is an important task to define an adequate likelihood function and to take into account all the statistical and systematic uncertainties present in the analysis. Template distributions from Monte-Carlo generators are subject to statistical fluctuations due to the finite number of events in the samples.
The influence of these fluctuations can be expected to be significant in regions with low numbers of Monte-Carlo events. To incorporate such uncertainties into the likelihood function, Barlow and Beeston proposed a method <cit.> wherein for every bin i and every process k one introduces a new parameter T_i^k corresponding to the unknown expected number of events in the infinite-statistics limit:

∏_i P(X_i|π, t_i) → ∏_i [ P(X_i|π, T_i) · ∏_k P(t^k_i|T^k_i) ]

where for the constraint P(t^k_i|T^k_i) Barlow and Beeston assumed a Poisson distribution, with T_i = (T^a_i, T^b_i, T^c_i, ...). On the other hand, several modern Monte-Carlo generators <cit.> produce weighted events with both negative and positive weights. In this case the transformation (<ref>) is not applicable. In this paper we provide a method for incorporating the uncertainties due to the finite statistics of Monte-Carlo samples in the presence of negative weights.

§ LIKELIHOOD FUNCTIONS FOR MONTE-CARLO SAMPLES WITH NEGATIVE WEIGHTS

In simplified form, the event-production algorithm of the most prominent example of a Monte-Carlo generator with negative weights, MadGraph5_aMC@NLO, can be described as follows <cit.>. The cross section of some process σ_NLO is calculated by computing the integrals of two functions F_H(x) and F_S(x):

σ_NLO = ∫ F_H(x) dx + ∫ F_S(x) dx

By construction, the functions F_H(x) and F_S(x) are finite and F_H(x)+F_S(x)>0, but for some values of x the function F_S(x) is negative.
Using probability density functions proportional to the absolute values |F_H(x)| and |F_S(x)|, two sets of events are produced, {x}_H and {x}_S respectively, with weights w_i^H,S equal to +1 where the corresponding function is positive and -1 where it is negative, so that:

σ_NLO = ∫|F_H(x)| dx/N_H · ∑_i^N_H w_i^H + ∫|F_S(x)| dx/N_S · ∑_i^N_S w_i^S

where N_H, N_S are the numbers of events in the corresponding sets. In this way, in the infinite-statistics limit the prediction for any observable in any interval [x_i, x_i + Δx] of a histogram from MadGraph5_aMC@NLO can only be positive, and the fraction of events with negative weights is proportional to ∫_x_i^x_i + Δx |F_S(x)| ℋ(-F_S(x)) dx, ℋ being the Heaviside function. At finite statistics, however, the prediction can take negative values in some bins. On the other hand, the events with negative and positive weights should be treated in the same way during the analysis and pass the same cuts in order to keep the correct cross-section value (<ref>). In what follows we assume that the last condition is satisfied, so that for a finite number of generated Monte-Carlo events the probability of obtaining a given histogram in the case of only positive or only negative weights is described by a multinomial distribution, which is usually approximated by a product of independent Poisson distributions (see, for example, <cit.>), neglecting the correlation between the total number of events and the sum of the events in the bins.
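A toy sketch of the last point (all numbers illustrative): with signed unit weights and sparsely populated bins, individual bin contents can fluctuate to negative totals even though the expectation is positive everywhere.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=200)                        # deliberately sparse sample
w = np.where(rng.uniform(size=x.size) < 0.35, -1.0, 1.0)   # ~35% negative weights

counts, edges = np.histogram(x, bins=20, weights=w)        # signed bin contents
print(counts)
print("bins with negative total content:", int(np.sum(counts < 0)))
```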
Let us consider for simplicity the case of a single histogram bin and a single process t obtained from Monte-Carlo. If t^+ is the sum of all positively weighted events from the Monte-Carlo samples and t^- is the sum of all negatively weighted ones that passed the preceding steps of the physics analysis, then t = t^+ - t^- is the difference of two Poissonian quantities and is described by the Skellam distribution:

P(t) = 𝒮(t | T^+, T^-) = ∑_s = max(0,t)^∞ 𝒫(s|T^+) · 𝒫(s - t|T^-) = e^-(T^+ + T^-) ( T^+/T^- )^t/2 ℐ_t(2√(T^+ T^-))

where ℐ_t is the modified Bessel function of the first kind, 𝒫 is the Poisson distribution, and T^+ and T^- are the parameters of the Skellam distribution, corresponding to the unknown "true" predictions of the numbers of positive and negative events from the MC generator. Here and below we neglect the correlation between the total number of events and the sums of positive and negative events in the bins. Using equation (<ref>) in (<ref>), we obtain the following transformation rule for taking into account the uncertainties due to the finite statistics of Monte-Carlo samples in the presence of negative weights in the likelihood function:

∏_i P(X_i|π, t_i) → ∏_i [ P(X_i|π, T_i) · ∏_k 𝒮(t^k_i|T^k+_i, T^k-_i) ]

where the new parameters are related by T^k_i = T^k+_i - T^k-_i. In practice, however, expression (<ref>) is difficult to use, since the Skellam function does not constrain the effective parameter t^k_i itself but only establishes a relation between t^k+_i and t^k-_i. Moreover, equation (<ref>) does not use part of the available information, namely the values t^+ and t^- over which the summation is performed. The constraint on the parameters T^k_i, T^k+_i, T^k-_i in formula (<ref>) may therefore be improved by an independent treatment of the values t^+ and t^- in the analysis. In this case we get:

P(t) = 𝒫(t^+|T^+) · 𝒫(t^-|T^-)

and from (<ref>) with (<ref>):

∏_i P(X_i|π, t_i) → ∏_i [ P(X_i|π, T_i) · ∏_k 𝒫(t^k+_i|T^k+_i) · 𝒫(t^k-_i|T^k-_i) ]

The number of extra parameters in the transformation rule (<ref>) is equal to 2 × (number of processes) × (number of bins). We can decrease the number of parameters by the maximum-likelihood method. Indeed, if ℒ is a likelihood function with the transformation (<ref>), then using

ln 𝒫(x|μ) = ln μ^x e^-μ/x! = -μ + x · ln μ - ln x!

one gets for bin i of the data histogram:

-ln ℒ_i = -ln P(X_i|π, T_i) - ∑_k ( -T^k+_i + t^k+_i · ln T^k+_i - ln(t^k+_i!)) - ∑_k ( -T^k-_i + t^k-_i · ln T^k-_i - ln(t^k-_i!))
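A minimal sketch of this per-bin objective, suitable for a numerical minimizer. The bin prediction is taken as m = π(T^+ - T^-) with π held fixed, and all numbers are assumptions for illustration:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

X, tp, tm = 25, 40, 12          # data count, positive- and negative-weight MC counts

def nll(params, pi=1.0):
    """-ln L_i with two Poisson constraints; Tp - Tm > 0 enforced by hand."""
    Tp, Tm = params
    if Tp <= 0 or Tm <= 0 or Tp - Tm <= 0:
        return np.inf
    return -(poisson.logpmf(X, pi * (Tp - Tm))
             + poisson.logpmf(tp, Tp) + poisson.logpmf(tm, Tm))

res = minimize(nll, x0=[tp, tm], method="Nelder-Mead")
print(res.x, res.fun)           # profiled T+, T- and the minimum of -ln L_i
```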
The requirement of an extremum gives the following system of equations:

- ∂ln ℒ / ∂T^k+_i = 1 - ∂ln P(X_i|π, T_i) / ∂T^k+_i - t^k+_i/T^k+_i = 0
- ∂ln ℒ / ∂T^k-_i = 1 - ∂ln P(X_i|π, T_i) / ∂T^k-_i - t^k-_i/T^k-_i = 0

Depending on the form of P(X_i|π, T_i), the system (<ref>) may in some cases be solved analytically for the parameters T^k-_i, T^k+_i, or the maximizing values may be found numerically with fixed values of the remaining parameters.

Another way to decrease the number of parameters related to the finite Monte-Carlo statistics is known as the Barlow-Beeston "light" transformation <cit.>. Since the statistical uncertainties of the simulated histograms are independent for each process in each bin, they may be combined and represented approximately by a single effective parameter M_i per bin:

∏_i P(X_i|m_i) → ∏_i P(X_i|M_i) · P(m_i|M_i)

In this approximation, a Gaussian constraint 𝒢(m_i|M_i, σ_i) is usually used for P(m_i|M_i), where the uncertainty σ_i is calculated by propagating the Monte-Carlo statistical uncertainties in bin i with fixed values of the remaining parameters.

For histograms with negative weights the transformation (<ref>) takes the form:

∏_i P(X_i|m_i^+ - m_i^-) → ∏_i P(X_i|M_i^+ - M_i^-) · P(m_i^+|M_i^+) · P(m_i^-|M_i^-)

The number of extra parameters is equal to 2 × (number of bins). For the likelihood function with the transformation (<ref>), a system of equations similar to (<ref>) can be obtained. Assuming Poisson statistics for the data and using the Gaussian constraints one gets:

ℒ_i = 𝒫(X_i|M_i^+ - M_i^-) · 𝒢(m_i^+|M_i^+, σ_i^+) · 𝒢(m_i^-|M_i^-, σ_i^-)

-ln ℒ_i = [(M_i^+ - M_i^-) - X_i · ln(M_i^+ - M_i^-) + ln X_i!] + [ (M_i^+ - m_i^+)^2/2(σ_i^+)^2 + ln σ_i^+ √(2π) ] + [ (M_i^- - m_i^-)^2/2(σ_i^-)^2 + ln σ_i^- √(2π) ]

- ∂ln ℒ / ∂M_i^+ = 1 - X_i/(M_i^+ - M_i^-) + (M_i^+ - m_i^+)/(σ_i^+)^2 = 0
- ∂ln ℒ / ∂M_i^- = -1 + X_i/(M_i^+ - M_i^-) + (M_i^- - m_i^-)/(σ_i^-)^2 = 0
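A corresponding sketch of the "light" variant with negative weights, again with illustrative numbers and with σ^± = √(m^±) taken as the propagated uncertainties for the sake of the example:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, poisson

X, mp, mm = 25, 40.0, 12.0              # data count and signed template sums
sp, sm = np.sqrt(mp), np.sqrt(mm)       # assumed propagated MC uncertainties

def nll(params):
    """-ln L_i with one Gaussian-constrained effective pair (Mp, Mm) per bin."""
    Mp, Mm = params
    if Mp - Mm <= 0:
        return np.inf
    return -(poisson.logpmf(X, Mp - Mm)
             + norm.logpdf(mp, Mp, sp) + norm.logpdf(mm, Mm, sm))

res = minimize(nll, x0=[mp, mm], method="Nelder-Mead")
print(res.x, res.fun)
```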
§ THE PERFORMANCE OF METHODS

In this section we present several results of a study of the proposed transformations for taking into account the uncertainties due to the finite statistics of Monte-Carlo samples, applied to the determination of confidence intervals for model parameters within a Bayesian analysis. The source code was implemented with the statistical package SHTA <cit.>. From the simple single-bin, single-channel model

ℒ_0 = 𝒫(X|π · (C + T^+ - T^-)) · 𝒫(t^+|T^+) · 𝒫(t^-|T^-)

a set of values (X, t^+, t^-) may be generated for fixed values of the parameters π, T^+, T^-. Here the constant C is assumed known and is introduced in order to avoid a long tail in the posterior distribution of π arising when T^+ - T^- ∼ 0. To estimate the parameter of interest π we use three different likelihood functions. First, a naive approach that does not incorporate the uncertainties due to the finite statistics of the Monte-Carlo samples:

ℒ_n = 𝒫(X|π · (C + t^+ - t^-))

second, a likelihood function built with the transformation (<ref>):

ℒ_p = 𝒫(X|π · (C + T^+ - T^-)) · 𝒫(t^+|T^+) · 𝒫(t^-|T^-) · ℋ(T^+ - T^-)

and a similar one, but approximating the product of the two Poisson distributions by a Gaussian:

ℒ_g = 𝒫(X|π · (C + T)) · 𝒢(t^+ - t^-, T, √(t^+ + t^-)) · ℋ(T)

where ℋ is the Heaviside function. The generated set of toy data from (<ref>) is used to perform a Bayesian inference (see, for example, <cit.>) with non-informative flat priors for all parameters. The posterior probability density functions of the parameters were obtained from the likelihood functions (<ref>), (<ref>), (<ref>) and the confidence intervals were computed.
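The comparison can be reproduced in miniature with the sketch below. This is not the SHTA implementation used in the paper: it is a grid-based toy with assumed parameter values, contrasting the naive ℒ_n with the full treatment ℒ_p, whose latent T^+, T^- are marginalized with flat priors.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(1)
C, pi_true, Tp_true, Tm_true = 5.0, 1.0, 40.0, 12.0
X = rng.poisson(pi_true * (C + Tp_true - Tm_true))      # toy data from L_0
tp, tm = rng.poisson(Tp_true), rng.poisson(Tm_true)

pi = np.linspace(0.3, 2.0, 200)

# L_n: plug the observed tp - tm straight into the prediction.
post_n = poisson.pmf(X, pi * (C + tp - tm))
post_n = post_n / post_n.sum()

# L_p: marginalize Tp, Tm on a grid, with the Heaviside cut Tp - Tm > 0.
Tp = np.linspace(1.0, 80.0, 160)
Tm = np.linspace(1.0, 40.0, 80)
P, M = np.meshgrid(Tp, Tm, indexing="ij")
mask = (P - M) > 0
post_p = np.empty_like(pi)
for k, p in enumerate(pi):
    mu = np.where(mask, p * (C + P - M), 1.0)           # guard against mu <= 0
    post_p[k] = np.sum(poisson.pmf(X, mu) * poisson.pmf(tp, P)
                       * poisson.pmf(tm, M) * mask)
post_p = post_p / post_p.sum()

def width(post):
    m1 = np.sum(post * pi)
    return np.sqrt(np.sum(post * pi ** 2) - m1 ** 2)

print("posterior std of pi:  L_n =", width(post_n), " L_p =", width(post_p))
```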
The results for two different sets of initial values of π, T^+, T^- are presented in Table <ref>. From Table <ref> we can see that the difference between the solution with the product of two Poisson distributions and its Gaussian approximation does not exceed one percent. On the other hand, without taking into account the uncertainties due to the finite statistics of the Monte-Carlo samples, as in the likelihood function (<ref>), the accuracy of the measurement falls significantly: for example, for CL = 2σ and the first set of parameters, only in two out of three experiments will the obtained confidence interval contain the true value of the parameter of interest.

§ CONCLUSION

In this work a method of incorporating the systematic uncertainties due to the finite statistics of MC samples with negative weights is presented. The influence of this statistical uncertainty can be expected to be high in regions with low numbers of Monte-Carlo events, and it must be included in the fit. The proposed transformation (<ref>) and its simplified version (<ref>) can be used to construct the correct likelihood function. While using the Gaussian approximation of the product of two Poisson distributions in (<ref>) or (<ref>) leads, in various forms, to the known expressions used in different statistical packages <cit.><cit.>, the choice of the specific form of the likelihood function depends on the analysis, and in some cases the more accurate methods proposed here can improve the results. Moreover, with a larger number of data bins and processes the discrepancy between the Gaussian approximation and (<ref>) is expected to grow, which may lead to an underestimation of the confidence intervals.

§ ACKNOWLEDGMENTS

I wish to thank L. Dudko, S. Slabospitskii and G. Vorotnikov for useful discussions.
<ref>).For one-column wide figures use syntax of figure <ref> For two-column wide figures use syntax of figure <ref> For figure with sidecaption legend use syntax of figure For tables use syntax in table <ref>.
http://arxiv.org/abs/1708.07708v1
{ "authors": [ "Petr Mandrik" ], "categories": [ "physics.data-an", "hep-ex" ], "primary_category": "physics.data-an", "published": "20170825120403", "title": "The evaluation of the systematic uncertainties for the finite MC samples in the presence of negative weights" }
Cosmicflows-3: Cold Spot Repeller?

University of Lyon, UCB Lyon 1, CNRS/IN2P3, IPN Lyon, France
Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu, HI 96822, USA
Racah Institute of Physics, Hebrew University, Jerusalem, 91904 Israel
Institut de Recherche sur les Lois Fondamentales de l'Univers, CEA, Université Paris-Saclay, 91191 Gif-sur-Yvette, France
University of Lyon, UCB Lyon 1, CNRS/IN2P3, IPN Lyon, France
University of Lyon, UCB Lyon 1, CNRS/IN2P3, IPN Lyon, France

The three-dimensional gravitational velocity field within z∼0.1 has been modeled with the Wiener filter methodology applied to the Cosmicflows-3 compilation of galaxy distances. The dominant features are a basin of attraction and two basins of repulsion. The major basin of attraction is an extension of the Shapley concentration of galaxies. One basin of repulsion, the Dipole Repeller, is located near the anti-apex of the cosmic microwave background dipole. The other basin of repulsion is in the proximate direction toward the 'Cold Spot' irregularity in the cosmic microwave background. It has been speculated that a vast void might contribute to the amplitude of the Cold Spot through the integrated Sachs-Wolfe effect.

Key words: large scale structure of universe — galaxies: distances and redshifts

§ INTRODUCTION

All-sky maps of the structure in the universe are increasingly extensive. Here we use the Cosmicflows-3 (CF3) collection of 18,000 galaxy distances <cit.> to study velocities that depart from Hubble expansion on a scale of 0.1c. The data assembly builds on the 8,000 distances of Cosmicflows-2 <cit.>, adding primarily two new sources. All-sky luminosity-linewidth measurements with infrared Spitzer satellite photometry provide distances to relatively nearby galaxies, giving improved coverage at low galactic latitudes <cit.>. Of greatest importance for the present discussion, though, is the contribution from the 6dFGSv program <cit.> of Fundamental Plane distances to galaxies in the south celestial hemisphere, a zone underrepresented in Cosmicflows-2.

§ METHODS

Our derivation of three-dimensional peculiar velocity and mass density fields from observed distances follows the Wiener filter methodology <cit.>, assuming a power spectrum consistent with the Lambda Cold Dark Matter model with the cosmological parameters given by <cit.>.
This Bayesian prior is highly constrained nearby, where the data coverage is dense and accurate, but decays to the exigencies of the power spectrum at large distances, where the data are sparse and have large errors. The amplitudes of the estimated density and velocity fluctuations are suppressed at large distances due to the uncertainties. However, power in the dipole and quadrupole terms signals the influence of important structures at the farthest extremities of the observational data. A familiar way to represent the velocity field is with vectors, either located at specified galaxies or seeded on a grid <cit.>. Our preference is to use streamlines <cit.>. A streamline r⃗(s) is computed by integrating

d r⃗(s) = v⃗(r⃗(s)) d s.

A streamline can be seeded on a regular grid, or at specified positions such as the regions of maxima and minima of the potential. When integrated over many steps, flows from low-density regions either leave the computational box or converge onto a basin of attraction. Anti-flow streamlines, computed from the negative of the flow field, proceed from high densities either out of the box or to basins of repulsion <cit.>. The structures to be discussed are at large distances, where the peculiar velocity field can be badly compromised by Malmquist bias <cit.>. Our methodology will be discussed at length by Graziani et al. (in preparation). Briefly, distance errors create an artifact of flows toward the peak in the sample distribution, because more targets scatter away from the peak than into it. The peak is set by the convolution of the number of candidates increasing with volume and the loss of candidates at large distances. Except very nearby, the true positions of galaxies are approximately set by their redshifts. A probability distribution can be calculated for the peculiar velocity of each target from a Bayesian analysis based on the assumption of Gaussian errors constrained by the observed distance and its error estimate. There is the resolvable complexity that errors that are approximately Gaussian in the distance modulus give log-normally distributed errors in peculiar velocities <cit.>. We evaluate the robustness of our results through comparisons between many constrained realizations <cit.>. The Wiener filter analysis establishes the minimum-variance mean fields of velocity and mass density, and constrained realizations then sample the scatter around the Wiener filter mean field <cit.>.
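To make the streamline construction concrete, the following minimal sketch integrates dr⃗/ds = v⃗(r⃗) from a seed with a fixed-step Runge-Kutta scheme. The velocity field here is an arbitrary analytic stand-in, not the Cosmicflows-3 reconstruction, which in practice would be interpolated from its grid.

```python
import numpy as np

def v(r):
    """Toy velocity field standing in for the reconstructed peculiar flow."""
    x, y, z = r
    return np.array([-y, x, 0.1 * z])

def streamline(seed, ds=0.01, n_steps=1000):
    """Integrate dr/ds = v(r) with classical RK4 from a seed point."""
    r = np.asarray(seed, dtype=float)
    path = [r.copy()]
    for _ in range(n_steps):
        k1 = v(r)
        k2 = v(r + 0.5 * ds * k1)
        k3 = v(r + 0.5 * ds * k2)
        k4 = v(r + ds * k3)
        r = r + (ds / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        path.append(r.copy())
    return np.array(path)

line = streamline([1.0, 0.0, 0.5])
print(line[-1])   # anti-flow lines follow -v instead and converge on repellers
```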
§ RESULTS

Our Wiener filter analysis, based strictly on peculiar velocities and in ignorance of the distribution of galaxies, produces an intriguing complementary view to the structure identified by redshift surveys. Figure 1 and the companion interactive figure[The Sketchfab interactive figure can be launched from https://skfb.ly/6sXCY. Selecting the numbers will take the viewer on a pre-determined path. The model can be manipulated freely with mouse controls.] provide a visual summary based on current information. The extreme peaks and valleys of the gravitational potential inferred in linear theory from the filtered velocity field are represented by iso-potential surfaces, with attractors and repellers distinguished by colors. In the top panel, peculiar velocity flow lines start at seeds within the basins of repulsion and end at the major attractors. In the bottom panel, the flow is inverted and anti-flow streamlines seeded in the basins of attraction converge onto the repellers.

§.§ Repellers

Two repeller sinks are identified in Figure 1. One was already apparent in the Wiener filter study of the precursor Cosmicflows-2 dataset and was named the Dipole Repeller <cit.>. Its location, as computed with the Cosmicflows-3 dataset, is at galactic glon=94^∘, glat=-16^∘, distance=14,000 km s^-1, displaced 2^∘ and 2,000 km s^-1 (12%) closer than found previously. The cosine of the anti-alignment with the direction of the cosmic microwave background dipole <cit.> is μ=-0.99; the agreement in direction is 9^∘. <cit.> evaluated the uncertainties in the direction and depth of the Dipole Repeller based on constrained realizations and sample cuts with the Cosmicflows-2 dataset, and found μ=-0.96±0.04 and a distance of 16,000 km s^-1. The new results are in even better agreement with the dipole anti-pole. It was inferred by Hoffman et al. that the Dipole Repeller accounts for roughly half of the Milky Way motion reflected in the dipole. It is the second repeller sink that draws our current attention. Its rough location is glon=168^∘, glat=-71^∘, velocity=23,000 km s^-1. It lies in the general region of the negative velocity anomaly in Pisces-Cetus noted by <cit.>. We particularly note that it is in the direction of a feature projected against the so-called CMB Cold Spot <cit.>. The Cold Spot is a fluctuation in the cosmic microwave background at glon=209^∘, glat=-57^∘ with an amplitude that, it has been argued, is difficult to reconcile with the standard Λ Cold Dark Matter model <cit.>. There were early hints of an underdensity of galaxies in the same direction <cit.>, suggesting that the Cold Spot may be a manifestation of the integrated Sachs-Wolfe effect <cit.>: the redshifting of radiation passing through a void, due to the asymmetry of the potential in an accelerating universe. <cit.> have strengthened the case for an extremely large void in depth, or a chance superposition of several less extreme voids, in the Cold Spot direction extending over the redshift range 0.05 < z < 0.3. <cit.> argue that the underdense region continues into the relative foreground, and they name the feature the Eridanus supervoid. Several authors <cit.> have evaluated the possibility that the negative fluctuation in the cosmic microwave background is caused by a vast underdensity along the line-of-sight. The general consensus disfavors the proposition that line-of-sight voids could be important enough to create the Cold Spot from the integrated Sachs-Wolfe effect alone. Nevertheless, coincidence or not, a dominant repeller is identified in a direction that overlaps with the Cold Spot in projection. We evaluate the angular extent and significance of the repeller revealed by the velocity field through the method of constrained realizations mentioned briefly in §2. The density contrast with respect to the mean cosmic density within a sphere of radius R centered at a specified location, δ(R) = (ρ(R)-ρ̅)/ρ̅, is derived from the Wiener filter analysis. The uncertainty in density, σ(R), is sampled from 54 constrained realizations.
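Schematically, the significance estimate of the next paragraphs amounts to the following sketch, in which random arrays stand in for the Wiener filter mean field and the 54 constrained realizations:

```python
import numpy as np

rng = np.random.default_rng(2)
fields = rng.normal(0.0, 0.3, size=(55, 32, 32, 32))  # [WF mean] + 54 realizations
cell = 1.0                                            # grid spacing, arbitrary units
center = np.array([16, 16, 16])

ix = np.indices(fields.shape[1:])
dist = np.sqrt(((ix - center[:, None, None, None]) ** 2).sum(axis=0)) * cell

for R in (4.0, 8.0, 12.0):
    sel = dist <= R
    delta = fields[0][sel].mean()                  # mean contrast within the sphere
    sigma = fields[1:][:, sel].mean(axis=1).std()  # r.m.s. over the realizations
    print(R, -delta / sigma)                       # signal-to-noise of an underdensity
```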
The basin of the repeller is identified to lie at SGX, SGY, SGZ = [6,500, -21,300, -6,100] km s^-1. Figure <ref> shows the development of the signal-to-noise ratio -δ(R)/σ(R) with increasing spheres of radius R. There is a complexity associated with Fig. <ref> that requires explanation. The location of the repeller at z∼0.077 lies at the extremity of our data zone (a high density of data to z∼0.05, with sparse supernova sampling to z∼0.1). The solid curve in Fig. <ref> illustrates the ratio of the density signal to the r.m.s. fluctuations within the intersection of spheres of increasing radius centered on the repeller basin and a sphere of radius 25,000 km s^-1 (z∼0.08) centered on our position. The choice of the radius of the sphere centered on us is somewhat arbitrary, but roughly captures the region where the Wiener filter density construction carries information that departs from the mean. In the case of the dashed curve in the figure, the same signal-to-uncertainty information is illustrated, but now within spheres centered on a location in the direction of the Cold Spot. The specific center is at SGX, SGY, SGZ = [3,600, -16,700, -12,900] km s^-1, in the direct line-of-sight of the cosmic microwave background Cold Spot, at the same distance as the repeller basin. The curve plots -δ(R)/σ(R) within the intersection of spheres centered on the identified location and the 25,000 km s^-1 sphere centered on us. From the solid curve in Fig. <ref> it is seen that the significance of the underdensity increases with volume, reaching ∼2.3σ by R∼12,000 km s^-1. The peculiar velocity field provides suggestive evidence for an underdensity on this vast scale.[The signal in the density field is weak at the extremity of the data, where densities from the Wiener filter construction tend to the mean. The potential field is better defined by tidal influences at large distances.] We note that the direction to the cosmic Cold Spot lies within this domain. The dashed curve in the figure illustrates the significance of the underdensity if we shift the assumed center from our preferred location by a distance equivalent to 8,900 km s^-1 to a center along the line-of-sight toward the Cold Spot while keeping the same distance. The significance is reduced, but the scale of the implied underdensity is similar.

§.§ Attractors

The dominant attractor with Cosmicflows-3, as with Cosmicflows-2, is coincident with the Shapley Concentration <cit.>. In detail, there is a shift on the sky of 10^∘ (to glon=306^∘, glat=+22^∘) and in distance of +1,500 km s^-1 (to 15,900 km s^-1). The new 6dFGSv data <cit.> imply that the mass overdensity in this region extends into the galactic plane and to the south of the Milky Way, with a secondary maximum near the south celestial pole. However, aside from the immediate Shapley region, there is a poor correspondence with observed galaxies <cit.> or X-ray clusters <cit.>. Again, as with the Cold Spot Repeller, the postulated structure is at the extremity of the data zone. It has to be suspected that the true nature of the attractor in this region will only be revealed by a study to greater depth.

§ DISCUSSION

In their discussion of the 6dFGSv results, <cit.> compared the velocity field implied by their distance and redshift observations with expectations from two independent redshift surveys <cit.>. With both comparisons, they noted a large-scale departure from the redshift survey expectations, with observed peculiar velocities systematically negative in the Pisces-Cetus sector <cit.> and positive in the Centaurus sector, the domain of the Shapley Concentration <cit.>.
It is a limitation of the redshift survey approach, though, that it takes no account of tidal effects from beyond the range of the survey. Our approach of inferring the mass distribution from velocities accesses information on distant structures through their tidal signatures. Sensitivity to their characteristics diminishes with distance. The repeller in the direction toward the cosmic microwave background Cold Spot is the dominant negative density feature of the Wiener filter construction with Cosmicflows-3. The deviation in direction between the repeller basin and the Cold Spot of 22^∘ is within the uncertainty of our measurement. Our distance to the repeller of ∼23,000 km s^-1 should only be considered a rough lower limit. We call the underdensity identified by the Wiener filter analysis the Cold Spot Repeller because of its coincidence in direction with the Cold Spot fluctuation in the microwave background. We are providing increased evidence for the existence of a substantial void, or succession of voids, in the Cold Spot direction <cit.>. It remains to be determined whether the coincidence in direction has a physical basis. The tentative nature of the association of our repeller and the cosmic microwave background Cold Spot must be acknowledged. The observational claim made here awaits confirmation with redshift and distance measurements to greater distances. However, it is likely that a proper resolution will require surveys to twice the current depth, with serious completion to z∼0.1, involving an order of magnitude more galaxies or distance estimators of higher accuracy.

Acknowledgements

Financial support for the Cosmicflows program has been provided by the US National Science Foundation award AST09-08846, an award from the Jet Propulsion Lab for observations with the Spitzer Space Telescope, and NASA award NNX12AE70G for analysis of data from the Wide-field Infrared Survey Explorer. Additional support has been provided by the Institut Universitaire de France, the CNES, and the Israel Science Foundation (1013/12).

Branchini, E., Teodoro, L., Frenk, C. S., et al. 1999, MNRAS, 308, 1
Campbell, L. A., Lucey, J. R., Colless, M., et al. 2014, MNRAS, 443, 1231
Courtois, H. M., Hoffman, Y., Tully, R. B., & Gottlöber, S. 2012, ApJ, 744, 43
Cruz, M., Martínez-González, E., Vielva, P., et al. 2008, MNRAS, 390, 913
Erdoğdu, P., Lahav, O., Huchra, J. P., et al. 2006, MNRAS, 373, 45
Finelli, F., García-Bellido, J., Kovács, A., Paci, F., & Szapudi, I. 2016, MNRAS, 455, 1246
Fixsen, D. J., Cheng, E. S., Gales, J. M., et al. 1996, ApJ, 473, 576
Granett, B. R., Szapudi, I., & Neyrinck, M. C. 2010, ApJ, 714, 825
Hoffman, Y. 2009, in Lecture Notes in Physics, Berlin Springer Verlag, Vol. 665, Data Analysis in Cosmology, ed. V. J. Martínez, E. Saar, E. Martínez-González, & M.-J. Pons-Bordería, 565-583
Hoffman, Y., Pomarède, D., Tully, R. B., & Courtois, H. M. 2017, Nature Astronomy, 1, 0036
Hoffman, Y., & Ribak, E. 1991, ApJ, 380, L5
Huchra, J. P., Macri, L. M., Masters, K. L., et al. 2012, ApJS, 199, 26
Kocevski, D. D., Ebeling, H., Mullis, C. R., & Tully, R. B. 2007, ApJ, 662, 224
Komatsu, E., Dunkley, J., Nolta, M. R., et al. 2009, ApJS, 180, 330
Kovács, A., & García-Bellido, J. 2016, MNRAS, 462, 1882
Mackenzie, R., Shanks, T., Bremer, M. N., et al. 2017, ArXiv e-prints, arXiv:1704.03814
Magoulas, C., Springob, C. M., Colless, M., et al. 2012, MNRAS, 427, 245
Naidoo, K., Benoit-Lévy, A., & Lahav, O. 2017, ArXiv e-prints, arXiv:1703.07894
Raychaudhury, S. 1989, Nature, 342, 251
Rudnick, L., Brown, S., & Williams, L. R. 2007, ApJ, 671, 40
Sachs, R. K., & Wolfe, A. M. 1967, ApJ, 147, 73
Scaramella, R., Baiesi-Pillastrini, G., Chincarini, G., Vettolani, G., & Zamorani, G. 1989, Nature, 338, 562
Shapley, H. 1930, Harvard College Observatory Bulletin, 874, 9
Sorce, J. G., Courtois, H. M., & Tully, R. B. 2012, AJ, 144, 133
Sorce, J. G., Tully, R. B., Courtois, H. M., et al. 2014, MNRAS, 444, 527
Springob, C. M., Magoulas, C., Colless, M., et al. 2014, MNRAS, 445, 2677
Strauss, M. A., & Willick, J. A. 1995, Phys. Rep., 261, 271
Szapudi, I., Kovács, A., Granett, B. R., et al. 2015, MNRAS, 450, 288
Tully, R. B., Courtois, H. M., & Sorce, J. G. 2016, AJ, 152, 50
Tully, R. B., Scaramella, R., Vettolani, G., & Zamorani, G. 1992, ApJ, 388, 9
Tully, R. B., Courtois, H. M., Dolphin, A. E., et al. 2013, AJ, 146, 86
Vielva, P., Martínez-González, E., Barreiro, R. B., Sanz, J. L., & Cayón, L. 2004, ApJ, 609, 22
Watkins, R., & Feldman, H. A. 2015, MNRAS, 450, 1868
Zaroubi, S., Hoffman, Y., & Dekel, A. 1999, ApJ, 520, 413
http://arxiv.org/abs/1708.07547v1
{ "authors": [ "Helene M. Courtois", "R. Brent Tully", "Yehuda Hoffman", "Daniel Pomarede", "Romain Graziani", "Alexandra Dupuy" ], "categories": [ "astro-ph.CO" ], "primary_category": "astro-ph.CO", "published": "20170824203604", "title": "Cosmicflows-3: Cold Spot Repeller?" }
Symmetry Protected Dynamical Symmetry in the Generalized Hubbard Models

Jinlong Yu, Ning Sun, and Hui Zhai
Institute for Advanced Study, Tsinghua University, Beijing, 100084, China
Collaborative Innovation Center of Quantum Matter, Beijing, 100084, China
=======================================================================

In this letter we present a theorem on the dynamics of the generalized Hubbard models. This theorem shows that the symmetry of the single-particle Hamiltonian can protect a kind of dynamical symmetry driven by the interactions. Here the dynamical symmetry refers to the fact that the time evolution of certain observables is symmetric between the repulsive and attractive Hubbard models. We demonstrate our theorem with three different examples, in which the symmetry involves bipartite lattice symmetry, reflection symmetry and translation symmetry, respectively. Each of these examples relates to a recent cold-atom experiment on dynamics in optical lattices where such a dynamical symmetry is manifested. These experiments include the expansion dynamics of cold atoms, the chirality of atomic motion in a synthetic magnetic field, and the melting of charge-density-wave order. Therefore, our theorem provides a unified view of these seemingly disparate phenomena.

The Hubbard model lies at the heart of the study of strongly correlated quantum matter <cit.>. It describes either fermions or bosons hopping in a lattice with short-range interactions <cit.>. Normally the Hubbard model considers a single-band situation, and its Hamiltonian is written as

Ĥ=Ĥ_0+V̂,

where Ĥ_0 is the single-particle term and V̂ represents the on-site interaction between particles. For spinless bosons,

V̂=U∑_i n_i (n_i-1),

where n_i is the density of bosons at site i; for spin-1/2 fermions,

V̂=U∑_i n_i↑ n_i↓,

where n_iσ (σ=↑,↓) is the density of fermions with spin σ at site i. These two cases are called the bosonic and fermionic Hubbard models, respectively. Here U represents the interaction strength, and U>0 (U<0) means repulsive (attractive) interaction. In the simplest situation, the single-particle Hamiltonian Ĥ_0 contains only (real-valued) nearest-neighbor hopping terms. In more involved settings, it can also contain terms such as a periodic modulation of the on-site energy <cit.>, and gauge fields can add extra phases to the hopping coefficients <cit.>. We term these interacting models with different Ĥ_0 the generalized Hubbard models.

In the past decades, the Hubbard model has also been a central topic of cold-atom quantum simulations <cit.>, for at least two reasons. First, by loading ultracold bosons or fermions into optical lattices, the system is a faithful representation of the bosonic or fermionic Hubbard model, because the multi-band effects and the longer-range interactions are sufficiently weak that they can be safely ignored <cit.>. Second, these cold-atom systems are particularly suitable for studying quantum dynamics <cit.> in strongly correlated systems, which is less studied in previous investigations in the context of condensed matter systems. For instance, one can prepare the many-body system in a certain initial state and experimentally observe the time evolution of this state. In the past decade, quite a few experiments have carried out such investigations.
Here we briefly review three of them:

Munich 2012: In this experiment from the Munich group, they first prepare a Fermi gas in a band-insulator state in the presence of a harmonic trap; then they turn off the harmonic trap and let the gas expand in a uniform three-dimensional cubic lattice <cit.>. The dramatic finding is that the expansion dynamics is identical between the two systems with +U and -U. In a related earlier experiment, the same group also found that a Fermi gas expands (instead of shrinking) when the interaction becomes attractive, which is quite counterintuitive <cit.>.

Munich 2015: Motivated by many-body localization, the Munich group investigated the relaxation of a charge-density-wave (CDW) state of fermions in the presence of an incommensurate lattice potential <cit.>. They observed how the CDW order evolves in time and saturates at longer times. As a side result, they also found that the dynamics of this CDW order is symmetric between positive and negative U.

Harvard 2017: The Harvard group realized a two-leg Harper-Hofstadter model, in which there exists a uniform synthetic magnetic flux through each plaquette <cit.>. They focused on studying the chirality in this model by loading one or two bosons into the ladder. Here chirality means that the wave function is more concentrated in the upper leg when the atoms move in one direction (say, to the left) and more concentrated in the lower leg when they move in the opposite direction. Such chiral motion has been observed in the single-particle case. However, considering the two-atom case with a certain initial state, they surprisingly found that the chirality vanishes if no interaction is applied, and that chirality is induced when the interaction is turned on.

One common feature of all three experiments is that the time evolution of certain observables is symmetric between the repulsive and attractive interaction models. Following Ref. <cit.>, we term this symmetry a kind of "dynamical symmetry". The main result of this letter is the following theorem. It shows that the existence of a symmetry of the single-particle Hamiltonian [Eq. (<ref>)] is the key to guaranteeing this dynamical symmetry. Therefore, we term the phenomenon described by this theorem "symmetry protected dynamical symmetry" <cit.>. The significance of this theorem is that it shows that the symmetry of the single-particle Hamiltonian can impose a strong constraint on the dynamics induced by the interactions. We will show that all three experimental observations above can be understood as special cases of this theorem.

Theorem. For the Hamiltonian Ĥ=Ĥ_0+V̂, suppose we can find an antiunitary operator Ŝ = R̂Ŵ, where R̂ is the (antiunitary) time-reversal operator and Ŵ is a unitary operator, that satisfies the following conditions:

(i) Ŝ anticommutes with Ĥ_0 and commutes with V̂, i.e.

{Ŝ,Ĥ_0} = 0, [Ŝ,V̂] = 0;

(ii) The initial state |Ψ_0⟩ only acquires a global phase factor under Ŝ, i.e.

Ŝ^-1|Ψ_0⟩ = e^iχ|Ψ_0⟩;

(iii) The given Hermitian operator Ô is even or odd under the symmetry operation Ŝ, i.e.

Ŝ^-1 Ô Ŝ = ±Ô.

Then we can conclude

⟨O(t)⟩_+U = ±⟨O(t)⟩_-U,

where ⟨O(t)⟩_±U denotes the expectation value of Ô in the state |Ψ(t)⟩=e^-iĤt|Ψ_0⟩ with interaction strength ±U in Ĥ, respectively.

Proof of the Theorem. The proof of this theorem is straightforward. First, we use condition (i) and obtain

Ŝ^-1 e^-i(Ĥ_0 + V̂)t Ŝ = exp[iŜ^-1(Ĥ_0 + V̂)Ŝt] = exp[-i(Ĥ_0 - V̂)t].

Here, R̂^-1 i R̂ = -i is used in the first line.
Then, with Eq. (<ref>) and using conditions (ii) and (iii), we obtain

⟨Ô(t)⟩_+U = ⟨Ψ_0|e^i(Ĥ_0 + V̂)t Ô e^-i(Ĥ_0 + V̂)t|Ψ_0⟩
= ⟨Ψ_0|Ŝ e^i(Ĥ_0 - V̂)t (Ŝ^-1 Ô Ŝ) e^-i(Ĥ_0 - V̂)t Ŝ^-1|Ψ_0⟩
= ⟨Ψ_0|e^-iχ (e^i(Ĥ_0 - V̂)t) (±Ô) (e^-i(Ĥ_0 - V̂)t) e^iχ|Ψ_0⟩
= ±⟨O(t)⟩_-U.

Hence the theorem is proved. We remark that our theorem, as well as the examples below, works equally well for the bosonic and fermionic Hubbard models with the interaction terms of Eq. (<ref>) and Eq. (<ref>), respectively. Hereafter different examples have different Ĥ_0, and we use ĉ_iσ to denote the annihilation operator for either a boson or a fermion at site i with spin σ. (For spinless bosons, the σ index can be ignored.)

Example 1: We consider particles hopping with nearest-neighbor hopping only, in which

H_0 = -J∑_⟨ij⟩,σ ĉ^†_iσ ĉ_jσ,

where J is the hopping amplitude. This is the simplest case, and it explains the Munich 2012 experiment <cit.> well. In fact, a similar discussion specific to this model has been presented in Ref. <cit.>. Nevertheless, we view it as one application of our theorem and review it here for completeness. Obviously, this Ĥ_0 is invariant under the time-reversal operation. If the lattice is a bipartite lattice containing A and B sublattices, say, a square lattice, we have a symmetry operator Ŵ defined as

Ŵ^-1 ĉ_iσ Ŵ = -ĉ_iσ, if i∈A,
Ŵ^-1 ĉ_iσ Ŵ = ĉ_iσ, if i∈B.

Because hopping on a bipartite lattice only takes place between the A and B sublattices, it is easy to show that, with this choice of Ŵ, Ŝ^-1 Ĥ_0 Ŝ = -Ĥ_0. It is also easy to show that this transformation Ŝ leaves V̂ invariant. Thus, we have found an operation Ŝ satisfying condition (i) of our theorem. It is also easy to see that, when the initial state is chosen as a band insulator, it is invariant under Ŝ and condition (ii) is satisfied; and when the observable is the density operator n̂_iσ, it satisfies condition (iii) with a plus sign. Thus, our theorem applies. We should also remark that the bipartite lattice geometry plays a crucial role here. If the lattice is not bipartite, say, a triangular lattice, we cannot find such a Ŵ.

In Fig. <ref>, we numerically demonstrate this statement by loading two interacting bosons into two kinds of ladders with different geometries. The initial state is chosen as two bosons placed at two nearest-neighboring sites. We find that the time-dependent local densities at different sites obey this dynamical symmetry when the lattice is a square lattice [Fig. <ref>(c)], and do not obey this dynamical symmetry when the lattice is a triangular one [Fig. <ref>(d)].
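A minimal numerical check of Example 1, with a four-site chain as the simplest bipartite lattice and assumed values of U and the evolution time (the numbers are illustrative, not those of Fig. 1): the local densities for +U and -U come out identical.

```python
import itertools
import numpy as np
from scipy.linalg import expm

L, J, t = 4, 1.0, 1.3                  # chain length, hopping, evolution time
basis = [n for n in itertools.product(range(3), repeat=L) if sum(n) == 2]
index = {n: k for k, n in enumerate(basis)}

def hamiltonian(U):
    H = np.zeros((len(basis), len(basis)))
    for k, n in enumerate(basis):
        H[k, k] = U * sum(ni * (ni - 1) for ni in n)   # interaction, Eq. (2)
        for i in range(L - 1):                          # nearest-neighbor hops
            if n[i] > 0 and n[i + 1] < 2:
                m = list(n); m[i] -= 1; m[i + 1] += 1
                amp = -J * np.sqrt(n[i] * (n[i + 1] + 1))
                H[index[tuple(m)], k] += amp            # hop and its Hermitian
                H[k, index[tuple(m)]] += amp            # conjugate
    return H

psi0 = np.zeros(len(basis))
psi0[index[(1, 1, 0, 0)]] = 1.0        # two bosons on neighboring sites
for U in (+2.0, -2.0):
    psi = expm(-1j * hamiltonian(U) * t) @ psi0
    dens = [sum(abs(psi[k]) ** 2 * n[i] for k, n in enumerate(basis))
            for i in range(L)]
    print(U, np.round(dens, 8))        # identical rows for +U and -U
```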
Example 2: In this case we consider atoms hopping in a square lattice with a uniform magnetic flux φ through each plaquette <cit.>, as shown in Fig. <ref>(a). We can choose a particular gauge such that the hopping along the x direction acquires a nontrivial phase, and the corresponding Harper-Hofstadter Hamiltonian can be written as <cit.>

Ĥ_0 = -J_x∑_i,σ[e^i(i_y-1/2)φ ĉ^†_i_x,i_y,σ ĉ_i_x-1,i_y,σ + h.c.] - J_y∑_i,σ[ĉ^†_i_x,i_y,σ ĉ_i_x,i_y-1,σ + h.c.].

Because the time-reversal operation changes the flux φ to -φ, the choice of the Ŵ operator as in Eqs. (<ref>) and (<ref>) does not work here. Instead, one has to include a reflection in the definition of Ŵ; the reflection axis is the middle line between i_y=0 and i_y=1, as shown by the dashed line in Fig. <ref>(a). That is to say, Ŵ is defined as

Ŵ^-1 ĉ_i_x,i_y,σ Ŵ = (-1)^i_x+i_y ĉ_i_x,1-i_y,σ,

with which Ŝ satisfies condition (i). In the Harvard 2017 experiment <cit.>, they consider a two-leg ladder (with i_y=0 and i_y=1) loaded with bosons. Their initial state is prepared as

|Ψ_0⟩ = 1/4[(ĉ^†_0,0+ĉ^†_0,1)^2 - (ĉ^†_0,0-ĉ^†_0,1)^2]|0⟩ = ĉ^†_0,0 ĉ^†_0,1|0⟩.

This initial state is invariant with respect to the reflection defined above, and consequently is invariant under Ŝ. Furthermore, the chirality they consider is whether the atoms moving to the right are more concentrated in the upper leg than the atoms moving to the left. To quantify the amount of chirality, they define the shearing Δy_COM [i.e., the difference between the center-of-mass (COM) displacements along the y direction for the right and left halves] as follows:

Δy_COM(t) = ⟨Ô^R_-(t)⟩/⟨Ô^R_+(t)⟩ - ⟨Ô^L_-(t)⟩/⟨Ô^L_+(t)⟩,

where

Ô^R_± = ∑_i_x>0(n̂_i_x,i_y=1 ± n̂_i_x,i_y=0), Ô^L_± = ∑_i_x<0(n̂_i_x,i_y=1 ± n̂_i_x,i_y=0).

It is straightforward to show that

Ŝ^-1 Ô^R/L_± Ŝ = ±Ô^R/L_±,

and as a consequence of our theorem, we have

Δy_COM(t)|_+U = -Δy_COM(t)|_-U.

Because Δy_COM is an odd function of U, it must be zero for all times when U=0. This leads to the conclusion that the chirality vanishes in the non-interacting case. The insight from this theorem is that this conclusion essentially depends on the choice of the initial state. Our theorem does not hold if the initial state does not respect the symmetry defined in Eq. (<ref>); for instance, we can consider an alternative two-body state

|Ψ_0⟩ = 1/2(ĉ^†_0,1+ĉ^†_1,0)(ĉ^†_1,1-ĉ^†_0,0)|0⟩,

and we shall change the summation in the definition of Ô^R_± to i_x>1. It is easy to show that this initial state does not respect the symmetry operation Ŝ, and Δy_COM is no longer an odd function of U. Thus, Δy_COM is finite in the non-interacting case. In other words, in order to have the phenomenon of "interaction induced chirality" observed in the Harvard 2017 experiment <cit.>, one condition is that the initial state is chosen to respect this symmetry operation Ŝ that includes the reflection.

In Fig. <ref>, we show the numerical results for the time evolution of two bosons with these two different initial states, respectively, and the results are fully consistent with this conclusion.
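The symmetric-initial-state case can be checked with the following sketch; the flux, interaction, time, and ladder length are assumed values for illustration, not those of the experiment. The shearing comes out odd in U and vanishes at U=0.

```python
import numpy as np
from scipy.linalg import expm

Jx = Jy = 1.0
phi, U0, t = np.pi / 2, 2.0, 2.0                 # assumed flux, interaction, time
xs = list(range(-3, 4))                          # i_x = -3..3, legs i_y = 0, 1
sites = [(ix, iy) for ix in xs for iy in (0, 1)]
idx = {s: a for a, s in enumerate(sites)}
M = len(sites)

h = np.zeros((M, M), dtype=complex)              # single-particle part, gauge above
for ix, iy in sites:
    if (ix - 1, iy) in idx:
        h[idx[(ix, iy)], idx[(ix - 1, iy)]] = -Jx * np.exp(1j * (iy - 0.5) * phi)
    if (ix, iy - 1) in idx:
        h[idx[(ix, iy)], idx[(ix, iy - 1)]] = -Jy
h = h + h.conj().T

basis = [(a, b) for a in range(M) for b in range(a, M)]   # two bosons on M sites
K = {p: k for k, p in enumerate(basis)}

def ham(U):
    H = np.zeros((len(basis), len(basis)), dtype=complex)
    for k, p in enumerate(basis):
        n = [0] * M; n[p[0]] += 1; n[p[1]] += 1
        H[k, k] += U * sum(x * (x - 1) for x in n)        # interaction, Eq. (2)
        for a in range(M):
            for b in range(M):
                if h[a, b] != 0 and n[b] > 0:             # bosonic hop b -> a
                    m = n.copy(); m[b] -= 1; m[a] += 1
                    q = tuple(s for s, x in enumerate(m) for _ in range(x))
                    H[K[q], k] += h[a, b] * np.sqrt(n[b] * m[a])
    return H

def shear(d):                                    # Delta y_COM from site densities
    def half(side):
        num = sum(d[idx[(ix, 1)]] - d[idx[(ix, 0)]] for ix in xs if side * ix > 0)
        den = sum(d[idx[(ix, 1)]] + d[idx[(ix, 0)]] for ix in xs if side * ix > 0)
        return num / den
    return half(+1) - half(-1)

psi0 = np.zeros(len(basis))
psi0[K[(idx[(0, 0)], idx[(0, 1)])]] = 1.0        # the state c+_{0,0} c+_{0,1} |0>
for U in (U0, 0.0, -U0):
    psi = expm(-1j * ham(U) * t) @ psi0
    d = np.zeros(M)
    for k, p in enumerate(basis):
        d[p[0]] += abs(psi[k]) ** 2; d[p[1]] += abs(psi[k]) ** 2
    print(U, round(shear(d), 8))                 # odd in U, zero at U = 0
```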
Example 3. In this example we consider a one-dimensional model with an extra on-site potential energy, as schematically shown in Fig. <ref>(a), for which the single-particle Hamiltonian takes the form of the Aubry-André model <cit.>

Ĥ_0 = ∑_i,σ[-J(ĉ^†_i,σ ĉ_i+1,σ + h.c.) + Δcos(2πβ i + θ) n̂_i,σ].

Here, Δ is the strength of the superlattice potential, β controls the superlattice periodicity, and θ is a phase offset. We consider the case where β=p/q is a rational number, and discuss the following three situations:

(i) p is odd, q is even, and q/2 is also an even integer. In order for the on-site energy term to acquire a minus sign under the symmetry operation, we have to introduce a proper translation into the definition of Ŵ, that is,

Ŵ^-1 ĉ_i,σ Ŵ = (-1)^i ĉ_i+q/2,σ.

If the initial state is a uniform state, it is invariant under this translation. In the Munich 2015 experiment <cit.>, they consider a CDW initial state in which the density alternates between even and odd sites. Since q/2 is also an even number, the CDW state is likewise invariant under this translation. They examine the time evolution of the density imbalance between the even and odd sites with the operator

ℐ̂ = (∑_i∈even n̂_iσ - ∑_i∈odd n̂_iσ) / (∑_i∈even n̂_iσ + ∑_i∈odd n̂_iσ).

In this case, we have Ŝ^-1 ℐ̂ Ŝ = ℐ̂. Thus, we conclude that for both the uniform and CDW states, ⟨ℐ̂(t)⟩_+U = ⟨ℐ̂(t)⟩_-U. In the Munich 2015 experiment, β=532/738≈0.721, which can to a certain extent be reasonably approximated by 3/4. In Figs. <ref>(b) and <ref>(c), we illustrate this with a numerical solution of the two spin-1/2 fermion case with β=1/4. The initial states for the uniform and CDW cases are taken respectively as

|Ψ_0⟩_uniform = (1/√N)∑_i=1^N ĉ^†_i↑ ĉ^†_i↓|0⟩, |Ψ_0⟩_CDW = (1/√(N/2))∑_i∈even ĉ^†_i↑ ĉ^†_i↓|0⟩.

We find that, for both cases, the time-dependent imbalance ⟨ℐ̂(t)⟩ is even in U.

(ii) p is odd, q is even, but q/2 is an odd integer. In this case, the symmetry operator for Ĥ_0 is still defined as in Eq. (<ref>), and a uniform initial state is still invariant under this translation. Nevertheless, since q/2 is now odd, the CDW state defined above does not obey this symmetry. Moreover, in this case Ŝ^-1 ℐ̂ Ŝ = -ℐ̂. Thus, we can conclude that, if the initial state is a uniform state, ⟨ℐ̂(t)⟩ is odd in U; and if the initial state is a CDW state, there is no symmetry between positive and negative U. Our numerical calculation for the two-atom case with β=1/6 confirms this conclusion, as shown in Figs. <ref>(d) and <ref>(e).

(iii) q is odd. In this case, no matter whether p is even or odd, it can be shown that no symmetry operator satisfies Ŝ^-1 Ĥ_0 Ŝ = -Ĥ_0. Therefore, there is no dynamical symmetry in this case for either the uniform or the CDW initial state.
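Case (i) can be verified with the sketch below for β=1/4, with assumed values for Δ, θ, U, and the time; periodic boundary conditions are used so that the translation in Ŵ is an exact symmetry of the finite chain. The imbalance for the CDW state comes out identical for +U and -U.

```python
import numpy as np
from scipy.linalg import expm

L, J, t = 8, 1.0, 3.0
Delta, beta, theta = 0.8, 0.25, 0.0              # assumed superlattice parameters

h = np.zeros((L, L))
for i in range(L):
    h[i, (i + 1) % L] = h[(i + 1) % L, i] = -J   # periodic boundary conditions
    h[i, i] = Delta * np.cos(2 * np.pi * beta * i + theta)

def imbalance(U):
    # one up and one down fermion: Hilbert space is the L x L product space
    H = np.kron(h, np.eye(L)) + np.kron(np.eye(L), h)
    for i in range(L):
        H[i * L + i, i * L + i] += U             # on-site up-down interaction
    psi0 = np.zeros(L * L, dtype=complex)        # CDW of doublons on even sites
    for i in range(0, L, 2):
        psi0[i * L + i] = 1.0 / np.sqrt(L / 2)
    psi = expm(-1j * H * t) @ psi0
    prob = np.abs(psi.reshape(L, L)) ** 2
    n = prob.sum(axis=1) + prob.sum(axis=0)      # n_up + n_down per site
    return (n[0::2].sum() - n[1::2].sum()) / n.sum()

print(imbalance(+2.0), imbalance(-2.0))          # equal: <I(t)> is even in U
```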
Nascimbène, Nat. Phys. 8, 267 (2012).
[LewensteinBook2012] M. Lewenstein, A. Sanpera, and V. Ahufinger, Ultracold Atoms in Optical Lattices: Simulating Quantum Many-Body Systems (Oxford University Press, Oxford, UK, 2012).
[AltmanReview2015] E. Altman, arXiv:1512.00870.
[Munich2012] U. Schneider, L. Hackermüller, J. P. Ronzheimer, S. Will, S. Braun, T. Best, I. Bloch, E. Demler, S. Mandt, D. Rasch, and A. Rosch, Nat. Phys. 8, 213 (2012).
[Munich2010] L. Hackermüller, U. Schneider, M. Moreno-Cardoner, T. Kitagawa, T. Best, S. Will, E. Demler, E. Altman, I. Bloch, and B. Paredes, Science 327, 1621 (2010).
[Harvard2017] M. E. Tai, A. Lukin, M. Rispoli, R. Schittko, T. Menke, D. Borgnia, P. M. Preiss, F. Grusdt, A. M. Kaufman, and M. Greiner, Nature 546, 519 (2017).
[footnote] In condensed matter physics, the terminology “symmetry protected topological phase" means that the existence of a certain symmetry can guarantee a topological classification and topologically nontrivial phases. Following the same spirit, we introduce the terminology “symmetry protected dynamical symmetry", which indicates that the existence of a certain symmetry of the static Hamiltonian can guarantee a dynamical symmetry during the time evolution.
[Harper1955] P. G. Harper, Proceedings of the Physical Society, Section A 68, 874 (1955).
[Hofstadter1976] D. R. Hofstadter, Phys. Rev. B 14, 2239 (1976).
[AAModel1980] S. Aubry and G. André, Ann. Israel Phys. Soc. 3, 18 (1980).
{ "authors": [ "Jinlong Yu", "Ning Sun", "Hui Zhai" ], "categories": [ "cond-mat.quant-gas", "cond-mat.str-el" ], "primary_category": "cond-mat.quant-gas", "published": "20170827091454", "title": "Symmetry Protected Dynamical Symmetry in the Generalized Hubbard Models" }
Randomized Dimension Reduction for Monte Carlo Simulations

Nabil Kahalé
ESCP Europe, Labex ReFi, Big Data Research Center, 75011 Paris, France; e-mail: [email protected]

December 30, 2023

Abstract. We present a new unbiased algorithm that estimates the expected value of f(U) via Monte Carlo simulation, where U is a vector of d independent random variables, and f is a function of d variables. We assume that f does not depend equally on all its arguments. Under certain conditions, we prove that, for the same computational cost, the variance of our estimator is lower than the variance of the standard Monte Carlo estimator by a factor of order d. Our method can be used to obtain a low-variance unbiased estimator for the expectation of a function of the state of a Markov chain at a given time-step. We study applications to volatility forecasting and time-varying queues. Numerical experiments show that our algorithm dramatically improves upon the standard Monte Carlo method for large values of d, and is highly resilient to discontinuities.

Keywords: dimension reduction; variance reduction; effective dimension; Markov chains; Monte Carlo methods

§ INTRODUCTION

Markov chains arise in a variety of fields such as finance, queueing theory, and social networks. While much research has been devoted to the study of steady states of Markov chains, several practical applications rely on the transient behavior of Markov chains. For example, the volatility of an index can be modelled as a Markov chain using the GARCH model <cit.>. Financial institutions conducting stress tests may need to estimate the probability that the volatility exceeds a given level in a few years from now. Also, due to the nature of human activity, queueing systems in areas such as health care, manufacturing, telecommunication, and transportation networks often have time-varying features and do not have a steady state. For instance, empirical data show significant daily variation in traffic in wide-area networks <cit.> and in vehicular flow on roads <cit.>. Estimating the expected delay of packets in a wide-area network at a specific time of the day (12pm, say) could be used to dimension such networks. Similarly, estimating the velocity of cars in a region at 6pm could be used to design transportation networks. In the same vein, consider the problem of estimating the queue length at the end of a business day in a call center that operates with fixed hours.
In such call centers, knowing how many calls would still need to be answered at 5pm could be an important metric in estimating staffing requirements. Methods to determine appropriate staffing levels in call centers and other many-server queueing systems with time-varying arrival rates have been designed in <cit.>. Also, approximation tools have been developed to study time-varying queues (see <cit.> and references therein). However, in many situations, there are no analytical tools, except Monte Carlo simulation, to study accurately systems modeled by a Markov chain. A drawback of Monte Carlo simulation is its high computational cost. This motivates the need to design efficient simulation tools to study the transient behavior of Markov chains, with or without time-varying features.

This paper gives a new unbiased algorithm to estimate E(f(U)), where U=(U_1,…,U_d) is a vector of d independent random variables U_1,…,U_d taking values in a measurable space F, and f is a real-valued Borel-measurable function on F^d such that f(U) is square-integrable. For instance, F can be equal to ℝ or to any vector space over ℝ. Under certain conditions, we show that our algorithm yields substantially lower variance than the standard Monte Carlo method for the same computational effort. Our techniques can be used to efficiently estimate the expected value of a function of the state of a Markov chain at a given time-step d, for a class of Markov chains driven by independent random variables. An alternative algorithm for Markov chain estimation, based on Quasi-Monte Carlo sequences, that substantially improves upon standard Monte Carlo in certain numerical examples, is given in <cit.>, with bounds on the variance proven for special situations where the state space of the chain is a subset of the real numbers. In a standard Monte Carlo scheme, E(f(U)) is estimated by simulating n independent vectors in F^d having the same distribution as U, and taking the average of f over the n vectors. In the related Quasi-Monte Carlo method (see <cit.>), f is evaluated at a predetermined deterministic sequence of points. In several applications, the efficiency of Quasi-Monte Carlo algorithms can be improved by reordering the U_i's and/or making a change of variables, so that the value of f(U) depends mainly on the first few U_i's. For instance, the Brownian bridge construction and principal components analysis have been used <cit.> to reduce the error in the valuation of financial derivatives via Quasi-Monte Carlo methods (see <cit.> for related results). The relative importance of the first variables can formally be measured by calculating the effective dimension in the truncation sense, a concept defined in <cit.>: when the first variables are important, the effective dimension in the truncation sense is low in comparison to the nominal dimension. It is proven in <cit.> that Quasi-Monte Carlo methods are effective for a class of functions where the importance of U_i decreases with i. <cit.> apply Quasi-Monte Carlo methods to queueing simulation and option pricing, and examine their connection to the effective dimension. The truncation dimension and a related notion, the effective dimension in the superposition sense, are studied in <cit.>. It is shown in <cit.> that the Brownian bridge and/or principal components analysis algorithms substantially reduce the truncation dimension of certain financial instruments.
Alternative linear transformations have been proposed in <cit.> to reduce the effective dimension of financial derivatives and improve the performance of Quasi-Monte Carlo methods. Other previously known variance reduction techniques have exploited the importance of certain variables or states. For instance, stratified sampling along important directions is used in pricing path-dependent options <cit.>. Importance sampling methods aim to increase the number of samples that hit an important set via a change of measure technique <cit.>. When d=2 and f(U_1,U_2) is more influenced by U_1 than by U_2, and the expected time to generate U_1 is much lower than the expected time to generate U_2, the splitting technique <cit.> simulates several independent copies of U_1 for each copy of U_2. <cit.> give the variance of the splitting estimator and the optimal number of copies of U_1, and show that the splitting technique is related to the conditional Monte Carlo method. Multilevel splitting techniques are often used for variance reduction in the estimation of rare event probabilities <cit.>. The idea is to split each path that reaches an important region into a number of subpaths in order to produce more paths that hit the rare event set. The rare event probability is then evaluated via a telescoping product. <cit.> analyse multilevel splitting techniques that estimate functionals of Markov chains with a discrete state space and of ergodic Markov chains in their steady state. <cit.> analyse the performance of multilevel splitting techniques for rare event estimation and give, under certain conditions, the optimal degree of splitting as the probability of the event goes to 0. Multilevel splitting methods have had many applications, such as the estimation of network reliability <cit.> and of rare events in Jackson networks <cit.>. Multilevel splitting techniques for rare event simulation with finite time constraints are analysed in <cit.>. A comprehensive survey on multilevel splitting techniques, with applications to rare event simulations, sampling from complicated distributions, Monte Carlo counting, and randomized optimization, can be found in <cit.>.

Another technique, the multilevel Monte Carlo (MLMC) method introduced in <cit.>, which relies on low-dimensional approximations of the function to be estimated, dramatically reduces the computational complexity of estimating an expected value arising from a stochastic differential equation. Related randomized multilevel methods that produce unbiased estimators for equilibrium expectations of functionals defined on homogeneous Markov chains have been provided in <cit.>. These methods apply to the class of positive Harris recurrent Markov chains, and to chains that are contracting on average. It is shown in <cit.> that similar randomized multilevel methods can be used to efficiently compute unbiased estimators for expectations of functionals of solutions to stochastic differential equations. The MLMC method has had numerous other applications (e.g., <cit.>).

The basic idea behind our algorithm is that, if f does not depend equally on all its arguments, the standard Monte Carlo method can be inefficient because it simulates all d arguments of f at each iteration. Assuming that the expected time needed to simulate f(U) is of order d and that the variance of f(U) is upper- and lower-bounded by constants, the expected time needed to achieve variance ϵ^2 by standard Monte Carlo simulation is Θ(dϵ^-2).
In contrast, our algorithm simulates at each iteration a random subset of the arguments of f, and reuses the remaining arguments from the previous iteration. Under certain conditions, we show that our algorithm estimates E(f(U)) with variance ϵ^2 in O(d+ϵ^-2) expected time. We also establish central limit theorems on the statistical error of our algorithm. When d=2, our method is very similar to the splitting technique in <cit.>. Our approach can thus be viewed as a multidimensional version of this technique. However, in contrast with existing multilevel splitting algorithms, where splitting decisions typically depend on the current state, in our method the arguments of f to be redrawn are independent of previously generated copies of U. In order to optimize the performance of our estimator, we minimize the asymptotic product of the variance and expected running time, in the same spirit as stratified sampling <cit.>, the splitting technique <cit.>, MLMC <cit.>, and related methods <cit.>. This minimization is performed via a new geometric algorithm that solves in O(d) time a d-dimensional optimisation problem. Our geometric algorithm is of independent interest and can be used to solve an optimization problem of the same type that was solved in <cit.> in O(d^3) time. We are not aware of other previous algorithms that solve this problem. This work extends the research in <cit.>, where the variance properties of the randomized estimator presented in this paper were announced without proof. Our method has the following features:
* Under certain conditions, it estimates E(f(U)) with variance ϵ^2 in O(d+ϵ^-2) expected time. We are not aware of any previous method that achieves, under the same conditions, such a tradeoff between the expected running time and accuracy. In contrast with stratified sampling, which can be performed in practice only along a small number of dimensions <cit.>, our method is targeted at high-dimensional problems. The efficiency of our method typically increases with d, even though it can be used in principle for any d≥2.
* It is easy to implement, does not make any continuity assumptions on f, nor does it require a detailed knowledge of the structure of f or U. In contrast, importance sampling, multilevel splitting, and multilevel Monte Carlo methods can achieve substantial variance reduction by exploiting the structure of the simulated model. The standard Quasi-Monte Carlo method does not necessitate a detailed knowledge of the model structure, but makes regularity assumptions on the function to be integrated, and its efficiency does not increase with d.
The rest of the paper is organized as follows. <ref> presents our generic randomized dimension reduction algorithm and analyses its performance. <ref> describes the aforementioned geometric algorithm and gives a numerical implementation of the randomized dimension reduction algorithm. <ref> provides applications to Markov chains. <ref> presents and analyses a deterministic version of our algorithm. <ref> compares our algorithm to a class of MLMC algorithms. <ref> gives numerical simulations. <ref> contains concluding remarks. Most proofs are contained in the appendix or the on-line supplement. The connection between our approach and the ANOVA decomposition and truncation dimension is studied in the appendix. The on-line supplement explores further the relation between our method, the splitting technique, and the conditional Monte Carlo method, and contains more numerical simulations.
§ THE GENERIC RANDOMIZED DIMENSION REDUCTION ALGORITHM

§.§ The algorithm description

We assume that all random variables in this paper are defined on the same probability space (Ω,ℱ,ℙ). Our algorithm estimates E(f(U)) by performing n iterations, where n is an arbitrary positive integer. The algorithm samples the first arguments of f more often than the last ones. It implicitly assumes that, roughly speaking, the importance of the i-th argument of f decreases with i. In many Markov chain examples, the last random variables are more important than the first ones, but our algorithm can still be used efficiently after re-ordering the random variables, as described in detail in <ref>. A general approach to rank input variables according to their importance is described in <cit.>, but we will not use such an approach in our examples. Let A={(q_0,…,q_d-1)∈ℝ^d : 1=q_0≥ q_1≥⋯≥ q_d-1>0}. Throughout the paper, q=(q_0,…,q_d-1) denotes an element of A. Our generic algorithm takes such a vector q as parameter. Let (N_k), k≥1, be a sequence of independent random integers in [1,d] such that ℙ(N_k>i)=q_i for 0≤ i≤ d-1 and k≥1. The algorithm simulates n copies V^(1),…,V^(n) of U and consists of the following steps:
* First iteration. Simulate a vector V^(1) that has the same distribution as U and calculate f(V^(1)).
* Loop. In iteration k+1, where 1≤ k≤ n-1, let V^(k+1) be the vector obtained from V^(k) by redrawing the first N_k components of V^(k), and keeping the remaining components unchanged. Calculate f(V^(k+1)).
* Output the average of f(V^(1)),…,f(V^(n)).
More formally, consider a sequence (U^(k)), k≥1, of independent copies of U such that the two sequences (N_k), k≥1, and (U^(k)), k≥1, are independent. Define the sequence (V^(k)), k≥1, in F^d as follows: V^(1)=U^(1) and, for k≥1, the first N_k components of V^(k+1) are the same as the corresponding components of U^(k+1), and the remaining components of V^(k+1) are the same as the corresponding components of V^(k). The algorithm then outputs f_n≜(f(V^(1))+⋯+f(V^(n)))/n. Note that f_n is an unbiased estimator of E(f(U)) since V^(k) d= U for 1≤ k≤ n.

§.§ Performance analysis

For ease of presentation, we ignore the time needed to generate N_k and the running time of the third step of the algorithm. For 1≤ i≤ d, let t_i be the expected time needed to generate V^(k+1) and calculate f(V^(k+1)) when N_k=i. Equivalently, t_i is the expected time needed to perform Step 2 of the algorithm when N_k=i. Thus, t_i is the expected time needed to re-draw the first i components of U and recalculate f(U), and t_d is the expected time needed to simulate f(U). By convention, t_0=0. We assume for simplicity that t_i is a strictly increasing function of i. In many examples (see <ref> and <ref>), it can be shown that t_i=O(i). As ℙ(N_k=i)=q_i-1-q_i for 1≤ i≤ d and k≥1, where q_d=0, the expected running time of a single iteration of our algorithm, excluding the first one, is equal to T, where T≜∑^d_i=1(q_i-1-q_i)t_i=∑^d-1_i=0q_i(t_i+1-t_i). For 0≤ i≤ d, define C(i)≜Var(E(f(U)|U_i+1,…,U_d)). Thus, C(0)=Var(f(U)), while C(d)=0, and we can interpret C(i) as the variance captured by the last d-i components of U. Note that if f depends only on its first i arguments, then f(U) is independent of (U_i+1,…,U_d), and so E(f(U)|U_i+1,…,U_d)=E(f(U)), which implies that C(i)=0. More generally, if the last d-i arguments of f are not important, the conditional expectation E(f(U)|U_i+1,…,U_d) is “almost" constant, and its variance C(i) should be small. Thus C(i)/C(0) can be used to measure the importance of the last d-i components of U.
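As a concrete illustration, here is a minimal, self-contained C++ sketch of the generic algorithm (C++ is also the language of our numerical experiments in <ref>). The test function and distributions below are illustrative placeholders only, not part of the formal description: f is the positive part of a weighted sum of independent Gaussians, in the spirit of the Lipschitz example of <ref>, and a running sum is updated incrementally so that the cost of an iteration with N_k=i is O(i). The choice q_i=1/(i+1) corresponds to the explicit distribution q_i=t_1/t_i+1 of <ref>, which is optimal up to a logarithmic factor when t_i is proportional to i.

#include <algorithm>
#include <cmath>
#include <iostream>
#include <random>
#include <vector>

int main() {
    const int d = 1000;    // number of arguments of f
    const int n = 100000;  // number of iterations
    std::mt19937_64 gen(12345);
    std::normal_distribution<double> normal(0.0, 1.0);
    std::uniform_real_distribution<double> unif(0.0, 1.0);

    // Parameter vector q in A, with q_0 = 1; here q_i = 1/(i+1).
    std::vector<double> q(d);
    for (int i = 0; i < d; ++i) q[i] = 1.0 / (i + 1);

    // Illustrative f(u) = max(sum_j sigma_j u_j, 0) with sigma_j = j^{-1.2},
    // so that the first arguments of f are the most important ones.
    std::vector<double> sigma(d);
    for (int j = 0; j < d; ++j) sigma[j] = std::pow(j + 1.0, -1.2);

    std::vector<double> v(d);  // current copy V^{(k)} of U
    double s = 0.0;            // running weighted sum of the components
    for (int j = 0; j < d; ++j) { v[j] = normal(gen); s += sigma[j] * v[j]; }
    double total = std::max(s, 0.0);  // f(V^{(1)})

    for (int k = 1; k < n; ++k) {
        // Draw N_k with P(N_k > i) = q_i: since q is non-increasing with
        // q_0 = 1, one may take N_k = #{i : q_i > W} for W uniform on (0,1).
        double w = unif(gen);
        int nk = int(std::upper_bound(q.begin(), q.end(), w,
                                      std::greater<double>()) - q.begin());
        for (int j = 0; j < nk; ++j) {  // redraw the first N_k components
            s -= sigma[j] * v[j];
            v[j] = normal(gen);
            s += sigma[j] * v[j];
        }
        total += std::max(s, 0.0);  // f(V^{(k+1)}), computed in O(N_k) time
    }
    std::cout << "estimate of E(f(U)): " << total / n << "\n";
    return 0;
}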
As shown in the appendix, when U is uniformly distributed on the d-dimensional unit cube [0,1]^d, this ratio coincides with a global sensitivity index for the subset {i+1,…,d}, defined in <cit.> in terms of the ANOVA decomposition of f. Proposition <ref> below shows that (C(i)), 0≤ i≤ d, is always a decreasing sequence, and gives an alternative expression for C(i), which can be viewed as a variant of Theorem 2 of <cit.>.

The sequence (C(i)), 0≤ i≤ d, is decreasing. If U'_1,…,U'_i are random variables such that U'_j d= U_j for 1≤ j≤ i, and U'_1,…,U'_i,U are independent, then C(i)=Cov(f(U),f(U'_1,…,U'_i,U_i+1,…,U_d)).

Theorem <ref> below establishes a formal relationship between the variance of f_n and the C(i)'s. Let ν^* be the vector of ℝ^d+1 with ν^*_0=C(0) and ν^*_i=2C(i) for 1≤ i≤ d.

For n≥1, n Var(f_n)≤∑^d-1_i=0(ν^*_i-ν^*_i+1)/q_i. Furthermore, the LHS of (<ref>) converges to its RHS as n goes to infinity.

As, for ν=(ν_0,…,ν_d)∈ℝ^d×{0}, ∑^d-1_i=0(ν_i-ν_i+1)/q_i=ν_0+∑^d-1_i=1ν_i(1/q_i-1/q_i-1), the RHS of (<ref>) is a weighted combination of the C(i)'s, with positive weights. Thus, the smaller the C(i)'s, the smaller the RHS of (<ref>). Furthermore, as C(i) is the variance of the conditional expectation E(f(U)|U_i+1,…,U_d), which can be considered as a smoothed version of f(U), we expect our algorithm to be resilient to discontinuities of f. Denote by ℝ_+ the set of nonnegative real numbers. For q∈ A, ϑ=(ϑ_0,…,ϑ_d)∈{0}×ℝ_+^d, and ν=(ν_0,…,ν_d)∈ℝ_+^d×{0}, set R(q;ϑ,ν)=(∑^d-1_i=0q_i(ϑ_i+1-ϑ_i))(∑^d-1_i=0(ν_i-ν_i+1)/q_i). The expected time needed to perform n iterations of the algorithm, including the first one, is T_n=(n-1)T+t_d. Theorem <ref> and (<ref>) imply that T_n Var(f_n) converges to R(q;t,ν^*) as n goes to infinity, where t=(t_0,…,t_d). By (<ref>), R(q;ϑ,ν) is an increasing function with respect to ν, i.e. R(q;ϑ,ν)≤ R(q;ϑ,ν') for ν≤ν', where the symbol ≤ between vectors represents componentwise inequality. Similarly, it is easy to see that R(q;ϑ,ν) is increasing with respect to ϑ. Let T^tot(q,ϵ) be the total expected time it takes for our algorithm to guarantee that Std(f_n)≤ϵ. Corollary <ref> below gives an upper bound on T^tot(q,ϵ) in terms of R(q;t,ν^*). It also implies that, if R(q;t,ν^*) is upper-bounded by a constant independent of d, and Var(f(U))=Θ(1), then our algorithm outperforms the standard Monte Carlo algorithm by a factor of order t_d. More precisely, running our algorithm for n=⌈t_dT^-1⌉ iterations has the same expected cost, up to a constant, as a single iteration of the standard Monte Carlo method, but produces an unbiased estimator of E(f(U)) with O(1/t_d) variance.

For ϵ>0, T^tot(q,ϵ)≤ t_d+R(q;t,ν^*)ϵ^-2. Furthermore, if n=⌈t_dT^-1⌉, the expected running time of n iterations of the algorithm is at most 2t_d, and Var(f_n)≤ R(q;t,ν^*)/t_d.

Proof. Theorem <ref> and (<ref>) imply that n Var(f_n)T≤ R(q;t,ν^*). Thus, Std(f_n)≤ϵ for n=⌈R(q;t,ν^*)T^-1ϵ^-2⌉. The expected time needed to calculate f_n is T_n, which is upper-bounded by t_d+R(q;t,ν^*)ϵ^-2 since n-1≤ R(q;t,ν^*)T^-1ϵ^-2. Hence (<ref>). On the other hand, if n=⌈t_dT^-1⌉, then T_n≤2t_d since (n-1)T≤ t_d, and (<ref>) holds since nT≥ t_d.

Theorem <ref> below establishes a central limit theorem on f_n. It also establishes a central limit theorem on the estimate of E(f(U)) that can be obtained with a computational budget c, using the framework described by <cit.>. Denote by Ñ(c) the number of iterations generated by our algorithm in c units of computation time. In other words, Ñ(c) is the maximum integer n such that f_n is calculated within c time (with f_0≜0).
As the time to calculate f_n is random, Ñ(c) is a random integer. Let ⇒ denote weak convergence (see <cit.>).

As n→∞, √(n)(f_n-E(f(U)))⇒ N(0,σ^2), where σ^2=∑^d-1_i=0(ν^*_i-ν^*_i+1)/q_i. Furthermore, as c→∞, √(c)(f_Ñ(c)-E(f(U)))⇒ N(0,R(q;t,ν^*)).

In light of the above, we will use R(q;t,ν^*) to measure the performance of our algorithm. The smaller the C(i)'s and t_i's, the smaller R(q;t,ν^*), and the better the performance of our algorithm. Proposition <ref> below shows that C(i) is small if f is well-approximated by a function of its first i arguments.

For 1≤ i≤ d, if f_i is a measurable function from F^i to ℝ such that f_i(U_1,…,U_i) is square-integrable, then C(i)≤Var(f(U)-f_i(U_1,…,U_i)).

§.§ Explicit and semi-explicit distributions

An optimal choice for q is one that minimizes R(q;t,ν^*). A numerical algorithm that performs such a minimization is presented in <ref>. This subsection gives explicit or semi-explicit choices for q, with corresponding upper bounds on R(q;t,ν^*). Proposition <ref> below gives upper bounds on R(q;t,ν^*) if t_i=O(i) and (C(i)) decreases at a sufficiently high rate. It implies in particular that, if t_i=O(i) and C(i)=O((i+1)^γ) with γ<-1, then T^tot(q,ϵ)=O(d+ϵ^-2).

Assume that d≥2 and that there are constants c, c', and γ<0, independent of d, such that t_i≤ ci and C(i)≤ c'(i+1)^γ for 0≤ i≤ d. Then, for q_i=(i+1)^(γ-1)/2, 0≤ i≤ d-1, there is a constant c_1 independent of d such that R(q;t,ν^*)≤ c_1 if γ<-1, R(q;t,ν^*)≤ c_1 ln^2(d) if γ=-1, and R(q;t,ν^*)≤ c_1 d^γ+1 if -1<γ<0. Moreover, there is a constant c_2 independent of d such that T^tot(q,ϵ)≤ c_2(d+ϵ^-2) if γ<-1, T^tot(q,ϵ)≤ c_2(d+ln^2(d)ϵ^-2) if γ=-1, and T^tot(q,ϵ)≤ c_2(d+d^γ+1ϵ^-2) if -1<γ<0.

Below is a simple example where C(0)=1 and the C(i)'s do not meet the conditions of Proposition <ref>. Suppose that F=ℝ and that U_1,…,U_d are square-integrable real-valued random variables with unit variance. Assume that f(x_1,…,x_d)=d^-1/2(∑^d_j=1x_j) for (x_1,…,x_d)∈ℝ^d. As f depends equally on its arguments, our algorithm does not improve upon the standard Monte Carlo method. Since E(f(U)|U_i+1,…,U_d)=d^-1/2(E(U_1+⋯+U_i)+U_i+1+⋯+U_d), C(i)=(d-i)/d. The conditional variance C(i) decreases very slowly with i since C(d/2) has the same order of magnitude as C(0). Thus the C(i)'s do not meet the conditions of Proposition <ref>.

When upper bounds on the C(i)'s and t_i's satisfying a convexity condition are known, Proposition <ref> below gives an explicit vector q together with an upper bound on R(q;t,ν^*).

Assume that t_i≤ϑ_i for 0≤ i≤ d, where ϑ_0,…,ϑ_d is a strictly increasing sequence with ϑ_0=0. Assume further that ν_0,…,ν_d-1 are positive real numbers such that C(i)≤ν_i for 0≤ i≤ d-1, and that the sequence θ_i=(ν_i+1-ν_i)/(ϑ_i+1-ϑ_i), 0≤ i≤ d-1, is increasing (by convention, ν_d=0). Then, for q_i=√(θ_i/θ_0), 0≤ i≤ d-1, R(q;t,ν^*)≤2(∑_i=0^d-1√((ν_i-ν_i+1)(ϑ_i+1-ϑ_i)))^2.

Proof. We first observe that θ_i≤θ_d-1<0 for 0≤ i≤ d-1. Thus q is well-defined and belongs to A. Let ϑ=(ϑ_0,…,ϑ_d). As t≤ϑ and ν^*≤2ν, and since R(q;.,.) is increasing with respect to its second and third arguments, we have R(q;t,ν^*)≤ R(q;ϑ,2ν). A direct calculation, using the relations ν_i-ν_i+1=-θ_i(ϑ_i+1-ϑ_i) and q_i=√(θ_i/θ_0), shows that R(q;ϑ,2ν) equals the right-hand side of (<ref>). This concludes the proof.

Proposition <ref> below yields an upper bound on √(R(q;t,ν^*)) in terms of a weighted sum of the square roots of the C(i)'s, for a semi-explicit vector q.

Assume that C(d-1)>0. If, for 0≤ i≤ d-1, q_i=√(t_1C(i)/(t_i+1C(0))), then R(q;t,ν^*)≤8(∑^d-1_i=0(√(t_i+1)-√(t_i))√(C(i)))^2.

Proposition <ref> below gives an explicit distribution which is optimal up to a logarithmic factor, without requiring any prior knowledge of the C(i)'s.

For any q∈ A, R(q;t,ν^*)≥∑^d-1_i=0C(i)(t_i+1-t_i).
Furthermore, if q_i=t_1/t_i+1 for 0≤ i≤ d-1, then R(q;t,ν^*)≤2(1+ln(t_d/t_1))∑^d-1_i=0C(i)(t_i+1-t_i).

§.§ A Lipschitz function example

Assume that F=ℝ and that U_1,…,U_d are square-integrable real-valued random variables, with σ_1≥⋯≥σ_d>0, where σ_i is the standard deviation of U_i. Assume also that f(x_1,…,x_d)=g(∑^d_j=1x_j) for (x_1,…,x_d)∈ℝ^d, where g is a real-valued 1-Lipschitz function on ℝ that can be calculated in constant time. For instance, f(x_1,…,x_d)=max(∑^d_j=1x_j-K,0), where K is a constant, satisfies this condition. Assume further that each U_i can be simulated in constant time. For 1≤ k≤ n, let S_k be the sum of all components of V^(k). Thus S_k+1 can be calculated recursively in O(N_k) time by adding to S_k the first N_k components of V^(k+1) and subtracting the first N_k components of V^(k). Hence t_i≤ ci, for some constant c. In order to bound the C(i)'s, we show that f(U) can be approximated by f_i(U_1,…,U_i), where f_i(x_1,…,x_i)=g(∑^i_j=1x_j+∑^d_j=i+1E(U_j)) for (x_1,…,x_i)∈ℝ^i. Let ||Z||=√(E(Z^2)) for a real-valued random variable Z. By Proposition <ref>, C(i)≤||f(U)-f_i(U_1,…,U_i)||^2≤||∑^d_j=i+1(U_j-E(U_j))||^2=Var(∑^d_j=i+1U_j)=∑^d_j=i+1σ_j^2. The second inequality follows from the assumption that g is 1-Lipschitz. By applying Proposition <ref>, with ϑ_i=ci and ν_i=∑^d_j=i+1σ_j^2, and setting q_i=σ_i+1/σ_1, 0≤ i≤ d-1, we infer that R(q;t,ν^*)≤2c(∑^d_i=1σ_i)^2. Thus, if σ_i=O(i^γ), with γ<-1, then R(q;t,ν^*)=O(1) and T^tot(q,ϵ)=O(d+ϵ^-2). Also, by (<ref>) and a standard calculation, C(i)=O((i+1)^2γ+1) for 0≤ i≤ d. As 2γ+1<-1, Proposition <ref> is also applicable in this case.

§.§.§ A polynomial counter-example

Assume that f(x_1,…,x_d)=∑^d_j=1x_jx_1^j. In general, in a standard implementation, t_1=Ω(d), since calculating f(U) after redrawing U_1 typically takes Θ(d) steps. Thus, T=Ω(d) for any q∈ A. On the other hand, by (<ref>), the RHS of (<ref>) is lower-bounded by Var(f(U)). Thus, R(q;t,ν^*)=Ω(d Var(f(U))), which corresponds to the time-variance product of the standard Monte Carlo algorithm. Therefore, a standard implementation of the randomized dimension reduction algorithm does not outperform the standard Monte Carlo algorithm in this example.

§ THE OPTIMAL DISTRIBUTION

We now seek to calculate a vector q that minimizes R(q;t,ν^*). Given a vector ν in ℝ^d×{0} whose first d components are positive, Theorem <ref> below gives a geometric algorithm that finds in O(d) time a vector q^* that minimizes R(q;t,ν) under the constraint that q∈ A. Note that the vector q^* depends on ν. In <cit.>, a dynamic programming algorithm that calculates such a vector q^* in O(d^3) time has been described. Let ν'=(ν'_0,…,ν'_d)∈ℝ^d+1 be such that the set {(t_i,ν'_i) : 0≤ i≤ d} forms the lower hull of the set {(t_i,ν_i) : 0≤ i≤ d}. In other words, ν' is the supremum of all sequences in ℝ^d+1 such that ν'≤ν and the sequence (θ_i) is increasing, where θ_i=(ν'_i+1-ν'_i)/(t_i+1-t_i), 0≤ i≤ d-1. For instance, if d=6, with t_i=i and ν=(20,21,13,8,7,2,0), then ν'=(20,16,12,8,5,2,0), as illustrated in Fig. <ref>. <ref> shows how to calculate ν' in O(d) time.

Let ν be a vector in ℝ^d×{0} whose first d components are positive. For 0≤ i≤ d-1, set q^*_i=√(θ_i/θ_0), where θ_i is given by (<ref>), and let q^*=(q^*_0,…,q^*_d-1). Then R(q^*;t,ν)=min_q∈ A R(q;t,ν), and R(q^*;t,ν)=(∑_i=0^d-1√((ν'_i-ν'_i+1)(t_i+1-t_i)))^2.

§.§ Lower hull calculation

Given ν, the following algorithm, due to <cit.>, first generates recursively a subset B(j) of {1,…,d}, 2≤ j≤ d, then calculates ν' via B(d); a C++ sketch of this construction is given at the end of this section. The algorithm runs in O(d) time.
* Set B(2)={1,2}.
* For j=3 to d, denote by i_1<⋯<i_m the elements of B(j-1). Let k be the largest element of {2,…,m} such that (t_i_k,ν_i_k) lies below the segment [(t_i_k-1,ν_i_k-1),(t_j,ν_j)], if such a k exists; otherwise let k=1. Set B(j)={i_1,…,i_k,j}.
* For i=1 to d, let i' and i'' be two consecutive elements of B(d) with i'≤ i≤ i''. Set ν'_i so that (t_i,ν'_i) lies on the segment [(t_i',ν_i'),(t_i'',ν_i'')].

§.§ Estimating the C(i)'s

The calculation of a vector q^*∈ A that minimizes R(q;t,ν^*) requires the knowledge of the C(i)'s. Proposition <ref> below can be used to estimate C(i) via Monte Carlo simulation. Assuming that f(U) can be approximated by a function of its first i arguments, we expect both factors of the product in the RHS of (<ref>) to be small, on average. Thus, (<ref>) can be considered as a “control variate" version of (<ref>), and should yield a more accurate estimate of C(i) via Monte Carlo simulation for large values of i.

Assume that U'_1,…,U'_d, and U''_i+1,…,U''_d, are random variables such that U'_j d= U_j for 1≤ j≤ d, and U''_j d= U_j for i+1≤ j≤ d, and U'_1,…,U'_d,U,U''_i+1,…,U''_d are independent. Then C(i)=E((f(U)-f(U_1,…,U_i,U'_i+1,…,U'_d))(f(U'_1,…,U'_i,U_i+1,…,U_d)-f(U'_1,…,U'_i,U''_i+1,…,U''_d))).

§.§ Numerical algorithm

Building upon the previously discussed elements, the algorithm that we have used for our numerical experiments is as follows. It constructs a vector (ν_0,…,ν_d) and uses it as a proxy for ν^*.
* For i=0 to d-1, if i+1 is a power of 2, estimate C(i) by Monte Carlo simulation with 1000 samples via Proposition <ref>.
* Set ν_0=C(0) and ν_d=0. For 1≤ i≤ d-1, let ν_i=2C(j), where j is the largest index in [0,i] such that j+1 is a power of 2.
* For i=d-1 down to 1, set ν_i←max(ν_i,ν_i+1). Set ν_0←max(ν_0,ν_1/2).
* Let (ν'_0,…,ν'_d)∈ℝ^d+1 be such that the set {(t_i,ν'_i) : 0≤ i≤ d} forms the lower hull of the set {(t_i,ν_i) : 0≤ i≤ d}. For 0≤ i≤ d-1, set q_i=√(θ_i/θ_0), where θ_i is given by (<ref>).
* Calculate T via (<ref>). For 0≤ i≤ d-1, set q_i←min(1,max(q_i,T/(t_i+1ln(t_d/t_1)))).
* Run Steps 1 through 3 of the generic randomized dimension reduction algorithm of <ref> using q.
The purpose of Steps 3 and 5 is to reduce the impact on q of statistical errors that arise in Step 1. Because of statistical errors, Step 1 may underestimate or overestimate the C(i)'s. Step 3 guarantees that the ν_i's are non-negative, so that the q_i's can be calculated in Step 4. Step 5 yields a cap on the RHS of (<ref>) by ensuring that the q_i's are not too small. Using (<ref>) and the proof of Proposition <ref>, and assuming that t_d≥2t_1, it can be shown that Step 5 increases T by at most a constant multiplicative factor. An alternative way to implement our algorithm is to skip Steps 1 through 5 and run the generic algorithm with q_i=t_1/t_i+1 for 0≤ i≤ d-1. By Proposition <ref>, the resulting vector q is optimal up to a logarithmic factor.
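The following minimal C++ sketch implements Steps 4 and 5 of the numerical algorithm above, including the O(d) lower-hull construction of <ref> (written here as a standard monotone-chain scan, which is equivalent to the recursion on the sets B(j)). The function name and the assumptions on the inputs, namely t_0=0<t_1<⋯<t_d and ν_0≥⋯≥ν_d-1>ν_d=0, are ours; the vector it returns is the parameter q of the generic algorithm.

#include <algorithm>
#include <cmath>
#include <vector>

// Steps 4 and 5 of the numerical algorithm: given t_0 < ... < t_d (with
// t_0 = 0) and a non-increasing proxy nu_0 >= ... >= nu_d = 0 for nu^*,
// compute the lower hull nu' of the points (t_i, nu_i), the slopes
// theta_i, and the vector q with q_i = sqrt(theta_i / theta_0).
std::vector<double> computeQ(const std::vector<double>& t,
                             const std::vector<double>& nu) {
    const int d = int(t.size()) - 1;
    std::vector<int> hull;  // indices of lower-hull vertices
    for (int j = 0; j <= d; ++j) {
        // Pop the last vertex while it lies on or above the segment
        // joining the previous vertex to (t_j, nu_j).
        while (hull.size() >= 2) {
            int a = hull[hull.size() - 2], b = hull.back();
            double cross = (t[b] - t[a]) * (nu[j] - nu[a])
                         - (t[j] - t[a]) * (nu[b] - nu[a]);
            if (cross <= 0.0) hull.pop_back(); else break;
        }
        hull.push_back(j);
    }
    std::vector<double> nuPrime(d + 1);  // interpolate nu' along the hull
    for (std::size_t h = 0; h + 1 < hull.size(); ++h) {
        int a = hull[h], b = hull[h + 1];
        double slope = (nu[b] - nu[a]) / (t[b] - t[a]);
        for (int i = a; i <= b; ++i)
            nuPrime[i] = nu[a] + slope * (t[i] - t[a]);
    }
    // Step 4: q_i = sqrt(theta_i/theta_0); theta_0 < 0 since nu_{d-1} > 0.
    std::vector<double> q(d);
    double theta0 = (nuPrime[1] - nuPrime[0]) / (t[1] - t[0]);
    for (int i = 0; i < d; ++i) {
        double thetai = (nuPrime[i + 1] - nuPrime[i]) / (t[i + 1] - t[i]);
        q[i] = std::sqrt(thetai / theta0);
    }
    // Step 5: floor the q_i's so they are not too small, then cap at 1.
    double T = 0.0;
    for (int i = 0; i < d; ++i) T += q[i] * (t[i + 1] - t[i]);
    const double logRatio = std::log(t[d] / t[1]);  // assumes t_d > t_1
    for (int i = 0; i < d; ++i)
        q[i] = std::min(1.0, std::max(q[i], T / (t[i + 1] * logRatio)));
    return q;
}

On the example of Fig. <ref> (t_i=i and ν=(20,21,13,8,7,2,0)), the hull scan above selects the vertices with indices 0, 3, 5, 6 and produces ν'=(20,16,12,8,5,2,0), as stated in <ref>.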
§ APPLICATIONS TO MARKOV CHAINS

In queueing systems, the performance metrics at a specific time instant are heavily dependent on the last busy cycle, i.e., the events that occurred after the queue was empty for the last time. Thus, the performance metrics depend much more on the last random variables driving the system than on the initial ones. Nevertheless, we can apply our algorithm to queueing systems by using a time-reversal transformation inspired from <cit.>. More generally, using such a time-reversal transformation, this section shows that our algorithm can efficiently estimate the expected value of a function of the state of a Markov chain at time-step d, for a class of Markov chains driven by independent random variables. Let (X_m), 0≤ m≤ d, be a Markov chain with state space F' and deterministic initial value X_0. Assume that there are independent random variables Y_i, 0≤ i≤ d-1, that take values in F, and measurable functions g_i from F'×F to F', such that X_i+1=g_i(X_i,Y_i) for 0≤ i≤ d-1. We want to estimate E(g(X_d)) for a given positive integer d, where g is a deterministic real-valued measurable function on F' such that g(X_d) is square-integrable. For 1≤ i≤ d, set U_i=Y_d-i. It can be shown by induction that X_d=G_i(U_1,…,U_i,X_d-i), where G_i, 0≤ i≤ d, is a measurable function from F^i×F' to F', and so there is a real-valued measurable function f on F^d with g(X_d)=f(U_1,…,U_d). We can thus use our randomized dimension reduction algorithm to estimate E(g(X_d)). Recall that, in iteration k+1 in Step 2 of the generic algorithm of <ref>, conditioning on N_k=i, the first i arguments of f are re-drawn, and the remaining arguments are unchanged. This is equivalent to re-drawing the last i random variables driving the Markov chain, and keeping the first d-i variables unchanged. In light of the above, the generic randomized dimension reduction algorithm for Markov chain estimation takes as parameter a vector q∈ A and consists of the following steps:
* First iteration. Generate recursively X_0,…,X_d. Calculate g(X_d).
* Loop. In iteration k+1, where 1≤ k≤ n-1, keep X_0,…,X_d-N_k unchanged, and calculate recursively X_d-N_k+1,…,X_d by re-drawing Y_d-N_k,…,Y_d-1, where N_k is a random integer in [1,d] such that ℙ(N_k>i)=q_i. Calculate g(X_d).
* Output the average of g over the n copies of X_d generated in the first two steps.
We assume that g and the g_i's can be calculated in constant time, and that the expected time needed to simulate each Y_i is upper-bounded by a constant independent of d. Thus, given N_k, the expected time needed to perform iteration k+1 is O(N_k). Hence t_i≤ ci, for some constant c independent of d. Proposition <ref> below shows that, roughly speaking, C(i) is small if X_d-i and X_d are “almost" independent.

For 0≤ i≤ d, we have C(i)=Var(E(g(X_d)|X_d-i)).

By Proposition <ref>, if there are constants c'>0 and γ<-1 independent of d such that C(i)≤ c'(i+1)^γ for 0≤ i≤ d-1, then R(q;t,ν^*) is upper-bounded by a constant independent of d, where q_i=(i+1)^(γ-1)/2 for 0≤ i≤ d-1. The analysis in <cit.>, combined with Proposition <ref>, suggests that C(i) decreases exponentially with i for a variety of Markov chains. For x∈ F' and 0≤ i≤ d, let X_i,x=G_i(U_1,…,U_i,x). In other words, X_i,x is the state of the chain at time-step d if the chain is at state x at time-step d-i. Intuitively, we expect X_i,x to be close to X_d for large i if X_d depends mainly on the last Y_j's. By Proposition <ref>, if g(X_i,x) is square-integrable, C(i)≤||g(X_d)-g(X_i,x)||^2. In the following examples, we prove that, under certain conditions, R(q;t,ν^*) is upper-bounded by a constant independent of d for an explicit vector q∈ A, and so T^tot(q,ϵ)=O(d+ϵ^-2).
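To make the time-reversal bookkeeping concrete, here is a minimal C++ sketch of the three steps above. It stores the trajectory X_0,…,X_d together with the driving variables Y_0,…,Y_d-1, so that an iteration with N_k=i only redraws Y_d-i,…,Y_d-1 and recomputes X_d-i+1,…,X_d in O(i) time. The transition used below (a hypothetical AR(1)-type recursion) and the threshold in g are illustrative placeholders; any g_i, g, and Y-distribution satisfying the assumptions of this section can be substituted, and q is again the explicit choice q_i=t_1/t_i+1 of <ref>.

#include <algorithm>
#include <iostream>
#include <random>
#include <vector>

int main() {
    const int d = 1000, n = 100000;
    std::mt19937_64 gen(42);
    std::normal_distribution<double> noise(0.0, 1.0);
    auto step = [](double x, double y) { return 0.9 * x + y; };  // g_i (placeholder)
    auto g = [](double x) { return x > 2.0 ? 1.0 : 0.0; };       // g (placeholder)

    // q_i = 1/(i+1): optimal up to a logarithmic factor when t_i = O(i).
    std::vector<double> q(d);
    for (int i = 0; i < d; ++i) q[i] = 1.0 / (i + 1);

    std::vector<double> x(d + 1), y(d);  // trajectory and driving variables
    x[0] = 0.0;                          // deterministic initial value X_0
    for (int i = 0; i < d; ++i) { y[i] = noise(gen); x[i + 1] = step(x[i], y[i]); }
    double total = g(x[d]);              // first iteration

    std::uniform_real_distribution<double> unif(0.0, 1.0);
    for (int k = 1; k < n; ++k) {
        double w = unif(gen);  // N_k = #{i : q_i > W}, so P(N_k > i) = q_i
        int nk = int(std::upper_bound(q.begin(), q.end(), w,
                                      std::greater<double>()) - q.begin());
        for (int i = d - nk; i < d; ++i) {  // redraw Y_{d-N_k}, ..., Y_{d-1}
            y[i] = noise(gen);
            x[i + 1] = step(x[i], y[i]);    // recompute X_{d-N_k+1}, ..., X_d
        }
        total += g(x[d]);
    }
    std::cout << "estimate of E(g(X_d)): " << total / n << "\n";
    return 0;
}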
§.§ GARCH volatility model

In the GARCH(1,1) volatility model (see <cit.>), the variance X_i of an index return between day i and day i+1, as estimated at the end of day i, satisfies the following recursion: X_i+1=w+αX_iY_i^2+βX_i, i≥0, where w, α and β are positive constants with α+β<1, and Y_i, i≥0, are independent standard Gaussian random variables. The variable Y_i is known at the end of day i+1. At the end of day 0, given X_0≥0, a positive integer d, and a real number z, we want to estimate ℙ(X_d>z). In this example, F=F'=ℝ and g_i(x,y)=w+αxy^2+βx, with g(u)=1{u>z} for u∈ℝ. Proposition <ref> below shows that C(i) decreases exponentially with i.

There is a constant κ independent of d such that C(i)≤κ(α+β)^i/2 for 0≤ i≤ d-1.

By applying Proposition <ref> with ϑ_i=ci and ν_i=κ(α+β)^i/2, and setting q_i=(α+β)^i/4 for 0≤ i≤ d-1, we infer that R(q;t,ν^*) is upper-bounded by a constant independent of d.

§.§ G_t/D/1 queue

Consider a queue where customers arrive at time-step i, 1≤ i≤ d, and are served by a single server in order of arrival. Service times are all equal to 1. Assume the system starts empty at time-step 0, and that A_i customers arrive at time-step i, 0≤ i≤ d, where A_0=0 and the A_i's are independent square-integrable random variables. Let X_i be the number of customers waiting in the queue at time-step i. Then X_0=0 and (X_i) satisfies the Lindley equation X_i+1=(X_i+Y_i)^+ for 0≤ i≤ d-1, with Y_i=A_i+1-1. We want to estimate E(X_d). In this example, g is the identity function, F=F'=ℝ, and g_i(x,y)=(x+y)^+. Proposition <ref> below shows that C(i) decreases exponentially with i under certain conditions on the arrival process.

If there are constants γ>0 and κ<1 independent of d such that E(e^γY_i)≤κ for 0≤ i≤ d-1, then C(i)≤γ'κ^i for 0≤ i≤ d-1, where γ' is a constant independent of d.

By applying Proposition <ref> with ϑ_i=ci and ν_i=γ'κ^i, and setting q_i=κ^i/2 for 0≤ i≤ d-1, we conclude that, under the assumption of Proposition <ref>, R(q;t,ν^*) is upper-bounded by a constant independent of d. The assumption in Proposition <ref> can be justified as follows. Given i∈[0,d-1], if E(A_i+1)<1 and the function h(γ)=E(e^γY_i) is bounded on a neighborhood of 0, then h'(0)=E(Y_i)<0. As h(0)=1, there is γ>0 such that h(γ)<1, and (<ref>) holds for κ=h(γ). The assumption in Proposition <ref> says that γ and κ can be chosen independently of i and of d.

§.§ M_t/GI/1 queue

Consider an M_t/GI/1 queue where customers are served by a single server in order of arrival. We assume that customers arrive according to a Poisson process with positive and continuous time-varying rate λ_s≤λ^*, where λ^* is a fixed positive real number. The service times are assumed to be i.i.d. and independent of the arrival times. Assume that the system starts empty at time 0. For simplicity, we assume that the number of customers that arrive in any bounded time interval is finite (rather than finite with probability 1). Consider a customer present in the system at a given time s. If the customer has been served for a period of length τ, its remaining service time is equal to its service time minus τ, and if the customer is in the queue, its remaining service time is equal to its service time. The residual work W_s at time s is defined as the sum of the remaining service times of the customers present in the system at s. We want to estimate the expectation of W_θ, where θ is a fixed time. Let d=⌈λ^*θ⌉, and assume that d≥2. For 0≤ i≤ d, let X_i=W_iθ/d be the residual work at time iθ/d.
For 0≤ i≤ d-1, let Y_i be the vector that consists of the arrival and service times of the customers that arrive during the interval (iθ/d,(i+1)θ/d]. In this example, g is the identity function, F is equal to the set of real-valued sequences with finite support, and F'=ℝ. Let 0≤ s<s'. If no customers arrive in (s,s'] then W_s'=(W_s-s'+s)^+. On the other hand, if no customers arrive in (s,s') and a customer with service time S arrives at s', then W_s'=S+(W_s-s'+s)^+. Thus, given the set of arrival and service times of the customers that arrive in (s,s'], we can calculate W_s' iteratively from W_s. This implies that X_i+1 is a deterministic measurable function of X_i and Y_i, for 0≤ i≤ d-1. Proposition <ref> below shows that C(i) decreases exponentially with i under certain conditions on the arrival and service times.

For 0≤ s≤θ, let Z_θ(s) be the cumulative service time of the customers that arrive in [s,θ]. Assume there are constants γ>0 and κ<1 independent of d such that, for 0≤ s≤ s'≤θ with s'-s≤1/λ^*, E(e^γ(Z_θ(s)-Z_θ(s')-1/λ^*))≤κ. Then C(i)≤γ'κ^i/2 for 0≤ i≤ d-1, where γ' is a constant independent of d.

By applying Proposition <ref> with ϑ_i=ci and ν_i=γ'κ^i/2, and setting q_i=κ^i/4 for 0≤ i≤ d-1, we conclude that, under the assumption of Proposition <ref>, R(q;t,ν^*) is upper-bounded by a constant independent of d. The assumption in Proposition <ref> can be justified as follows. For 0≤ s≤ s'≤θ with s'-s≤1/λ^*, the cumulative service time of the customers that arrive in [s,s') is Z_θ(s)-Z_θ(s'). If E(Z_θ(s)-Z_θ(s'))<s'-s and h(γ)=E(e^γ(Z_θ(s)-Z_θ(s')-1/λ^*)) is bounded on a neighborhood of 0, then h'(0)<0. Thus h(γ)<1 for some γ>0, and (<ref>) holds for κ=h(γ). The assumption in Proposition <ref> says that γ and κ can be chosen independently of d, s, and s'.

§.§ Time-dependent contracting Markov chains

In <cit.>, an algorithm that gives unbiased estimators for equilibrium expectations associated with real-valued functionals defined on time-homogeneous contracting Markov chains is described. This subsection defines the related notion of a time-dependent contracting Markov chain and shows that our algorithm can be applied efficiently to such chains.

§ DETERMINISTIC DIMENSION REDUCTION

This section studies a deterministic dimension reduction algorithm that performs the same steps as the generic randomized dimension reduction algorithm of <ref>, but uses a deterministic integral sequence (N_k), k≥1, taking values in [1,d], to estimate E(f(U)). As for the randomized algorithm, denote by f_n the output of the deterministic dimension reduction algorithm, and by T_n its expected running time, where n is the number of iterations, including the first one. The sequence (N_k), k≥1, may depend on n. As the algorithm generates n copies of U, the random variable f_n is an unbiased estimator of E(f(U)). Assume that C(d-1)>0, and let q̂ be a vector in A that minimizes R(q;t,C) over A, where C denotes the vector (C(0),…,C(d)); the existence of q̂ follows from Theorem <ref>. Define the integers μ_0,…,μ_d-1 recursively as follows. Let μ_0=1 and, for 1≤ i≤ d-1, let μ_i be the largest multiple of μ_i-1 in the interval [0,1/q̂_i], i.e. μ_i=μ_i-1⌊1/(μ_i-1q̂_i)⌋. It can be shown by induction that μ_i is well-defined and positive. Let q̅ be the vector in A defined by q̅_i=1/μ_i for 0≤ i≤ d-1. As ⌊x⌋≤ x<2⌊x⌋ for x≥1, we have μ_i≤1/q̂_i<2μ_i. It follows that, for 0≤ i≤ d-1, q̂_i≤q̅_i<2q̂_i. Define the sequence (N̅_k), k≥1, as follows: N̅_k=max{i∈[1,d] : k is a multiple of μ_i-1}. As μ_0=1, such an i always exists. For instance, if d=3 and (μ_0,μ_1,μ_2)=(1,2,4), then (N̅_k), k≥1, is the periodic sequence 1,2,1,3,1,2,1,3,…. For 0≤ i≤ d-1 and k≥1, if N̅_k=j with j>i, then k is a multiple of μ_j-1, and so k is a multiple of μ_i, since μ_j-1/μ_i is an integer.
Conversely, if k is a multiple of μ_i then, by construction, N̅_k>i. Hence N̅_k>i if and only if k≡0 (mod μ_i). Given i∈[0,d-1], the inequality N̅_k>i occurs exactly once as k ranges over a set of μ_i consecutive positive integers. The sequence (N̅_k) can thus be considered as a deterministic counterpart to the random sequence (N_k) generated by the randomized dimension reduction algorithm when q=q̅. Theorem <ref> below gives a lower bound on the performance of the deterministic dimension reduction algorithm for any sequence (N_k), k≥1, and analyses the algorithm when N_k=N̅_k for k≥1.

For n≥1 and any deterministic sequence (N_k), k≥1, T_n Var(f_n)≥ R(q̂;t,C). If N_k=N̅_k for k≥1 then, for n≥1, T_n=t_d+∑^d-1_i=0⌊(n-1)q̅_i⌋(t_i+1-t_i), and n Var(f_n)≤∑^d-1_i=0(C(i)-C(i+1))/q̅_i. Furthermore, the LHS of (<ref>) converges to its RHS as n goes to infinity.

Using again the framework of <cit.>, we measure the performance of an estimator via the work-normalized variance, i.e. the product of the variance and the expected running time. If N_k=N̅_k for k≥1 then, by Theorem <ref>, T_n Var(f_n)→ R(q̅;t,C) as n goes to infinity. Furthermore, it follows from (<ref>) and (<ref>) that R(q̅;t,C)≤2R(q̂;t,C). Thus, up to a factor of 2, the sequence (N̅_k) asymptotically minimizes the work-normalized variance of the deterministic dimension reduction algorithm. Moreover, by definition of q̂, and since C≤ν^*, R(q̂;t,C)≤ R(q^*;t,C)≤ R(q^*;t,ν^*), where q^* is a vector in A that minimizes R(q;t,ν^*) over A. Hence R(q̅;t,C)≤2R(q^*;t,ν^*). Similarly, as ν^*≤2C, R(q^*;t,ν^*)≤ R(q̅;t,ν^*)≤2R(q̅;t,C). Thus, the asymptotic work-normalized variances of the randomized dimension reduction algorithm, with q=q^*, and of the deterministic dimension reduction algorithm, with N_k=N̅_k for k≥1, are within a factor of 2 of each other. Proposition <ref> below shows that if, after generating the first copy of U, we generate the next n(q_0-q_1) samples by only changing the first component of the U of the previous iteration, and the next n(q_1-q_2) samples by only changing the first two components of the U of the previous iteration, and so on, the resulting algorithm is asymptotically less efficient than standard Monte Carlo.

Assume that C(d-1)>0. Let q∈ A with q_d-1<1. If N_k=i for 1≤ i≤ d and integer k∈(n(1-q_i-1),n(1-q_i)], then n Var(f_n)→∞ as n goes to infinity.

§ COMPARISON WITH A CLASS OF MULTILEVEL ALGORITHMS

We compare our method to a class of MLMC algorithms, adapted from <cit.>, that efficiently estimate E(f(U)) under the assumption that f is approximated, in the L^2 sense, by functions of its first arguments. Under conditions described in <ref>, we prove that, up to a constant, the randomized dimension reduction algorithm is at least as efficient as this class of MLMC algorithms. It should be stressed, however, that there may exist other MLMC algorithms that estimate E(f(U)) more efficiently than the class of MLMC algorithms described below.

§.§ The MLMC algorithms description and analysis

Let L be a positive integer and let (m_l), 0≤ l≤ L, be a strictly increasing integral sequence, with m_0=0 and m_L=d. For 1≤ l≤ L, let ϕ_l be a square-integrable random variable equal to a deterministic measurable function of U_1,…,U_m_l, with ϕ_L=f(U). The ϕ_l's are chosen so that, as l increases, ϕ_l gets closer to f(U), in the L^2 sense. For instance, L can be proportional to ln(d), the m_l's can increase exponentially with l, and ϕ_l could equal f(U_1,…,U_m_l,x,…,x), with d-m_l copies of x, for some x∈ F.
For 1≤ l≤ L, let ϕ̂_l be the average of n_l independent copies of ϕ_l-ϕ_l-1 (with ϕ_0≜0), where n_l is a positive integer to be specified later. Assume that the estimators ϕ̂_1,…,ϕ̂_L are independent. As E(f(U))=∑^L_l=1E(ϕ_l-ϕ_l-1), ϕ̂=∑^L_l=1ϕ̂_l is an unbiased estimator of E(f(U)). Following the analysis in <cit.>, Var(ϕ̂)=∑^L_l=1V_l/n_l, where V_l≜Var(ϕ_l-ϕ_l-1) for 1≤ l≤ L. The expected time needed to simulate ϕ̂ is T_ML≜∑^L_l=1n_lt̂_l, where t̂_l is the expected time needed to simulate ϕ_l-ϕ_l-1. As the variance of the average of n i.i.d. square-integrable random variables is proportional to 1/n, for ϵ>0, we need ⌈Var(ϕ̂)ϵ^-2⌉ independent samples of ϕ̂ to achieve an estimator variance of at most ϵ^2. Thus the total expected time T^MLMC(ϵ) needed for the MLMC algorithm to estimate E(f(U)) with variance at most ϵ^2 satisfies the relation T^MLMC(ϵ)=Θ(T_ML+T_ML Var(ϕ̂)ϵ^-2). The first term in the RHS of (<ref>) accounts for the fact that ϕ̂ is simulated at least once. As shown in <cit.>, the work-normalized variance T_ML Var(ϕ̂) is minimized when the n_l's are proportional to √(V_l/t̂_l) (ignoring the integrality constraints on the n_l's), in which case T_ML Var(ϕ̂)=(∑^L_l=1√(V_lt̂_l))^2. In line with <cit.>, if t̂_l=O(2^l) and ||ϕ_l-ϕ_L||^2=O(2^βl), with β<-1, where the constants behind the O-notation do not depend on d, then T_ML Var(ϕ̂) is upper-bounded by a constant independent of d. This can be shown by observing that V_l≤||ϕ_l-ϕ_l-1||^2≤(||ϕ_l-ϕ_L||+||ϕ_l-1-ϕ_L||)^2. Theorem <ref> below shows that, under certain conditions, the randomized dimension reduction method is, up to a multiplicative constant, at least as efficient as the class of MLMC methods described above. Indeed, under the assumptions of Theorem <ref>, by (<ref>), T^tot(q,ϵ)=O(d+T_ML Var(ϕ̂)ϵ^-2). On the other hand, T_ML≥t̂_L≥ĉd since m_L=d. Thus (<ref>) implies that T^MLMC(ϵ)≥ c'(d+T_ML Var(ϕ̂)ϵ^-2), for some constant c'.

Assume that there are constants c and ĉ independent of d such that t_i≤ ci for 1≤ i≤ d, and t̂_l≥ĉm_l for 1≤ l≤ L, and that C(d-1)>0. Then R(q;t,ν^*)≤(32c/ĉ)T_ML Var(ϕ̂) if, for 0≤ i≤ d-1, q_i=√(C(i)/((i+1)C(0))).

§ NUMERICAL EXPERIMENTS

Our simulation experiments, using the examples in <ref>, were implemented in the C++ programming language. The randomized dimension reduction algorithm (RDR) was implemented as described in <ref>. The deterministic dimension reduction algorithm (DDR) was implemented similarly, with N_k=N̅_k. In both algorithms, n was chosen so that the expected total number of simulations of the U_i's in iterations 2 through n is approximately 10d. The actual total number of simulations of the U_i's, denoted by “Cost" in our computer experiments, is about 11d because it includes the d simulations of the first iteration. We have implemented the multilevel algorithm (MLMC) described in <ref>, with L=⌊log_2(d)⌋+1 and m_l=⌊2^l-Ld⌋ for 1≤ l≤ L, and ϕ_l=f(U_1,…,U_m_l,X_0,…,X_0), with d-m_l copies of X_0. The V_l's were estimated by Monte Carlo simulation with 1000 samples, and the n_l's were scaled up so that the actual total number of simulations of the U_i's is about 11d. We have also implemented a randomized quasi-Monte Carlo method (QMC) with a random shift <cit.>. Our implementation uses the C++ program available at http://web.maths.unsw.edu.au/~fkuo/sobol to generate d-dimensional Sobol sequences of length n=4096.
For practical reasons linked to computing time and storage cost, the QMC algorithm was tested for d up to 10^4. In Tables <ref> through <ref> and in the on-line supplement, the variable Std refers to the standard deviation of f_n for the RDR and DDR algorithms, to the standard deviation of ϕ̂ for the MLMC algorithm, and to the standard deviation of the quasi-Monte Carlo estimator for the QMC algorithm. The variable Std and a 90% confidence interval for E(f(U)) were estimated using 1000 independent runs of these algorithms. For the RDR algorithm, a 90% confidence interval for the variable Cost was reported as well. The variance reduction factor VRF is defined as VRF=d Var(f(U))/(Cost×Std^2). We estimated Var(f(U)) by using 10000 independent samples of U.

§.§ GARCH volatility model

Table <ref> shows the results of our simulations of the GARCH volatility model for estimating ℙ(X_d>z), with z=4.4×10^-5, X_0=10^-4, α=0.06, β=0.9, and w=1.76×10^-6. As expected, the variable Cost is about 11d for the RDR, DDR, and MLMC algorithms. For these algorithms, the variable Cost×Std^2 is roughly independent of d, and the variance reduction factors are roughly proportional to d. In contrast, for the QMC algorithm, the variable Cost×Std^2 is roughly proportional to d, and the variance reduction factors are roughly constant. The 90% confidence interval of the RDR algorithm running time has a negligible length in comparison to the running time. The RDR algorithm outperforms the MLMC algorithm by about a factor of 10, and the QMC algorithm by a factor ranging from 5 to 20. In all our numerical experiments, the DDR algorithm outperforms the RDR algorithm by a factor between 1 and 2.

§.§ G_t/D/1 queue

Assume that A_i has a Poisson distribution with time-varying rate λ_i=0.75+0.5cos(πi/50) for 1≤ i≤ d (recall that A_0=0). These parameters are taken from <cit.>. Table <ref> estimates E(X_d), and Table <ref> gives VRFs in the estimation of ℙ(X_d>z) for selected values of z. Once again, for the RDR, DDR, and MLMC algorithms, the variable Cost×Std^2 is roughly independent of d, and the variance reduction factors are roughly proportional to d. The VRFs of the RDR and DDR algorithms in Table <ref> are greater than or equal to the corresponding VRFs in Table <ref>, which confirms the resilience of these algorithms to discontinuities of g. In contrast, the VRFs of the MLMC algorithm in Table <ref> are lower than the corresponding VRFs in Table <ref>. The RDR algorithm outperforms the MLMC algorithm by a factor ranging from 1 to 2 in Table <ref>, and by a factor ranging from 2 to 17 in Table <ref>. Table <ref> estimates E(X_d) for shifted values of d. The results in Table <ref> are similar to those of Table <ref>, but the values of E(X_d) in Table <ref> are significantly smaller than those in Table <ref>. This can be explained by observing that λ_d is maximized (resp. minimized) at the values of d listed in Table <ref> (resp. Table <ref>).

§.§ M_t/GI/1 queue

Assume that λ_s=0.75+0.5cos(πs/50) for s≥0. These parameters are taken from <cit.>. Assume further that, for j≥1, the service time S_j of the j-th customer has a Pareto distribution with ℙ(S_j≥ z)=(1+z/α)^-3 for z≥0, for some constant α>0. A simple calculation shows that E(S_j)=α/2. In our simulations, we have set d=⌈θ⌉. Table <ref> gives our simulation results for estimating ℙ(W_θ>1) when α=2, and Table <ref> lists VRFs for estimating ℙ(W_θ>1) for selected values of α.
Here again, for the RDR, DDR, and MLMC algorithms, the variable Cost×Std^2 is roughly independent of d, and the VRFs are roughly proportional to d. The RDR algorithm outperforms the MLMC algorithm by a factor ranging from 1 to 10, depending on the value of α. In Table <ref>, the RDR, DDR, and MLMC algorithms become less efficient as α increases. This can be explained by noting that, as α increases, the length of the last busy cycle increases as well, which renders W_θ more dependent on the first Y_i's.

§ CONCLUSION

We have described a randomized dimension reduction algorithm that estimates E(f(U)) via Monte Carlo simulation, assuming that f does not depend equally on all its arguments. We formally prove that, under some conditions, in order to achieve an estimator variance ϵ^2, our algorithm requires O(d+ϵ^-2) computations, as opposed to O(dϵ^-2) under the standard Monte Carlo method. Our algorithm can be used to efficiently estimate the expected value of a function of the state of a Markov chain at time-step d, for a class of Markov chains driven by independent random variables. The numerical implementation of our algorithm uses a new geometric procedure of independent interest that solves in O(d) time a d-dimensional optimisation problem that was previously solved in O(d^3) time. We have argued intuitively that our method is resilient to discontinuities of f, and have described and analysed a deterministic version of our algorithm. Our numerical experiments confirm that our approach highly outperforms the standard Monte Carlo method for large values of d, and show its high resilience to discontinuities. Whether our approach can be combined with the quasi-Monte Carlo method to produce a provably efficient estimator is left for future work.

The resilience to discontinuities can be explained by the fact that the performance of our algorithm (resp. of multilevel algorithms) actually relies on the weak dependence of E(f(U)|U_i+1,…,U_d) (resp. of f(U)) on U_i+1,…,U_d, as i gets large, as shown formally in Sections <ref> and <ref>. Furthermore, while smoothing techniques are often needed to efficiently implement multilevel <cit.> and QMC algorithms (see <cit.> and references therein) in the presence of discontinuities, no continuity assumptions are made in the analysis of our algorithm, and we did not need to smooth f in our numerical experiments. This can be explained by the fact that the performance of our algorithm relies on properties of conditional expectations of f, which can be considered as smoothed versions of f. Note that it is not always possible to smooth f explicitly, especially if F is multi-dimensional.

§ ACKNOWLEDGMENTS

This research has been presented at the 9th NIPS Workshop on Optimization for Machine Learning, Barcelona, December 2016, the Stochastic Methods in Finance seminar at École des Ponts ParisTech, March 2018, and the 23rd International Symposium on Mathematical Programming, Bordeaux, July 2018. The author thanks Bernard Lapeyre and seminar and conference participants for helpful conversations. He is grateful to three anonymous referees, an anonymous associate editor, and Baris Ata (department editor) for insightful comments and suggestions. This work was achieved through the Laboratory of Excellence on Financial Regulation (Labex ReFi) under the reference ANR-10-LABX-0095. It benefitted from French government support managed by the National Research Agency (ANR).

§ PROOF OF PROPOSITION <REF>

For 0≤ i≤ d, let f^(i)=E(f(U)|U_i+1,…,U_d). Thus, C(i)=Var(f^(i)).
By the tower law, for 0≤ i≤ d-1, E(f^(i)|U_i+2,…,U_d)=f^(i+1), and so C(i+1)=Var(E(f^(i)|U_i+2,…,U_d)). As the variance decreases by taking the conditional expectation, it follows that C(i+1)≤ C(i), as desired. We now prove (<ref>). Let W=(U'_1,…,U'_i,U_i+1,…,U_d). Since U and W are conditionally independent given U_i+1,…,U_d, and E(f(W)|U_i+1,…,U_d)=f^(i), E(f(U)f(W)|U_i+1,…,U_d)=(f^(i))^2. Hence, by the tower law, E(f(U)f(W))=E((f^(i))^2). On the other hand, using the tower law once again, E(f(U))=E(f(W))=E(f^(i)), and so the RHS of (<ref>) is equal to Var(f^(i)), as required.

§ PROOF OF THEOREM <REF>

We first prove Lemma <ref> below, which follows by classical calculations (see e.g. <cit.>).

Let (Z_k), k≥1, be a homogeneous stationary Markov chain in ℝ^d, and let g be a real-valued Borel-measurable function on ℝ^d such that g(Z_1) is square-integrable and a_j=Cov(g(Z_1),g(Z_1+j)) is non-negative for j≥0. Assume that ∑^∞_j=1a_j is finite. Then n^-1Var(∑^n_m=1g(Z_m))≤ a_0+2∑^∞_j=1a_j. Furthermore, the LHS of (<ref>) converges to its RHS as n goes to infinity.

Proof. Since (Z_k), k≥1, is homogeneous and stationary, Cov(g(Z_m),g(Z_m+j))=a_j for m≥1 and j≥0. Thus, Var(∑^n_m=1g(Z_m))=∑^n_m=1Var(g(Z_m))+2∑_1≤ m<m+j≤ nCov(g(Z_m),g(Z_m+j))=na_0+2∑^n_j=1(n-j)a_j. Hence n^-1Var(∑^n_m=1g(Z_m))=a_0+2∑^n_j=1((n-j)/n)a_j, which implies (<ref>). Using Lebesgue's dominated convergence theorem concludes the proof.

We now prove Theorem <ref>. Since the C(i)'s and the variance of f_n remain unchanged if we add a constant to f, we can assume without loss of generality that E(f(U))=0. For 0≤ i≤ d-1, set p_i=1-q_i. Let m<k be two integers in [1,n]. For 0≤ i≤ d-1, ℙ(max_m≤ j<kN_j≤ i)=p_i^k-m, and so, for 1≤ i≤ d-1, ℙ(max_m≤ j<kN_j=i)=p_i^k-m-p_i-1^k-m. For i∈[1,d], conditional on the event max_m≤ j<kN_j=i, the first i components of V^(m) and V^(k) are independent, and the last d-i components of V^(m) and V^(k) are the same. This is because, if N_j=i, with m≤ j<k, the vector V^(m) and the first i components of V^(j+1) are independent. Thus, by Proposition <ref>, E(f(V^(m))f(V^(k))|max_m≤ j<kN_j=i)=C(i). As C(d)=0, it follows from Bayes' formula that E(f(V^(m))f(V^(k)))=∑_i=1^d-1ℙ(max_m≤ j<kN_j=i)C(i)=∑^d-1_i=1(p_i^k-m-p_i-1^k-m)C(i). Let a_j=Cov(f(V^(1)),f(V^(1+j))). Thus, for j>0, a_j=∑^d-1_i=1(p_i^j-p_i-1^j)C(i) is non-negative, and ∑_j=1^∞a_j=∑^d-1_i=1(p_i/(1-p_i)-p_i-1/(1-p_i-1))C(i)=∑^d-1_i=1(1/(1-p_i)-1/(1-p_i-1))C(i)=-C(1)+∑^d-1_i=1(C(i)-C(i+1))/q_i is finite. Since a_0=C(0), it follows that a_0+2∑^∞_j=1a_j=C(0)-2C(1)+2∑^d-1_i=1(C(i)-C(i+1))/q_i. We conclude the proof using Lemma <ref>.

§ PROOF OF PROPOSITION <REF>

Let U'_1,…,U'_i be random variables satisfying the conditions of Proposition <ref>, and let W=(U'_1,…,U'_i,U_i+1,…,U_d). Since (U_1,…,U_i) and W are independent, Cov(f_i(U_1,…,U_i),f(W))=0. Similarly, Cov(f_i(U_1,…,U_i),f_i(U'_1,…,U'_i))=Cov(f(U),f_i(U'_1,…,U'_i))=0. Thus, by Proposition <ref> and the bilinearity of the covariance, C(i)=Cov(f(U)-f_i(U_1,…,U_i),f(W)-f_i(U'_1,…,U'_i))≤Std(f(U)-f_i(U_1,…,U_i))Std(f(W)-f_i(U'_1,…,U'_i))=Var(f(U)-f_i(U_1,…,U_i)). The last equation follows by observing that f(W)-f_i(U'_1,…,U'_i) d= f(U)-f_i(U_1,…,U_i).

§ PROOFS OF PROPOSITIONS <REF> AND <REF>

We first prove the following.

Let ν=(ν_0,…,ν_d) be an element of ℝ^d×{0}. Assume that ν_0,…,ν_d-1 are positive, and that the sequence (ν_i/t_i+1), 0≤ i≤ d-1, is decreasing. For 0≤ i≤ d-1, set q_i=√((t_1ν_i)/(ν_0t_i+1)). Then R(q;t,ν)≤4(∑^d-1_i=0√(ν_i)(√(t_i+1)-√(t_i)))^2.

Proof.
Applying the inequality x-y≤2√(x)(√(x)-√(y)), which holds for x≥0 and y≥0, to x=ν_i and y=ν_i+1 yields∑^d-1_i=0ν_i-ν_i+1/q_i ≤2√(ν_0/t_1)∑_i=0^d-1(√(ν_i)-√(ν_i+1))√(t_i+1)= 2√(ν_0/t_1)∑_i=0^d-1√(ν_i)(√(t_i+1)-√(t_i)). Similarly, since t_i+1-t_i≤2√(t_i+1)(√(t_i+1)-√(t_i)), ∑^d-1_i=0q_i(t_i+1-t_i)≤2√(t_1/ν_0)∑^d-1_i=0√(ν_i)(√(t_i+1)-√(t_i)). Taking the product completes the proof. We now prove Proposition <ref>. Sett'=(t'_0,…,t'_d), with t'_i=ci, 0≤ i≤ d, and let ν=(ν_0,…,ν_d-1,0), with ν_i=c' (i+1)^γ, 0≤ i ≤ d-1.Thus R(q;t,ν^*)≤ R(q;t',2ν) sincet≤ t' and ν^*≤ 2ν. By Proposition <ref>,R(q;t',2ν)≤8cc'(∑^d-1_i=0(i+1)^γ/2(√(i+1)-√(i)))^2.As √(i+1)-√(i)≤(i+1)^-1/2 for i≥0,it follows that R(q;t,ν^*)≤8 cc' (∑_i=1^di^(γ-1)/2)^2.The inequality ∑_i=1^di^(γ-1)/2≤1+∫^d_1x^(γ-1)/2 dx implies that∑^d-1_i=0i^(γ-1)/2≤ 1+2/(γ+1),γ<-1,1+ln(d),γ=-1,2d^(γ+1)/2/(γ+1), -1<γ<0.We conclude that there is a constant c_1 such that (<ref>) holds. This concludes the proof of Proposition <ref>.Corollary <ref> then implies the existence of a constant c_2 such that (<ref>) holds. We now prove Proposition <ref>.By applying Proposition <ref> toν=(2C(0),…,2C(d-1),0), it follows thatR(q;t,ν) is upper-bounded by the RHS of (<ref>). Since R(q;t,ν^*)≤ R(q;t,ν), this implies (<ref>). § PROOF OF PROPOSITION <REF>Since R(q;t,ν) is increasing with respect to ν,R(q;t,ν^*) ≥R(q;t,C(0),…,C(d))= (∑^d-1_i=0C(i)-C(i+1)/q_i)(∑^d-1_i=0q_i(t_i+1-t_i)) .However, as∑^d-1_j=0q_j (t_j+1-t_j)≥ q_it_i+1 for 0≤ i≤ d-1, and since (C(i)) is a decreasing sequence,(∑^d-1_i=0C(i)-C(i+1)/q_i)(∑^d-1_j=0q_j(t_j+1-t_j)) ≥∑^d-1_i=0(C(i)-C(i+1))t_i+1.This implies (<ref>) since ∑^d-1_i=0(C(i)-C(i+1))t_i+1 =∑^d-1_i=0C(i)(t_i+1-t_i).Assume now that q_i=t_1/t_i+1 for 0≤ i≤ d-1. Since R(q;t,ν) is increasing with respect to ν, R(q;t,ν^*) ≤R(q;t,2C(0),…,2C(d))= 2(∑^d-1_i=0C(i)-C(i+1)/q_i)(∑^d-1_i=0q_i(t_i+1-t_i))= 2(∑^d-1_i=0(C(i)-C(i+1))t_i+1)(∑^d-1_i=0t_i+1-t_i/t_i+1) .Furthermore, as (t_i+1-t_i)/t_i+1≤ln (t_i+1/t_i) for 1≤ i≤ d-1,∑^d-1_i=0t_i+1-t_i/t_i+1≤1+ln (t_d/t_1).Using (<ref>) once again yields (<ref>). § RELATION WITH THE ANOVA DECOMPOSITION AND THE TRUNCATION DIMENSION This section assumes that f is a square-integrable function on [0,1]^d, and that each U_i isuniformly distributed on [0,1], with (f(U))>0. Consider a decomposition of f in the following form: f=∑_Y⊆{1,…,d}f_Y,where f_Y is a measurable function on [0,1]^d andf_Y(u) depends on u only through(u_j)_j∈ Y,for u=(u_1,…,u_d)∈[0,1]^d. For instance, f_∅ is a constant, f_{j}(u) is a function of u_j, and f_{j,k}(u) is a function of (u_j,u_k). The relation (<ref>) is called ANOVA representation of f if, for Y⊆{1,…,d},any vector u∈[0,1]^d, and anyj∈ Y,∫^1_0 f_Y(u_1,…,u_j-1,x,u_j+1,…,u_d) dx=0.It can be shown <cit.> that there is a unique ANOVA representation of f, thatthe f_Y's are square-integrable, and that (f(U))=∑_Y⊆{1,…,d}σ_Y^2,where σ_Y is the standard deviation of f_Y(U). Furthermore,for 0≤ i≤ d-1,E(f(U)|U_i+1,…,U_d)=∑_Y⊆{i+1,…,d}f_Y(U),and the covariance between f_Y(U) and f_Y'(U) is null if Y≠ Y'. Hence, for 0≤ i≤ d-1, C(i)=∑_Y⊆{i+1,…,d}σ_Y^2. 
It follows that C(i) is equal to the variance corresponding to the subset {i+1,…,d}, as defined in <cit.>.Thus, C(i)/C(0) is equal to the global sensitivity index S_{i+1,…,d} for the subset {i+1,…,d}(see <cit.>).Proposition <ref> belowrelates the performance of our algorithm to the truncation dimension d_t of f, defined in <cit.> as d_t≜∑_Y⊆{1,…,d},Y≠∅max(Y)σ_Y^2/(f(U)).Under the conditions in Proposition <ref>, ift_d=Θ(d) and d_t is upper bounded by a constant independent of d, the asymptotic (as n goes to infinity) work-normalized varianceof our algorithm is O(ln(d)(f(U))), whereas thework-normalized varianceof the standard Monte Carlo algorithm isΘ(d(f(U))).Assume that there is a real number c such that t_i≤ ci for 1≤ i≤ d, and that q_i=1/(i+1) for 0≤ i≤ d-1. Then R(q;t,ν^*)≤2c(1+ln(d))d_t(f(U)). Proof. For0≤ i≤ d-1, we can rewrite (<ref>) as C(i)=∑_Y⊆{1,…,d},Y≠∅1{i<min(Y)}σ_Y^2. Hence,∑ ^d-1_i=0C(i)= ∑_Y⊆{1,…,d},Y≠∅∑^d-1_i=01{i<min(Y)}σ_Y^2= ∑_Y⊆{1,…,d},Y≠∅min(Y)σ_Y^2≤d_t(f(U)).Let t'_i=ci for 0≤ i≤ d, and t'=(t'_0,…,t'_d). The proof of (<ref>) shows that this relation still holds if t is replaced by any increasing sequence of length d+1 starting at 0. Replacing t by t' implies that R(q;t',ν^*)≤ 2c(1+ln(d))∑ ^d-1_i=0C(i)≤2c(1+ln(d))d_t(f(U)).Moreover, R(q;t,ν^*)≤ R(q;t',ν^*)since t≤ t'. This completes the proof. § PROOF OF THEOREM <REF>We use thefollowing proposition, whose proof follows immediately from (<ref>).If ν∈ℝ^d×{0} andν'∈ℝ^d×{0} are such that ν'≤ν,andq∈ A, then R(q;t,ν')≤ R(q;t,ν),with equality ifν_0=ν'_0 and, for 1≤ i≤ d-1,(ν_i-ν'_i)(q_i-1- q_i)=0. By definition of the lower hull,(θ_i), 0≤ i≤ d-1, is an increasing sequence. Furthermore, θ_d-1<0 since it is equal to the slope of a segment joining (t_i,ν_i) to(t_d,0), for some i∈[0,d-1]. Hence θ_i<0 for 0≤ i≤ d-1, and so q^* is well defined and belongs to A. Furthermore, (ν'_i), 0≤ i≤ d, is a decreasing sequence, and ν'_d=ν_d=0. On the other hand, by (<ref>),R(q^*;t,ν')=(∑_i=0^d-1√((ν'_i-ν'_i+1)(t_i+1-t_i)) )^2.Since, by the Cauchy-Schwartz inequality, for all non-negative sequences (x_i) and (y_i), (∑^d-1_i=0√(x_iy_i))^2≤(∑^d-1_i=0x_i)(∑^d-1_i=0y_i), it follows that R(q^*;t,ν')≤ R(q;t,ν') for q∈ A. Furthermore, by Proposition <ref>, R(q;t,ν')≤ R(q;t,ν), and soR(q^*;t,ν')≤ R(q;t,ν). On the other hand,(ν_i-ν'_i)(q^*_i-1- q^*_i)=0 for 1≤ i≤ d-1. This is because, ifν_i≠ν'_i, then the point (t_i,ν_i) does not belong to the lower hull of the set{(t_j,ν_j):0≤ i≤ d}. Hence(t_i,ν'_i) belongs to the segment ( (t_i-1,ν'_i-1),(t_i+1,ν'_i+1), which implies that θ_i-1= θ_i and q^*_i-1= q^*_i. Thus, as ν_0=ν'_0, Proposition <ref> shows thatR(q^*;t,ν)= R(q^*;t,ν').This implies (<ref>) and thatR(q^*;t,ν)≤ R(q;t,ν) for q∈ A, as desired. § PROOF OF PROPOSITION <REF>The random vectors W^(1)=(U_1,… ,U_i,U'_i+1,…,U'_d), W^(2)=(U'_1,… ,U'_i,U_i+1,…,U_d), and W^(3)=(U'_1,… ,U'_i,U^”_i+1,…,U^”_d) have the same distribution as U. HenceE((f(U)-f(W^(1)))(f(W^(2))-f(W^(3)))=(f(U)-f(W^(1)),f(W^(2))-f(W^(3))). As W^(1) and W^(2) are independent, (f(W^(1)),f(W^(2))=0.Similarly,(f(W^(1)),f(W^(3))=(f(U),f(W^(3))=0.By bilinearity of the covariance, it follows thatE((f(U)-f(W^(1)))(f(W^(2))-f(W^(3)))=(f(U),f(W^(2)).We conclude the proof using Proposition <ref>. § PROOF OF PROPOSITION <REF>We first prove the following Markov property.Let i∈[0,d]. If H is a bounded random variable which is measurable with respect to the σ-algebra generated by X_i,Y_i,…,Y_d-1,thenE(H|Y_0,…,Y_i-1)=E(H|X_i). Proof. 
Let ℋ be the vector space of bounded real-valued random variables H satisfying (<ref>). Clearly, the constant random variables belong toℋ. Let (H_m), m≥0, be an increasing sequence ofpositive elements ofℋ such that H=sup_m≥0H_m is bounded. For m≥0,E(H_m|Y_0,…,Y_i-1)=E(H_m|X_i).By the conditionalLebesgue dominated convergence theorem <cit.>, the LHS (resp. RHS) of (<ref>) converges to E(H|Y_0,…,Y_i-1)(resp. E(H|X_i)) as m goes to infinity, and so H∈ℋ. Let 𝒢(resp.𝒢')be the set ofbounded real-valuedrandom variables which are measurable with respect to the σ-algebra generated by X_i (resp. (Y_i,…,Y_d-1)),and let 𝒞 be the set of random variables of the form GG', with G∈𝒢 and G'∈𝒢'. ForG∈𝒢 and G'∈𝒢',E(GG'|Y_0,…,Y_i-1)= GE(G'|Y_0,…,Y_i-1) = GE(G'). The first equation holds since X_i is a measurable function of Y_0,…,Y_i-1, which implies that Gis measurable with respect to the σ-algebra generated by Y_0,…,Y_i-1. The second equation follows from the independence of(Y_i,…,Y_d-1) and (Y_0,…,Y_i-1). Similarly, since(Y_i,…,Y_d-1) andX_i are independent,E(GG'|X_i)=GE(G'|X_i)=GE(G').Thus, GG'∈ℋ,and so 𝒞⊆ℋ. As 𝒞 is closed under pointwise multiplication, by the monotone class theorem <cit.>, ℋ contains all bounded random variables which are measurable with respect to the σ-algebra generated by the elements of 𝒞. Since Y_i,…,Y_d-1 and X_i belong to 𝒞, it follows thatℋ contains all bounded random variables which are measurable with respect to the σ-algebra generated by X_i,Y_i,…,Y_d-1. This completes the proof. As g(X_d) is a measurable function of (X_i,Y_i,…,Y_d-1), for any integer m, the random variable min(m,g^+(X_d)) belongs to ℋ. By the conditionalLebesgue dominated convergence theorem, taking the limit as m goes to infinity implies thatE(g^+(X_d)|Y_0,…,Y_i-1)=E(g^+(X_d)|X_i).A similarly equation holds for g^-(X_d).Thus,E(g(X_d)|Y_0,…,Y_i-1)=E(g(X_d)|X_i).Replacing i with d-i implies thatE(g(X_d)|U_i+1,…,U_d)=E(g(X_d)|X_d-i). Taking the variance of both sides concludes the proof. § PROOF OF PROPOSITION <REF>For 0≤ i≤ d-1, set Z_i=α Y_i^2+β, so that X_i+1= w+Z_iX_i.It can be shown by induction on i thatX_d= w+ w Z_d-1Z'+Z” for 2≤ i≤ d,whereZ'=∑^i-1_j=1∏^j_k=2Z_d-kandZ”=Z_d-i⋯ Z_d-1X_d-i. By convention,the product over an empty set is equal to 1. Since X_i,0 is equal to the state of the Markov chain at step d if X_d-i=0, it follows that X_i,0= w+ w Z_d-1Z'. Note that E(Z”)=(α+β)^iE(X_d-i) as E(Y_i^2)=1. By <cit.>, for 0≤ m≤ d,E(X_m)≤max(X_0, w /1-α-β),and soE(Z”)≤max(X_0, w /1-α-β)(α+β)^i.Since the density of Y_d-1 is upper-bounded by 1/2, for γ≤γ',ℙ(γ≤ Z_d-1≤γ')= 2 ℙ(√((γ-β)^+/α)≤ Y_d-1≤√((γ'-β)^+/α))≤ √((γ'-β)^+/α) - √((γ-β)^+/α)≤ √(γ'-γ/α). The last equation follows from the inequality √(y')-√(y)≤√(y'-y), which holds for 0≤ y≤ y'. On the other hand, as X_i,0≤ X_d,g(X_d)-g(X_i,0) =1 X_i,0≤ z<X_d,0Thus, by (<ref>), C(i) ≤ ||g(X_d)-g(X_i,0)||^2= ℙ(X_i,0≤ z< X_d). But ℙ(X_i,0≤ z< X_d)= ℙ (z-Z”- w/ w Z'<Z_d-1≤z- w/ w Z')=E(ℙ (z-Z”- w/ w Z'<Z_d-1≤z- w/ w Z'|Z',Z”))≤ E(√(Z”/α w Z'))≤ κ(α+β)^i/2 ,where κ is a constantthat depends only on w, X_0, α and β. The third equation follows from (<ref>) and the independence of Z_d-1 and (Z',Z”). The last equation follows from Jensen's inequality and the inequality Z'≥1. Hence C(i)≤κ(α+β)^i/2 for 2≤ i≤ d. This inequality also holds for 0≤ i≤1 by replacing κ with max(κ,1/(α+β)).If Y and Y' are independent and X is a measurable function ofY, and X' is a measurable function of (X,Y'), then E(X'|Y)=E(X'|X). 
§ PROOF OF PROPOSITION <REF>By classical calculations (see, e.g., <cit.>), it can be shown by induction on d that X_d=max _0≤ j≤ dS_j, whereS_j=∑^d-1_k=d-jY_k for 1≤j≤ d, with S_0=0. For0≤ i≤ d-1, since X_i,0is the number of customers in the queue at time-step d if there are no costumers in the queue at time-step d-i, it can be shown by induction on d that X_i,0=max _0≤ j≤ iS_j.HenceX_d=max(X_i,0,max _i+1≤ j≤ dS_j).Thus X_d-X_i,0≤max _i+1≤ j≤ dS_j^+, and soC(i) ≤ ||X_d-X_i,0||^2≤ ∑ _j=i+1^d||S_j^+||^2.On the other hand, since the Y_i's are independent, E(e^γ S_j)≤κ^j for 0≤ j≤ d. Furthermore, as (x^+)^2≤2e^x for x∈ℝ, γ^2(S_j^+)^2≤ 2e^γ S_j. Taking expectations implies that γ^2||S_j^+||^2≤ 2κ^j, and so C(i)≤γ'κ^i, where γ'=2γ^-2/(1-κ). § PROOF OF PROPOSITION <REF>It can be shown by induction on the number of arrivals in (θ',θ](see also <cit.>)that, if the system is empty at time θ'∈[0,θ], W_θ=sup_s∈[θ',θ]{Z_θ(s)-(θ -s)}. Hence X_d=sup_s∈[0,θ]{Z_θ(s)-(θ -s)}.Also, for 0≤ i≤ d, since X_i,0 is the residual work at timeθ if the residual work at time (d-i)θ/d is 0,X_i,0=sup_s∈[(d-i)θ/d,θ]{Z_θ(s)-(θ -s)}.Thus, by a calculation similar to the proof of Proposition <ref>, X_d-X_i,0≤sup_0≤ s≤(d-i) θ/d{Z_θ(s)-(θ -s)}^+. For non-negative integer j, setS_j=Z_θ((θ-j/λ ^*)^+)-j-1/λ ^*. Fixs∈[0,(d-i) θ/d], and let j=⌈(θ-s)λ^*⌉. As θ-j/λ ^*≤ s and Z_θ is a decreasing function, Z_θ(s)≤ Z_θ((θ-j/λ ^*)^+).Since θ-s≥(j-1)/λ ^*, this implies that Z_θ(s)-(θ -s)≤ S_j. But2j≥ i asj≥ iλ^*θ/d and λ^*θ≥ d/2. HenceX_d-X_i,0≤sup_j≥ i/2S_j^+.By calculations similar to the proof of Proposition <ref>, itfollows that C(i)≤∑ _j≥ i/2||S_j^+||^2. On the other hand, (<ref>) implies that E(e^γ (S_j+1-S_j))≤κ for j≥0, and soE(e^γ S_j)≤ e^γ /λ^*κ^j. We conclude the proof in a way similar to the proof of Proposition <ref>.Supplementary Material§ PROOF OF THEOREM <REF> If 𝒞 is a collection of random variables, denote by σ[𝒞] theσ-algebra generated by the elements of 𝒞. Let (ξ_k), k≥1, be a stationary sequence of real-valued random variables. For k≥1, define the σ-algebras ℱ_k=σ[ξ_n:1≤ n≤ k] and ℱ'_k=σ[ξ_n: n≥ k].Following <cit.>, for n≥1, let φ_n=sup_k≥1sup{ℙ (B'|B)-ℙ (B'):B∈ℱ_k, ℙ(B)>0, B'∈ℱ'_k+n}.The sequence (φ_n) is used to study the mixing proprieties of the sequence(ξ_k). Theorem <ref> below establishes a functional central limit theorem on the partial sums of (ξ_k) under certain conditions on (φ_n). Theorem <ref>follows immediately fromTheorem 19.2 and the discussion on p. 203 of <cit.>. Let D_∞ denote the set of real-valued functions on the interval [0,∞)that are right-continuous and have left-hand limits, endowed with the Skorokhod topology (see <cit.>). Assume that(ξ_k), k≥1, is stationary, thatξ_1 is square-integrable, and that ∑_n=0^∞√(φ_n)<∞. Then the seriesσ̅^2=(ξ_1)+2∑^∞_k=2(ξ_1,ξ_k)converges absolutely. If σ̅>0and S_n=∑^n_k=1(ξ_k-E(ξ_k)) then, asn→∞,S_⌊ ns⌋/√(n)⇒σ̅B_s in the sense of D_∞, where B_s is a standard Brownian motion.We show that the conditions of Theorem <ref> hold when ξ_k=f(V^(k)) for k≥1. As (V^(k)), k≥1, is a stationary Markov chain, the sequence (f(V^(k))), k≥1, is stationary. We first prove the following lemma, which follows intuitively from the fact that if all components ofthe copy of Uare redrawn at a certain iteration, future copies of U are independent of past ones.For positive integers n and k, if B∈ℱ_k and B'∈ℱ'_k+n, then B and B'∩ I are independent, whereI=I_k,n={ω∈Ω:∃ j∈[k,k+n-1],N_j(ω)=d}. Proof.Let 𝒢_k=σ[N_j,U^(j+1):j≥ k]. Fix l≥ k+n. 
By construction, if ω∈ I, there is j∈[k,k+n-1] is such thatV^(j+1)(ω)=U^(j+1)(ω).Hence,if ω∈ I, the vectorV^(l) does not depend V^(k). More precisely, it can be shown by induction that there is a measurable function H:F^d(l-k)×ℝ^(l-k)→𝔽^d such that, forω∈ I,V^(l)(ω)=H(U^(k+1)(ω),…,U^(l)(ω),N_k(ω),…,N_l-1(ω)).For ω∈Ω, let G(ω)=f(H(U^(k+1)(ω),…,U^(l)(ω),N_k(ω),…,N_l-1(ω))).Thus, f(V^(l)(ω))=G(ω) forω∈ I. Asthe maps f and H are measurable, and since the random variables U^(k+1),…,U^(l) and N_k,…,N_l-1 are measurable with respectto 𝒢_k,the random variable G is measurable with respectto 𝒢_k as well.Forz∈ℝ,let B_z={ω∈Ω:f(V^(l)(ω))<z}. SinceI∈𝒢_k and B_z∩ I={ω∈Ω:G(ω)<z}∩ I, we have B_z∩ I∈𝒢_k. Thus B_z∈𝒢'_k for any real number z, where𝒢'_k={B∈ℱ:B∩ I∈𝒢_k}. It is easy to check that𝒢'_k is a σ-algebra.We conclude thatf(V^(l)) is measurable with respect to 𝒢'_k, for any l≥ k+n, and so ℱ'_k+n⊆𝒢'_k. Consequently, if B∈ℱ_k and B'∈ℱ'_k+n, thenB'∈𝒢'_k, and so B'∩ I∈𝒢_k. Sinceℱ_k and 𝒢_k are independent, so are B and B'∩ I. We now prove the theorem. For positive integers n and k, let B∈ℱ_k with ℙ(B)>0 and B'∈ℱ'_k+n, and define I as in Lemma <ref>. Thus,ℙ(B'∩ I|B)=ℙ(B'∩ I). Since ℙ( B')≤ℙ(B'∩ I)+ℙ( I^c), it follows that ℙ(B')≤ℙ( B'|B)+ℙ( I^c). Replacing B' with its complement and noting that ℙ( I^c)=(1-q_d-1)^n, we conclude that|ℙ( B'|B)-ℙ(B')|≤ (1-q_d-1)^n. Henceφ_n≤(1-q_d-1)^n, and so (<ref>) holds. Thus the conclusions of Theorem <ref> hold for the sequence (f(V^(k))), k≥1. It follows from (<ref>) and (<ref>) that σ̅=σ. By (<ref>),asn→∞,⌊ ns⌋(f_⌊ ns⌋-E(f(U)))/√(n)⇒σ̅B_sin D_∞. Setting s=1, which can be justified by applying thecontinuous mapping theorem <cit.>with the projection map <cit.>, implies (<ref>).Let τ_k denote the random amount time to generate V^(k) and calculate f(V^(k)). Thus E(τ_k)=T. By the strong law of large numbers, with probability 1,1/n∑^n_k=1τ_k→ Tas n goes to infinity.By (<ref>), (<ref>) and <cit.>, it follows that√(c)(f_Ñ(c)-E(f(U))) ⇒ N(0,Tσ^2 ),as c→∞. Since Tσ^2 = R(q;t, ν^*), this implies (<ref>).§ PROOF OF THEOREM <REF> AND OF PROPOSITION <REF> For convenience, set N_0=N̅_0=d. For 0≤ i≤ d-1, let 𝒮_i=𝒮_i(n)={k∈[0,n-1]:N_k>i}.Letu_i(0)=0,u_i(1),…,u_i(|𝒮_i|-1) be the elements of 𝒮_isorted in increasing order.Setu_i(|𝒮_i|)=n and let Q(i)=Q(i,n)=∑^|𝒮_i|_l=1(u_i(l)-u_i(l-1))^2. Note that 𝒮_0={0,…,n-1} and Q(0)=n. The following lemma gives an alternative characterization of Q(i).For 0≤ i≤ d-1, ∑_1≤ m<k≤ n1{max_m≤ j<kN_j≤ i}=1/2(Q(i)-n). Proof. Given integers m and k with 1≤ m<k≤ n, the condition max_m≤ j<kN_j≤ i holds if and only if [m,k)∩𝒮_i=∅. It is thus equivalent to the existence of an integer l∈[1,|𝒮_i|] such that u_i(l-1)<m<k≤ u_i(l). Hence ∑_1≤ m<k≤ n1{max_m≤ j<kN_j≤ i}=1/2∑^|𝒮_i|_l=1(u_i(l)-u_i(l-1))(u_i(l)-u_i(l-1)-1). We conclude the proof by noting that ∑^|𝒮_i|_l=1(u_i(l)-u_i(l-1))=n.Lemma <ref> below relates the variance of f_n to the Q(i)'s and C(i)'s.For n≥1,(f_n)=n^-2∑_i=0^d-1Q(i)(C(i)-C(i+1)). Proof. As in theproof of Theorem <ref>, weassume without loss of generality that E(f(U))=0. Letm<k be two integers in [1,n]. 
By arguments similar to those that lead to (<ref>),E(f(V^(m))f(V^(k)))=C(max_m≤ j<kN_j).Since, for 0≤ l≤ d,C(l) = ∑_i=l^d-1(C(i)-C(i+1))= ∑_i=0^d-11{l≤ i}(C(i)-C(i+1)), it follows thatE(f(V^(m))f(V^(k)))=∑_i=0^d-11{max_m≤ j<kN_j≤ i}(C(i)-C(i+1)).As (f_n)=n^-2(∑_m=1^nE((f(V^(m)))^2)+2∑_1≤ m<k≤ nE(f(V^(m))f(V^(k)))),and E((f(V^(m)))^2)=C(0) for 1≤ m≤ n, we conclude that(f_n)=n^-2(nC(0)+2∑_i=0^d-1∑_1≤ m<k≤ n1{max_m≤ j<kN_j≤ i}(C(i)-C(i+1))).Thus, by Lemma <ref>,(f_n)=n^-2(nC(0)+∑_i=0^d-1(Q(i)-n)(C(i)-C(i+1))), which, after some simplifications, completes the proof. We now prove Theorem <ref>. By the Cauchy-Schwartz inequality, for 0≤ i≤ d-1 andany sequence (x_l), 1≤ l≤𝒮_i,(∑^|𝒮_i|_l=1x_l)^2≤|𝒮_i|(∑^|𝒮_i|_l=1x_l^2).Replacing x_l with u_i(l)-u_i(l-1) shows that n^2≤|𝒮_i|Q(i). By Lemma <ref>, it follows that(f_n)≥∑_i=0^d-1C(i)-C(i+1)/|𝒮_i|. For 0≤ k≤ n-1, the expected running time of iteration k+1 is t_i if N_k=i. Furthermore, for i∈[1,d], there are |𝒮_i-1|-|𝒮_i|integers k in [0,n-1] with N_k=i, where 𝒮_d≜∅, and soT_n=∑^d_i=1(|𝒮_i-1|-|𝒮_i|)t_i=∑^d-1_i=0|𝒮_i|(t_i+1-t_i). Thus,T_n(f_n)≥ R(q;t,C), where q_i=|𝒮_i|/n for 0≤ i≤ d-1. This implies (<ref>). Assume now that N_k=N̅_k for k≥1. Then, by (<ref>), for 0≤ i≤ d-1, 𝒮_i={k∈[0,n-1]:kis a multiple of μ_i}. Thus|𝒮_i|=1+⌊(n-1)q̅_i⌋ which, by (<ref>), implies (<ref>). Sinceu_i(l)=lμ_i for 0≤ l≤ |𝒮_i|-1,Q(i)= μ_i∑^|𝒮_i|-1_l=1(u_i(l)-u_i(l-1)) +(n-max(𝒮_i))^2= μ_imax𝒮_i+(n-max(𝒮_i))^2. As 0≤ n-max(𝒮_i)≤μ_i, it follows that μ_i( n-μ_i)≤ Q(i)≤μ_in. By Lemma <ref>, this implies (<ref>) and shows that, as n goes to infinity, the LHS of (<ref>) converges to its RHS.We now prove Proposition <ref>. Assume that (N_k) satisfies the conditions in the proposition, and that nq_d-1>1.Then 𝒮_d-1 consists of theintegers belonging to {0}∪(n-nq_d-1,n), and so Q(d-1)≥ n^2(1-q_d-1)^2. By Lemma <ref>, it follows that(f_n)≥ (1-q_d-1)^2C(d-1), which completes the proof. § PROOF OF THEOREM <REF>We first prove the following lemma. Let (ν_i), 0≤ i≤ d, be a decreasing sequence such that ν_m_l≤(ϕ_L-ϕ_l) for 0≤ l≤ L, with ν_d=0. Then ∑_i=0^d-1√(ν_i/i+1)≤2∑^L_l=1√(m_lV_l). Proof. Since the sequence (ν_i) is decreasing and (i+1)^-1/2≤2(√(i+1)-√(i)) for i≥0,∑_i=j^k-1√(ν_i/i+1)≤2(√(k)-√(j))√(ν_j),for 0≤ j≤ k≤ d. Hence,∑_i=0^d-1√(ν_i/i+1) = ∑^L-1_l=0∑_i=m_l^m_l+1-1√(ν_i/i+1)≤ 2∑^L-1_l=0(√(m_l+1)-√(m_l))√(ν_m_l)≤ 2∑^L-1_l=0(√(m_l+1)-√(m_l))Std(ϕ_L-ϕ_l)= 2∑^L_l=1√(m_l)(Std(ϕ_L-ϕ_l-1)-Std(ϕ_L-ϕ_l)) ≤ 2∑^L_l=1√(m_lV_l),where the last equation follows by sub-linearity of the standard deviation.We now prove Theorem <ref>. Since ϕ_l is square-integrable and is a measurable function ofU_1,…,U_m_l, byProposition <ref>, C(m_l)≤(ϕ_L-ϕ_l) for 0≤ l≤ L. By Proposition <ref>, the sequence (C(i)), 0≤ i≤ d, is decreasing, and so it satisfies the conditions of Lemma <ref>. Thus,∑_i=0^d-1√(C(i)/i+1)≤2∑^L_l=1√(m_lV_l). Furthermore, byProposition <ref>, R(q;t,ν^*) ≤8c(∑^d-1_i=0(√(i+1)-√(i))√(C(i)))^2.Since √(i+1)-√(i)≤(i+1)^-1/2, it follows thatR(q;t,ν^*) ≤32 c(∑^L_l=1√(m_lV_l))^2.Using (<ref>) concludes the proof. § RELATION WITHSPLITTING AND CONDITIONAL MONTE CARLO Like the splitting algorithm described in <cit.> when d=2, our method samples more often important random variables.This section explores further the relation between our method and the splitting and conditional Monte Carlo methods.Fix n≥1 and assume for simplicity that the sequence (N_k) is deterministic. 
We first analyse the relation between our method and the conditional Monte Carlo method in the general case, then show thatthe generic dimension reduction algorithm for Markov chains estimation canbe efficiently cast as asplitting algorithm.§.§ Relation with conditional Monte CarloDefine𝒮_ivia (<ref>) for 0≤ i≤ d-1, and set𝒮_d={0}.For 0≤ i≤ d and m∈𝒮_i, let𝒮_i,m={k∈[m,n-1]:(m,k]∩𝒮_i=∅}, and f_i,m=1/|𝒮_i,m|∑^_k∈𝒮_i,mf(V^(k+1)).Thus, if (m,m') are consecutive elements of 𝒮_i∪{n}, then 𝒮_i,m={m,…, m'-1}.In particular, for 0≤ m≤ n-1, we have 𝒮_0,m={m}, and sof_m,0=f(V^(m+1)). Similarly, 𝒮_d,0={0,…,n-1} and f_d,0=f_n. For 0≤ i≤ d,m∈𝒮_i, and k ∈𝒮_i,m, we haveN_l≤ i for m< l≤ k, and sothelast d-icomponents of V^(k+1) and V^(m+1) arethe same. In other words,V^(k+1)_j=V_j^(m+1),for0≤ i<j≤dandk ∈𝒮_i,m.We can thus view f_i,mas a discrete analog to E(f(V^(m+1))|V_i+1^(m+1),…,V_d^(m+1)). The following proposition shows that the random variables f_i,m, for0≤ i≤ d and m∈𝒮_i,can be calculated inductively via (<ref>). This is reminiscent of the conditional Monte Carlo method, where an expectationis estimated via an average of conditional expectations.For 1≤ i≤ d and m∈𝒮_i,f_i,m=∑_k∈𝒮_i-1∩𝒮_i,m|𝒮_i-1,k|/|𝒮_i,m|f_i-1,k,and|𝒮_i,m|=∑_k∈𝒮_i-1∩𝒮_i,m|𝒮_i-1,k|. Proof. Wefirst show that𝒮_i,m=⋃_k∈𝒮_i-1∩𝒮_i,m𝒮_i-1,k.Ifk∈𝒮_i-1∩𝒮_i,m and k'∈𝒮_i-1,k, we have (m,k]∩𝒮_i=∅ and(k,k']∩𝒮_i-1=∅. As 𝒮_i⊆𝒮_i-1 and m≤ k≤ k', this implies that(m,k']∩𝒮_i=∅, and so k'∈𝒮_i,m.Thus ⋃_k∈𝒮_i-1∩𝒮_i,m𝒮_i-1,k⊆𝒮_i,m. Conversely, given k'∈𝒮_i,m, let k=max([0,k']∩𝒮_i-1).Since m∈[0,k']∩𝒮_i-1, the integer k is well-defined and m≤ k. Since k≤ k' and (m,k']∩𝒮_i=∅, we have (m,k]∩𝒮_i=∅. Hence k∈𝒮_i,m, and sok∈𝒮_i-1∩𝒮_i,m.Furthermore,(k,k']∩𝒮_i-1=∅ by (<ref>).Thusk'∈𝒮_i-1,k, and so 𝒮_i,m⊆⋃_k∈𝒮_i-1∩𝒮_i,m𝒮_i-1,k. This implies (<ref>).Moreover,if k∈𝒮_i-1 and j∈𝒮_i-1,k, then (k,j]∩𝒮_i-1=∅, and so k=max([0,j]∩𝒮_i-1). Thus,if k and k' are distinct elements of 𝒮_i-1, the sets 𝒮_i-1,k and 𝒮_i-1,k' are disjoint. Together with (<ref>), this immediately implies (<ref>) and (<ref>).§.§ The Markov chains caseUsing the same notation and assumptions as in <ref>, weshow how to cast the generic dimension reduction algorithm for Markov chains estimation as asplitting algorithm. For each integer k in[1,n], define the Markov chain(X_i^(k)), 0≤ i≤ d, by induction on i as follows: X_0^(k)=X_0 and X^(k)_i+1=g_i(X^(k)_i,V^(k)_d-i) for 0≤ i≤ d-1. Then it can be shown by induction that X_d^(k)=G_d(V^(k),X_0), and that g(X_d^(k))=f(V^(k)). The generic dimension reduction algorithm for Markov chains estimation described in <ref>thus outputs the average of g(X_d^(1)),…,g(X_d^(n)). Furthermore, it can be shown by induction that, for 0≤ i ≤ d and 1≤ k≤ n, the random variable X_i^(k) is a deterministic function of (V^(k)_j), d-i<j≤ d. On the other hand, by (<ref>), for 0≤ i≤ d and0≤ k≤ n-1, if m=max([0,k]∩𝒮_d-i) then k∈𝒮_d-i,m. The integer m is well-defined since 0∈𝒮_d-i. By (<ref>), the last i components of V^(k+1) and V^(m+1) arethe same, and so X^(k+1)_i=X_i^(m+1). However, for 0≤ i≤ d-1 and k∈𝒮_d-i-1, we have V^(k+1)_d-i=U^(k+1)_d-i since N_k≥ d-i, and soX^(k+1)_i+1=g_i(X^(m+1)_i,U^(k+1)_d-i),with m=max([0,k]∩𝒮_d-i). We cancalculateX_i^(k+1) by induction on i via (<ref>)for0≤ i≤ d andk∈𝒮_d-i. Recalling that 𝒮_0={0,…,n-1}, this allows us to simulate X_d^(1),…,X_d^(n). The generic dimension reduction algorithm for Markov chains estimation described in <ref> can thus be rewritten as follows:*Generate N_k for 1≤ k≤ n-1 and calculate the sets 𝒮_i, 0≤ i≤ d. 
*Set X_0^(1)=X_0.*For i=0,…,d-1 and k∈𝒮_d-i-1, sample a copy U^(k+1)_d-i of Y_i and calculateX_i+1^(k+1) via (<ref>).*Output the averageof g(X_d^(1)),…,g(X_d^(n)).Step 3 generates one or several copies of X_i+1 for each copy of X_i, and so the above algorithm may be viewed as a splitting algorithm. Assume now thatN_k=N̅_k for k≥1.Step 1 can then be implemented via (<ref>). In (<ref>),m=0 if i=0 and, by (<ref>),m=⌊ k/μ_d-i⌋μ_d-i if 1≤ i≤ d-1.The expected running times of this algorithm and of the algorithm in <ref>arewithin a constant from each other as they areboth proportional to the number of sampled copies of the U_i's. § FURTHER NUMERICAL EXPERIMENTS§.§ Comparison with Quasi-Monte CarloConsider the G_t/D/1 queue with the same parameters as in <ref>. Table <ref> estimates E(X_d), and Table <ref> gives VRFs in the estimation ofℙ(X_d> z), for selected values of z. Our numerical results for the RDR and DDR algorithms are similar to those of <ref>. Once again, forthe RDR and DDR algorithms, the variable Cost × Std^2 is roughly independent of d, and the variance reduction factors are roughly proportional to d. In contrast, for the QMC algorithm, the variable Cost × Std^2 is roughly proportional to d, and the variance reduction factors are roughly constant. The VRFs of the RDR and DDR algorithms in Table <ref> are, in general, greater than or equal to the corresponding VRFs in Table <ref>, which confirms the resiliency of these algorithms to discontinuities of g. In contrast, the VRFs of the QMC algorithmin Table <ref> are lower than the corresponding VRFsin Table <ref>. The RDR and DDR algorithmsoutperform the QMC algorithm. The VRFs of the QMC algorithm in Tables <ref> and <ref> are of the same order of magnitude as those obtained by <cit.>, who have reduced the variance in the simulation of a M/M/1 queue by afactor ranging between 5 and 10 via lattice rules.§.§ G_t/D/1 queue with time-varying amplitudeConsider a G_t/D/1 queue whereA_i has a Poisson distribution with time-varying rateλ_i=(1-1/ln (i+2))(0.75+0.5cos(π i/50)),for 1≤ i≤ d (recall that A_0=0). Thus, up to a time-varying multiplicative factor, λ_ihas the same expression as in <ref>. Table <ref> estimates E(X_d), and Table <ref> gives VRFs in the estimation ofℙ(X_d> z), for selected values of z. Our numerical results are similar to those of <ref>, except that there is a big difference in the means at different times.Once again, forthe RDR, DDR, and MLMC algorithms, the variable Cost × Std^2 is roughly independent of d, and the variance reduction factors are roughly proportional to d. The VRFs of the RDR and DDR algorithms in Table <ref>are, in most cases, greater than or equal to the corresponding VRFs in Table <ref>, which confirms the resiliency of these algorithms to discontinuities of g. In contrast, the VRFs of the MLMC algorithm in Table <ref> are lower than the corresponding VRFsin Table <ref>. The RDR algorithm outperforms the MLMC algorithm bya factorranging from 1 to 2in Table <ref>, and a factor ranging from 2 to 14in Table <ref>. §.§ A multi-frequency G_t/D/1 queueConsider a G_t/D/1 queue whereA_i has a Poisson distribution with time-varying rateλ_i=0.75+0.2cos(π i/50)+0.1cos(π i/5000)+0.05cos(π i/500000),for 1≤ i≤ d, with A_0=0. Table <ref> estimates E(X_d). Once again, forthe RDR, DDR, and MLMC algorithms, the variable Cost × Std^2 is roughly independent of d, and the variance reduction factors are roughly proportional to d.The RDR algorithm outperforms the MLMC algorithm byabout a factorof 1.5. 
§.§ Comparison with a long-runaverage estimatorIn several examples, when d is large, the Markov chain (X_m), 0≤ m≤ d, has some notion of stationarity, and so E(g(X_d))can be estimatedvia a suitable long-run average. A drawback of such an estimator is that it is biased. We compare below a long-run averageestimator with the RDR and DDR algorithms. In Tables <ref> through <ref>, the RDR and DDR algorithmsuse 10^9/dsamples, while the long-run algorithm uses 11×10^9/dsamples. Thus, for eachof the three algorithms and each d,thetotal number of simulations of the U_is throughout the independent samples is roughly 11×10^9. The bias of the long-run averageestimator is calculated by taking thedifference with the DDR estimator.For the GARCH volatility example of <ref>, a natural long-run averageestimator for (X_d > z) is 1/d∑^d_i=11{X_i>z}.Table <ref>compares this estimator with the RDR and DDR estimators.The work-normalized variance of each of the three algorithms is roughly independent of d. The work-normalized variance of the long-run averageestimatoris smaller than those of the RDR and DDR algorithms by about a factor of 3 and 2, respectively.The bias of the long-run averageestimator decreases asd increases, but is much larger than the standard deviations of the long-run averageand DDR estimators. For the G_t/D/1 queue example of <ref> (resp. <ref>), the arrival rate is periodic (resp. almost periodic) withperiod100, and so a long-run average estimator for E(X_d) is 1/⌈ d/100⌉∑^⌈ d/100⌉-1_i=0X_d-i*100.Tables <ref> and <ref> compares this estimator with the RDR and DDR estimators. The work-normalized variance of the long-run averageestimator is larger than those of the RDR and DDR estimators bya factor ranging from 1.5 and 2.2. The bias of the long-run algorithm is not reported in Table <ref> because it is not statistically significant. In Table <ref>, however, the bias of the long-run averageestimator is much larger than the standard deviations of the long-run averageand DDR estimators.Consider now theM_t/GI/1 queue example of <ref>, and assume that θ is an integer. In our experiments, we have set d=θ and X_i=W_i for 0≤ i≤ d,and so a long-run averageestimator for ℙ(W_θ> 1) is 1/⌈ d/100⌉∑^⌈ d/100⌉-1_i=01{X_d-i*100>1}.Table <ref> compares this estimator with the RDR and DDR estimators. The work-normalized variance of the long-run averageestimator is larger than those of the RDR and DDR estimators by about a factor of 1.3 and 2, respectively. Here again, the bias of the long-run algorithm is not reported because it is not statistically significant.In summary, for large d and Markov chains with periodic features, E(g(X_d)) can beestimated via a suitable biased long-run averageestimator. The order of magnitude of the biasdepends on the application and on the value ofd. The bias of the long-run averageestimator isdifficult to evaluate without using an alternative estimator, though. In the examples in this subsection, the work-normalized variances of the long-run average, RDR and DDR estimators have the same order of magnitude. In theG_t/D/1 queue example of <ref>, however, the arrival rate is periodic withperiod10^6. This example does not seem to admit a suitable long-runaverage estimator for the values of d listed in Table <ref>.
http://arxiv.org/abs/1708.07466v3
{ "authors": [ "Nabil Kahale" ], "categories": [ "stat.CO" ], "primary_category": "stat.CO", "published": "20170824154750", "title": "Randomized Dimension Reduction for Monte Carlo Simulations" }
Statistical mechanics of specular reflections from fluctuating membranes and interfaces David R. Nelson December 30, 2023 =======================================================================================[1]Mathematics Section, École Polytechnique Fédérale de Lausanne, Station 8, CH-1015 Lausanne, Switzerland, [email protected] [2]Université de Genève, Section de mathématiques, 2-4 rue du Lièvre, CP 64, CH-1211 Genève 4, Switzerland, [email protected], [email protected] A new explicit stabilized scheme of weak order one for stiffand ergodic stochastic differential equations (SDEs)is introduced. In the absence of noise, the new method coincides with the classical deterministic stabilized scheme (or Chebyshev method) for diffusion dominated advection-diffusion problems and it inherits its optimal stability domain size that grows quadratically with the number of internal stages of the method. For mean-square stable stiff stochastic problems, the scheme has anoptimal extended mean-square stability domain that grows at the same quadratic rate as the deterministic stability domain sizein contrast to known existing methods for stiff SDEs [A. Abdulle and T. Li. Commun. Math. Sci., 6(4), 2008, A. Abdulle, G. Vilmart, and K. C. Zygalakis, SIAM J. Sci. Comput., 35(4), 2013]. Combined with postprocessing techniques, the new methods achieve a convergence rate of order two for sampling the invariant measure of a class of ergodic SDEs, achieving a stabilized version of the non-Markovian scheme introduced in [B. Leimkuhler, C. Matthews, and M. V. Tretyakov, Proc. R. Soc. A, 470, 2014]. Keywords: explicit stochastic methods, stabilized methods, postprocessor, invariant measure, ergodicity, orthogonal Runge-Kutta Chebyshev, SK-ROCK, PSK-ROCK. AMS subject classification (2010): 65C30, 60H35, 65L20, 37M25 § INTRODUCTION We consider Itô systems of stochastic differential equations of the form dX(t) = f(X(t)) dt + ∑_r=1^mg^r(X(t)) dW_r(t), X(0)=X_0 where X(t) is a stochastic process with values in ^d, f:^d→^d is the drift term, g^r:^d→^d, r=1,…, m are the diffusion terms, and W_r(t), r=1,…,m, are independent one-dimensional Weiner processes fulfilling the usual assumptions. We assume that the drift and diffusion functions are smooth enough and Lipschitz continuous to ensure the existence and uniqueness of a solution of (<ref>) on a given time interval (0,T). We consider autonomous problems to simplify the presentation, but we emphasise that the scheme can also be extended to non-autonomous SDEs. A one step numerical integratorfor the approximation of (<ref>) at time t=nh is a discrete dynamical system of the formX_n+1 = Ψ(X_n,h,ξ_n)where h denotes the stepsize and ξ_n are independent random vectors.Analogously to the deterministic case, standard explicit numerical schemes for stiff stochastic problems, such as the simplest Euler-Maruyama method defined as X_n+1 = X_n + hf(X_n) + ∑_r=1^mg^r(X_n) Δ W_n,r, X(0)=X_0, where Δ W_n,r=W_r(t_n+1)-W_r(t_n) are the Brownian increments, face a severe timestep restriction <cit.>, and one can use an implicit or semi-implicit scheme with favourable stability properties. In particular, it is shown in <cit.> that the implicit θ-method of weak order one is mean-square A-stable if and only if θ≥ 1/2, while weak order two mean-square A-stable are constructed in <cit.>. An alternative approach is to consider explicit stabilized schemes with extended stability domains, as proposed in <cit.>. 
In <cit.> the deterministic Chebyshev method is extended to the context of mean-square stiff stochastic differential equations with Itô noise, while the Stratonovitch noise case is treated in <cit.>. In place of a standard small damping, the main idea in <cit.> is to use a large damping parameter η optimized for each number s of stages to stabilize the noise term. This yields a family of Runge-Kutta type schemes with extended stability domain with size L_s≃ 0.33 s^2. This stability domain size was improved to L_s≃ 0.42 s^2 in <cit.> where a family ofweak second order stabilized schemes (and strong order one under suitable assumptions) is constructed based on the deterministic orthogonal Runge-Kutta-Chebyshev method of order 2 (ROCK2) <cit.>.For ergodic SDEs, i.e., when (<ref>) has a unique invariant measureμ satisfyingfor each test function ϕ and for any deterministic initial condition X_0=x,lim_T →∞1/T∫_0^Tϕ(X(s))ds =∫_ϕ(y)dμ(y), ,one is interested in approximating numerically the long-time dynamics and to design numerical scheme with a unique invariant measure such that | lim_N →∞1/N+1∑_n=0^Nϕ(X_n)-∫_ϕ(y)dμ(y)|≤ C h^r, where C is independent of h small enough and X_0. In such a situation, we say that the numerical scheme has order r with respect to the invariant measure. For instance, the Euler-Maruyama method has order 1 with respect to the invariant measure. In <cit.> the following non-Markovian scheme with the same cost as the Euler-Maruyama method was proposed for Brownian dynamics, i.e where the vector field is a gradient f(x)=-∇ V(x) and the noise is additive (g(x)=σ), X_n+1 = X_n + hf(X_n) + σΔ W_n,j+Δ W_n+1,j/2, X(0)=X_0, and it was shown in <cit.> that (<ref>) has order 2 with respect to the invariant measure for Brownian dynamics. However, the admissible stepsizes for such an explicit method to be stable may face a severe restriction and alternatively to switching to drift-implicit methods, one may ask if a stabilized version of such an attractive non-Markovian scheme exists.In this paper we introduce a new family of explicit stabilized schemes with optimal mean-square stability domain of size L_s=Cs^2, where C≥ 2-4/3η and η≥ 0 is a smallparameter. We emphasize that in the deterministic case, L_s=2s^2 is the largest, i.e. optimal, stability domain along the negative real axis for an explicit s-stage Runge-Kutta method <cit.>.We note that the Chebyshev method (<ref>) (with η=0) realizes such an optimal stability domain. The new schemes have strong order 1/2 and weak order 1. The main ingredient for the design of the new schemes is to consider second kind Chebyshev polynomials, in addition to the usual first kind Chebyshev polynomials involved in the deterministic Chebychev method and stochastic extensions <cit.>. For stiffstochastic problems, the stability domain sizes are close to the optimal value 2s^2 and in the deterministic setting the method coincide with the optimal first orderexplicit stabilized method. Thus these methods are more efficient than previously introduced stochastic stabilized methods <cit.>. For ergodic dynamical systems, in the context of the ergodic Brownian dynamics, the new family of explicit stabilized schemes allows for a postprocessing <cit.> (see also <cit.> in the context of Runge-Kutta methods) to achieve order two of accuracy for sampling the invariant measure. In this context, our new methods can be seen as a stabilized version of the non-Markovian scheme (<ref>) introduced in <cit.>. This paper is organized as follows. 
In Section <ref>, we introduce the new family of schemes with optimal stability domain and we recall the main tools for the study of stiff integrators in the mean-square sense. We then analyze its mean-square stability properties (Section <ref>), and convergence properties (Section <ref>). In Section <ref>, using a postprocessor we present a modificationwith negligible overcost that yields order two of accuracy for the invariant measure of a class of ergodic overdamped Langevin equation. Finally, Section <ref> is dedicated to the numerical experiments that confirm our theoretical analysis and illustrate the efficiency of the new schemes. § NEW SECOND KIND CHEBYSHEV METHODS In this section we introduce our new stabilized stochastic method. We first briefly recall the concept of stabilized methods. In the context of ordinary differential equations (ODEs),dX(t)/dt = f(X(t)),X(0)=X_0,and the Euler method X_1=X_0+hf(X(0)), a stabilization procedurebased on recurrence formula has been introduced in <cit.>. Its construction relies on Chebyshev polynomials (hence the alternative name “Chebyshev methods"), T_s(cos x) =cos (s x)and it is based on the explicit s-stage Runge-Kutta methodK_0 =X_0,K_1 = X_0+h μ_1 f(K_0), K_i = μ_ihf(K_i-1)+ν_iK_i-1+κ_iK_i-2, j=2,…,s,X_1 =K_s,where ω_0=1+η/s^2,ω_1=T_s(ω_0)/T'_s(ω_0),μ_1=ω_1/ω_0, and for all i=2,…,s, μ_i=2ω_1T_i-1(ω_0)/T_i(ω_0),ν_i=2ω_0T_i-1(ω_0)/T_i(ω_0),κ_i=-T_i-2(ω_0)/T_i(ω_0)=1-ν_i.One can easily check that the (family of) methods (<ref>) has the same first order accuracy as the Euler method (recovered for s=1). In addition, the scheme (<ref>) has a low memory requirement (only two stagesshould be stored when applying the recurrence formula) and it has a good internal stability with respect to round-off errors <cit.>. The attractive feature of such a scheme comes from its stability behavior. Indeed, the method (<ref>) applied to thelinear test problem dX(t)/dt=λ X(t) yields, using the recurrence relation T_j(p) = 2p T_j-1(p) - T_j-2(p), where T_0(p)= 1,T_1(p)= p,with p=λ h,X_1=R_s,η(p) X_0=T_s(ω_0+ω_1 p)/T_s(ω_0) X_0,where the dependence of the stability function R_s,η on the parameters s and η is emphasized with a corresponding subscript.The real negative interval (-C_s(η)· s^2 ,0) is included in thestability domain of the methodS:={p∈ℂ; |R_s,η(p)|≤ 1}. The constant C_s(η) ≃ 2-4/3 η depends on the so-called damping parameter η and for η=0,it reaches the maximal value C_s(0)=2. Hence, given the stepsize h, for systems with a Jacobian having large real negative eigenvalues (such as diffusion problems) with spectral radius λ_max at X_n, the parameter s for the next step X_n+1 can be chosen adaptively as[The notation [x] stands for the integer rounding of real numbers.]s=[√(hλ_max+1.5/2-4/3 η)+0.5],see <cit.> in the context of deterministically stabilized schemes of order two with adaptative stepsizes. The method (<ref>)is much more efficient as its stability domain increases quadratically with the number s of function evaluations while a composition of sexplicit Euler steps(same cost) has a stability domain that only increases linearly with s. In Figure <ref>(a) we plot the complex stability domain {p∈ ;|R_s,η(p)| ≤ 1} for s=7 stages and different values η=0, η=0.05 and η=3.98, respectively. We also plot in Figure <ref>(b) the corresponding stability function R_s,η(p) as a function of p real, to illustrate that the stability domain along the negative real axis corresponds to the values for which |R_s,η(p)|≤1. 
We observe that in the absence of damping (η=0), the stability domain includes the large real interval [-2· s^2,0] of width 2· 7^2=98. However for all p that are a local extrema of the stability function, where |R_s,η(p)|=1, the stability domain is very thin and does not include a neighbourhood close to the negative real axis. To make the scheme robust with respect to small perturbations of the eigenvalues, it is therefore needed to add some damping and a typical value is η=0.05, see for instance the reviews <cit.>. The advantage is that the stability domain now includes a neighbourhood of the negative real axis portion. The price of this improvement is a slight reduction of the stability domain size C_η s^2, where C_η≃ 2-4/3η. Chebyshev methods have been first generalized for Itô SDEs in <cit.> (see <cit.> for Stratonovitch SDEs) with the following stochastic orthogonal Runge-Kutta-Chebyshev method (S-ROCK):[A variant with analogous stability properties is proposed in <cit.> with g^r(K_s) replaced by g^r(K_s-1) in (<ref>).] K_0 = X_0 K_1 = X_0+μ_1hf(X_0) K_i = μ_ihf(K_i-1)+ν_iK_i-1+κ_iK_i-2, i=2,…,s, X_1 = K_s + ∑_r=1^mg^r(K_s)Δ W_r,where the coefficients μ_i,ν_i,κ_i are defined in (<ref>),(<ref>). In contrast to the deterministic method (<ref>), where η is chosen small and fixed (typically η=0.05), in stochastic case for the classical S-ROCK method <cit.>, the damping η=η_s is not small and chosen as an increasing function of s that plays a crucial role in stabilizing the noise and in obtaining an increasing portion of the true stability domain (<ref>) as s increases. In the context of stiff SDEs, a relevant stability concept is that of mean-square stability. A test problem widely used in the literature is <cit.> , dX(t)= λ X(t)dt + μ X(t) dW(t), X(0)=1, in dimensions d=m=1 with fixed complex parameters λ,μ. Note that other stability test problem in multiple dimensions are also be considered in <cit.> and references therein. The exact solution of (<ref>) is called mean-square stable if lim_t→∞𝔼(|X(t)|^2)=0 and this holds if and only if (λ,μ) ∈^MS, where ^MS = {(λ,μ)∈ℂ^2 ; (λ)+1/2|μ|^2<0}. Indeed, the exact solution of (<ref>) is given by X(t)=exp((λ+1/2μ^2) t+μ W(t)), and an application of the the Itô formula yields (|X(t)|^2)=exp(( (λ)+1/2μ^2) t) which tends to zero at infinity if and only if (λ)+1/2μ^2<0. We say that a numerical scheme {X_n} for the test problem (<ref>) is mean-square stable if and only if lim_n→∞(|X_n|^2) =0. For a one-step integrator applied to the test SDE (<ref>), we obtain in general a induction of the form X_n+1= R(p,q,ξ_n)X_n, where p=λ h, q =μ√(h), and ξ_n is a random variable (e.g. a Gaussian ξ_n∼𝒩(0,1) or a discrete random variable). Using (|X_n+1|^2)= (|R(p,q,ξ_n)|^2) (|X_n|^2), we obtain the mean-square stability condition <cit.> lim_n →∞(|X_n|^2)=0 (p,q)∈_num, where we define 𝒮_num={(p,q)∈ℂ^2 ;|R(p,q,ξ)|^2<1 }. The function R(p,q,ξ_n) is called the stability function of the one-step integrator. For instance, the stability function of the Euler-Maruyama method (<ref>) reads R(p,q,ξ)=1+p+qξ and we have (|R(p,q,ξ)|^2)=(1+p)^2 + q^2. We say that a numerical integrator is mean-square A-stable if ^MS⊂_num. This means that the numerical scheme applied to (<ref>) is mean-square stable for all all h>0 and all (λ,μ) ∈^MS for which the exact solution of (<ref>) is mean-square stable. An explicit Runge-Kutta type scheme cannot however be mean-square stable because its stability domain _num is necessary bounded along the p-axis. 
Following <cit.>, we consider the following portion of the true mean-square stability domain 𝒮_a= {(p,q) ∈ (-a,0) × ; p+1/2 |q|^2 < 0}, and define for a given method L=sup{a>0 ; 𝒮_a⊂𝒮_num}. We search for explicit schemes for which the length L of the stability domain is large. For example, for the classical S-ROCK method <cit.>, the value η=3.98 is the optimal damping maximising L for s=7 stages and we can see in Figure <ref> that this damping reduces significantly the stability domain compared to the optimal deterministic domain. The new S-ROCK method, denoted SK-ROCK (for stochastic second kind orthogonal Runge-Kutta-Chebyshev method) introduced in this paper is defined as K_0 = X_0 K_1 = X_0+μ_1hf(X_0+ν_1 Q) +κ_1Q K_i = μ_ihf(K_i-1)+ν_iK_i-1+κ_iK_i-2, i=2,…,s. X_1 = K_s, where Q=∑_r=1^mg^r(X_0)Δ W_r, and μ_1=ω_1/ω_0,ν_1=s ω_1/2,κ_1=s ω_1/ω_0 and μ_i,ν_i,κ_i, i=2,…,s are given by (<ref>), with a fixed small damping parameter η. In the absence of noise (g^r=0,r=1,…, m, deterministic case), this method coincides with the standard deterministic order 1 Chebychev method, see the review <cit.>. We observe that the new class of methods (<ref>) is closely related to the standard S-ROCK method (<ref>). Comparing the two schemes (<ref>) and (<ref>), the two differences are on the one hand that the noise term is computed at the first internal stage K_1 for (<ref>), whereas it is computed at the final stage in (<ref>), and on the other hand, for the new method (<ref>) the damping parameter η involved in (<ref>) is fixed and small independently of s (typically η=0.05), whereas for the standard method (<ref>), the damping η is an increasing function of s, optimized numerically for each number of stages s. If we apply the above scheme (<ref>) tothe linear test equation (<ref>),we obtain X_n+1 = R(p,q,ξ_n) X_n,where (|R(p,q,ξ)|^2) = A(p)^2 + B(p)^2 q^2, and A(p)=T_s(ω_0+ω_1 p)/T_s(ω_0) B(p)=U_s-1(ω_0+ω_1 p)/U_s-1(ω_0)(1+ ω_1/2 p) correspond to the drift and diffusion contributions, respectively.The above stability function(see Lemma <ref> in Section <ref>) is obtained by using the recurrence relation for the first kind Chebyshev polynomials (<ref>) and the similar recurrence relation for the second kind Chebyshev polynomials U_j(p) = 2p U_j-1(p) - U_j-2(p), where U_0(p)= 1,U_1(p)=2p. Notice that the relation T_s'(p)=s U_s-1(p) between first and second kind Chebyshev polynomials will be repeatedly used in our analysis. In Figure <ref>(b)(d), we plot the mean-square stability domain of the SK-ROCK method for s=7 and s=20 stages, respectively and the same small damping η=0.05 as for the deterministic Chebyshev method. We observe that the stability domain has length L_s≃ (2-4/3η)s^2. For comparison, we also include in Figure <ref>(a)(c) the mean-square stability domain of the standard S-ROCK method with smaller stability domain size L_s≃ 0.33· s^2. In Figure <ref>, we plot the stability function (|R(p,q,ξ)|^2) in (<ref>) as a function of p for various scaling of the noise for s=7 stages and damping η=0.05. We see that it is bounded by 1 for p∈(-2(1-2/3η)s^2,0) which is proved asymptotically in Theorem <ref>. The case q=0 corresponds to the deterministic case, and we see in Figure <ref>(a), the polynomial (|R(p,0,ξ)|^2)=A(p)^2. 
Noticing that (|R(p,q,ξ)|^2) is an increasing function of q, the case q^2=-2p represented in Figure <ref>(c) corresponds to the upper border of the stability domain _L defined in (<ref>) (note that this is the stability function value along the dashed boundary in Figure <ref>), while the scaling q^2=-p in Figure <ref>(c) is an intermediate regime. In Figures <ref>(b)(c), we also include the drift function A(p)^2 (red dotted lines) and diffusion function B(p)^2 q^2 (blue dashed lines), and it can be observed that their oscillations alternate, which means that any local maxima of one function is close to a zero of the other function. This is not surprising because A(p) and B(p) are related to the first kind and second kind Chebyshev polynomials, respectively, corresponding to the cosine and sine functions. This also explains how a large mean-square stability domain can be achieved by the new SK-ROCK method (<ref>) with a small damping parameter η, in contrast to the standard S-ROCK method (<ref>) from <cit.> that uses a large and s-dependent damping parameter η with smaller stability domain size L_s≃ 0.33· s^2(see Figures <ref>(a)(c)). § MEAN-SQUARE STABILITY ANALYSIS In this section, we prove asymptotically that the new SK-ROCK methods have an extended mean-square stability domain with size Cs^2 growing quadratically as a function of the number of internal stages s, where the constant C ≥ 2-4/3η is the same as the optimal constant of the standard Chebyshev method in the deterministic case, using a fixed and small damping parameter η. Let s≥ 1 and η≥ 0. Applied to the linear test equation dX=λ X dt + μ X dW, the scheme (<ref>) yields X_n+1 = R(λ h,μ√(h),ξ_n) X_n where p=λ h, q=μ√(h),ξ_n ∼(0,1) is a Gaussian variable and the stability function given by R(p,q,ξ) = T_s(ω_0+ω_1 p)/T_s(ω_0) + U_s-1(ω_0+ω_1 p)/U_s-1(ω_0)(1+ ω_1/2 p) q ξ. Indeed, we take advantage that T_j and U_j have the same recurrence relations (<ref>),(<ref>), and only the initialization changes with T_1(x)= x and U_1(x)= 2x, we deduce Q=X_0 μ√(h)ξ, and we obtain by induction on i≥ 1, K_i=T_i(ω_0+ω_1p)/T_i(ω_0)X_0+U_i-1(ω_0+ω_1p)/T_i(ω_0) (1+ω_1p/2)s ω_1Q and we use T_s'(x)=xU_s-1(x) and sω_1/T_s(ω_0)=1/U_s-1(ω_0), which yields the result for X_1=K_s. For a positive damping η, we prove the following main result of this section, showing that a quadratic growth L≥ (2-4/3 η)s^2 of the mean-square stability domain defined in (<ref>) is achieved for all η small enough and all stage number s large enough. There exists η_0>0 and s_0 such that for all η∈ [0,η_0] and all s≥ s_0, for all p∈ [-2ω_1^-1,0] and p + 1/2|q|^2 ≤ 0, we have (|R(p,q,ξ)|^2) ≤ 1. We deduce from Theorem <ref>, that the mean-square stability domain size (<ref>) of SK-ROCK grows as (2-4/3 η)s^2 which is arbitrarily close to the optimal stability domain size 2s^2 for η→ 0. Indeed, for s→∞ and all η≤η_0, we have 2ω_1^-1s^-2→ 2tanh(√(2η))/√(2η) = 2-4/3 η + (η^2) and for all s,η, we have 2ω_1^-1≥ (2-4/3 η) s^2. In addition, in the special case of a zero damping (η=0), the stability function (<ref>) reduces to R(p,q,ξ) = T_s(1+p/s^2) + s^-1U_s-1(1+p/s^2)(1+p/2s^2) q ξ, and it holds (|R(p,q, ξ)|^2) ≤ 1, for all s≥1, for all p∈[-2s^2,0] and all q ∈ such that p+|q|^2/2 ≤ 0. Indeed, for p∈[-2s^2,0], we denote cosθ = 1+p/s^2∈[-1,1] and usingT_s(cos(θ))=cos(sθ),sin(θ) U_s-1(cos(θ))=sin(sθ),we obtain (|R(p,q, ξ)|^2) ≤(|R(p,√(-2p))= cos(sθ)^2+ sin(sθ)^2 1+cosθ/2≤ 1, where we used -2p=2s^2(1-cosθ), 1+p/2s^2 = 1+cosθ/2 and sin^2 θ = (1+cosθ)(1-cosθ). 
Before we prove Theorem <ref>, we have the following lemma, see <cit.> for analogous results. We have the following convergences as s→∞ to analytic functions[ Note that for z<0, we can use √(2z)=i√(-2z) and obtain T_s(1+z/s^2)→cos(√(-2z)) for s→∞ and similarly α(z)=(√(-2z)).] uniformly for z in any bounded set of the complex plan, T_s(1+z/s^2)→cosh√(2z), s^-1U_s-1(1+z/s^2) →α(z):= sinh√(2z)/√(2z), ω_1s^2 →Ω(η)^-1, Ω(η):=tanh√(2η)/√(2η). We prove the uniform convergence of the first limit only, since it will be useful in the proof of the next theorem. The others can be proved in a similar way. First, let us write the two functions of η in Taylor series, s^-1U_s-1(ω_0) = s^-1∑_n=0^s-1U_s-1^(n)(1)/n!(η/s^2)^n =∑_n=0^s-1( 1/n!∏_k=0^n(1-k^2/s^2)∏_k=0^n1/2k+1)η^n, α(η) = ∑_n=0^∞2^nη^n/(2n+1)! =∑_n=0^∞(1/n!∏_k=0^n1/2k+1)η^n, where we used the formula sU_s-1^(n-1)(1)=T_s^(n)(1)=∏_k=0^n-1s^2-k^2/2k+1. Subtracting the above two identities, we deduce sup_η∈[-η_0,η_0]|s^-1U_s-1(ω_0)-α(η)| ≤ ∑_n=0^s-1η_0^n/n!(1-∏_k=0^n(1-k^2/s^2))∏_k=0^n1/2k+1 +∑_n=s^∞η_0^n/n!≤ ∑_n=0^s-1η_0^n/n!(1-∏_k=0^n(1-k^2/s^2))1/2s-1 +∑_n=s^∞η_0^n/n! Noticing that η_0^n/n!(1-∏_k=0^n(1-k^2/s^2))1/2s-1 converges to zero as s→∞ and is bounded by η_0^n/n! for all integers s,n, which is the general term of the convergent series of exp(η_0)=∑_n=0^∞η_0^n/n!, the Lebesgue dominated convergence theorem implies that (<ref>) converges to zero as s→∞, which concludes the proof. For all η small enough and all s large enough, we have the following estimate: s^2 ω_1/T_s(w_0)^21-(1-ω_1)^2/1-(ω_0-ω_1)^2≤ 1 Using the Lemma <ref> we have for s→∞, uniformly for all η∈[0,η_0], s^2ω_1/T_s(ω_0)^2→2√(2η)/sinh(2√(2η)) and 1-(1-ω_1)^2/1-(ω_0-ω_1)^2→1/1-Ω(η)η. Now if we expand both functions in Taylor series we get: 2√(2η)/sinh(2√(2η))=1-4/3η+(η^2),1/1-Ω(η)η=1+η+(η^2), and this implies that for all s large enough and all η≤η_0, s^2 ω_1/T_s(w_0)^21-(1-ω_1)^2/1-(ω_0-ω_1)^2≤ (1-4/3η_0+(η_0^2))(1+η_0+(η_0^2))=1-1/3η_0+(η_0^2), which is less than 1 for η_0 small enough. Numerical evidence suggests that the result of Theorem <ref> holds for all s≥ 1 and all η≥ 0. Indeed, it can be checked numerically that (<ref>) holds for all η∈(0,1) and all s≥1. Setting x=w_0+w_1p, a calculation yields (|R(p,q,ξ)|^2)≤ (|R(p,√(-2p),ξ)|^2)= T_s(x)^2/T_s(w_0)^2 + U_s-1(x)^2/U_s-1(w_0)^2 (1+w_1/2p)^2 (-2p) The proof is conducted in two steps, where we treat separately the cases p∈ [-2ω_1^-1,-1] and p∈[-1,0]. For the first case p∈ [-2ω_1^-1,-1], which corresponds to x∈ [-1+η/s^2,ω_0-ω_1], we have (|R(p,q,ξ)|^2) = T_s(x)^2/T_s(w_0)^2 + U_s-1(x)^2/U_s-1(w_0)^2(1-w_0-x/2)^2 2w_0-x/w_1= T_s(x)^2/T_s(w_0)^2 + U_s-1(x)^2(1-x^2) Q_s(x) where we denote Q_s(x)= s^2 ω_1/T_s(w_0)^2(1+x-η/s^2/2) 1-(x-η/s^2)^2/1-x^2 First, we note that 1+x-η/s^2/2∈ [0,1-ω_1/2]. Next, using η/s^2≤ 2, we deduce d/dx( 1-(x-η/s^2)^2/1-x^2) = 2η/s^21+x^2-η/s^2 x/(1-x^2)^2≥ 2η/s^2(1-x)^2/(1-x^2)^2≥ 0. Thus, 1-(x-η/s^2)^2/1-x^2 is an increasing function of x, smaller than its value at x=ω_0-ω_1, 1-(x-η/s^2)^2/1-x^2≤1-(1-ω_1)^2/1-(ω_0-ω_1)^2 Using Lemma <ref> we obtain |Q_s(x)| ≤ 1. This yields (|R(p,q,ξ)|^2) ≤ T_s(x)^2 + U_s-1(x)^2(1-x^2) =1. 
For the second case p∈[-1,0] which corresponds to x∈ [ω_0-ω_1,ω_0], we deduce from T_s(x)^2 + U_s-1(x)^2(1-x^2) =1 that (|R(p,q,ξ)|^2) ≤ 1/T_s(w_0)^2 + U_s-1(x)^2/U_s-1(w_0)^2( (1+w_1/2p)^2 (-2p)-(1-x^2)U_s-1(ω_0)^2/T_s(w_0)^2) Using Lemma <ref>, we get (|R(p,q,ξ)|^2) ≤ 1/T_s(w_0)^2 + U_s-1(x)^2/U_s-1(w_0)^2( (1+w_1/2p)^2 (-2p)-(1-x^2)U_s-1(ω_0)^2/T_s(w_0)^2)→ l(η,p):=1/cosh^2√(2η)+α(η+p/Ω(η))^2/α(η)^2(-2p(Ω(η)-1)+2Ω(η)^2η), for s→∞, where the above convergence is uniform for p∈[0,1],η≤η_0. Using the fact that Ω(η)=1-2/3η+O(η^2), we deduce ∂ l/∂η|_η=0=-2+α(p)^2(-4/3p+2). By Taylor series in the neighbourhood of zero we have α(p)^2=1+2/3p+8/45p^2+O(p^3), and for p∈[-1,0], α(p)^2≤ 1+2/3p+8/45p^2, thus for all p∈[-1,0], ∂ l/∂η|_η=0≤-2+(1+2/3p+8/45p^2)(-4/3p+2) =-8/135p^2(4p+9)≤0. Therefore, there exists η_0 small enough such that for all p∈[-1,0],η≤η_0, l(η,p)≤ l(0,p)=1. This concludes the proof of Theorem <ref>. § CONVERGENCE ANALYSIS We show in this section that the proposed scheme (<ref>) has strong order 1/2 and weak order 1 for general systems of SDEs of the form (<ref>) with Lipschitz and smooth vector fields, analogously to the simplest Euler-Maruyama method. We denote by C_P^4(^d,^d) the set of functions from ^d to ^d that are 4 times continuously differentiable with all derivatives with at most polynomial growth. The following theorem shows that the proposed SK-ROCK has strong order 1/2 and weak order 1 for general SDEs. Consider the system of SDEs (<ref>) on a time interval of length T>0, with f,g ∈ C_P^4(^d,^d), Lipschitz continuous. Then the scheme (<ref>) has strong order 1/2 and weak order 1, |( X(t_n) - X_n ) ≤ Ch^1/2, t_n=nh≤ T, | (ϕ(X(t_n))) - (ϕ(X_n)) | ≤ Ch, t_n=nh≤ T, for all ϕ∈ C_P^4(^d,), where C is independent of n,h. For the proof the Theorem <ref>, the following lemma will be useful. It relies on the linear stability analysis of Lemma <ref>. The scheme (<ref>) has the following Taylor expansion after one timestep, X_1 = X_0 + hf(X_0) + ∑_r=1^mg^r(X_0)Δ W_r + h (T_s”(ω_0)ω_1^2/T_s(ω_0) + ω_1/2) f'(X_0) ∑_r=1^mg^r(X_0)Δ W_r + h^2 R_h(X_0), where all the moments of R_h(X_0) are bounded uniformly with respect to h assumed small enough, with a polynomial growth with respect to X_0. Using the definition (<ref>) of the scheme and the recurrence relations (<ref>),(<ref>), we obtain by induction on i=1,…,s, K_i= X_0 + h T_i'(ω_0)ω_1/T_i(ω_0) f(X_0) + sT_i'(ω_0)ω_1/iT_i(ω_0)∑_r=1^mg^r(X_0)Δ W_r + h (sT_i”(ω_0)ω_1^2/iT_i(ω_0) + sT_i'(ω_0)ω_1^2/2iT_i(ω_0)) f'(X_0) ∑_r=1^mg^r(X_0)Δ W_r + h^2 R_i,h(X_0), and R_i,h(X_0) has the properties claimed on R_h(X_0). Using ω_1 = T_s(ω_0)/T_s'(ω_0), this yields the result for X_1=K_s. A well-known theorem of Milstein <cit.> (see <cit.>) allows to infer the global orders of convergence from the error after one step. We first show that for all r∈ℕ the moments (|X_n|^2r) are bounded for all n,h with 0≤ nh ≤ T uniformly with respect to all h sufficiently small. Then, it is sufficient to show the local error estimate |(ϕ(X(t_1))) -(ϕ(X_1))| ≤ C h^2, for all initial value X(0)=X_0 and where C has at most polynomial growth with respect to X_0, to deduce the weak convergence estimate (<ref>). For the strong convergence (<ref>), using the classical result from <cit.>, it is sufficient to show in addition the local error estimate (X(t_1) - X_1) ≤ C h for all initial value X(0)=X_0 and where C has at most polynomial growth with respect to X_0. These later two local estimates are an immediate consequence of Lemma <ref>. 
To conclude the proof of the global error estimates, it remains to check that for all r∈ℕ the moments (|X_n|^2r) are boundeduniformly with respect to all h small enough for all 0≤ nh ≤ T. We use here the approach of <cit.> which states that it is sufficient to show |(X_n+1-X_n| X_n)| ≤ C (1+| X_n|) h,| X_n+1- X_n| ≤ M_n (1+| X_n|) √(h), where C is independent of h and M_n is a random variable with moments of all orders bounded uniformly with respect to all h small enough. These estimates are a straightforward consequence of the definition (<ref>) of the scheme and the linear growth of f,g (a consequence of theirLipschitzness). This concludes the proof of Theorem <ref>. In the case of additive noise, i.e. g^r,r=1,…,m are constant functions, one can show that the order of strong convergence (<ref>) become 1, analogously to the case of the Euler-Maruyama method. For a general multiplicative noise, a scheme of strong order one can also be constructed with (|R(p,q,ξ)|^2) ≤ 1 for all p∈[-2ω_1^-1,0] and all q with p+|q|^2/2≤ 0, as it can be check numerically. The idea is to modify the first stages of the scheme such that the stability function (<ref>) becomes R(p,q,ξ) = T_s(ω_0+ω_1 p)/T_s(ω_0) + U_s-1(ω_0+ω_1 p)^2/U_s-1(ω_0)^2(1+ w_1/2 p-ω_1^4/2 p^2)(q ξ+q^2ξ^2-1/2). We refer to <cit.> for details. § LONG TERM ACCURACY FOR BROWNIAN DYNAMICS In this section we discuss the long-time accuracy of the SK-ROCK for Brownian dynamics (also called overdamped Langevin dynamics). We will see that using postprocessing techniques we can derive an SK-ROCK method that captures the invariant measure of Brownian dynamics with second order accuracy. In doing so, we do not need our stabilized method to be of weak order 2 on bounded time intervals and we obtain a method that is cheaper than the stochastic orthogonal Runge-Kutta-Chebyshev method of weak order 2 (S-ROCK2) proposed in <cit.>, as S-ROCK2 uses many more function evaluations per time-step and a smaller stability domain. §.§ An exact SK-ROCK method for the Orstein-Uhlenbeck process We consider the 1-dimensional Orstein-Uhlenbeck problem with 1-dimensional noise with constants δ,σ>0,dX(t) = -δ X(t)dt+ σ dW(t), that is ergodic and has a Gaussian invariant measure with mean zero and variance given by lim_t→∞(X(t)^2)=σ^2/(2δ). Applying the SK-ROCK method to the above system we obtain X_n+1=A(p)X_n+B(p)σ√(h)ξ_n where p=-δ h, ξ_n ∼(0,1) is a Gaussian variable and similarly as for(<ref>) we have A(p)=T_s(ω_0+ω_1 p)/T_s(ω_0), B(p)= U_s-1(ω_0+ω_1 p)/U_s-1(ω_0)(1+ ω_1/2 p). A simple calculation (using that |A(p)|<1) gives lim_n→∞(X_n^2)=σ^2/2δ R(p), R(p)= 2p B(p)^2/A(p)^2-1. From the above equation, we see that the SK-ROCK method has order r for the invariant measure of (<ref>) if and only if R(p)=1+ O(p^r) and a short calculation using (<ref>) reveals that R(p)=1+ O(p), it has order one for the invariant measure (this is of course not surprising because the SK-ROCKhas weak order one). We next apply the techniques of postprocessed integrators popular in the deterministic literature <cit.> and proposed in the stochastic context in <cit.>. The idea is to consider a postprocessed dynamics X_n=G_n(X_n) (of negligible cost) such that the process X_n approximates the invariant measure of the dynamical system with higher order. For the process (<ref>), we consider the postprocessor X_n =X_n+cσ√(h)ξ_n, which yields lim_n→∞(X_n^2)=σ^2/2δ (R(p)-2c^2p). In the case of the SK-ROCK method with η=0 (zero damping), we have A(p)=T_s(1+p/s^2), B(p)=U_s-1(1+p/s^2)(1+p/(2s^2))/s. 
Setting c=1/(2s) and using the identity (1-x^2)U_s-1^2(x)=1-T_s^2(x) with x=1+p/s^2 reveals that R(p)-2c^2p=1 and we obtainlim_n→∞(X_n^2)=σ^2/2δ.Hence the postprocessed SK-ROCK method (that will be denoted PSK-ROCK) captures exactly the invariant measure of the 1-dimensional Orstein-Uhlenbeck problem (<ref>). Such a behavior is known for the drift-implicit θ method with θ=1/2 (see <cit.> in the context of the stochastic heat equation) and has recently also been shown for the non-Markovian Euler scheme <cit.>. In <cit.> an interpretation of the scheme <cit.> as an Euler-Maruyama method with postprocessing (<ref>) with c=1/2 has been proposed and we observe that this is exactly the same postprocessor as for the PSK-ROCK method (with s=1,η=0). As the SK-ROCK method with zero damping is mean-square stable (see Remark <ref> for η=0), it can be seen as a stabilized version of the scheme <cit.>. However, the PSK-ROCK method with s>1 and zero damping is not robust to use as its stability domain along the drift axis does not allow for any imaginary perturbation at the points where|T_s(1+p/s^2)|=1 and it is not ergodic (see Remark <ref> below).Stability analysis for Orstein-Uhlenbeck Let M ∈^d× d denote a symmetric matrix with eigenvalues -λ_d ≤…≤ -λ_1 < 0, and consider the d-dimensional Orstein-Uhlenbeck problem dX(t) = MX(t)dt + σ dW(t) where W(t) denotes a d-dimensional standard Wiener process. The following theorem shows that the damping parameter η>0 plays an essential role to warranty the convergence to the numerical invariant measure ρ^h_∞(x)dx at an exponentially fast rate. Let η>0. Consider the scheme (<ref>) with postprocessor (<ref>) applied to (<ref>) with stepsize h and stage parameter s such that 2ω_1^-1≥ hλ_d. Then, for all h≤η/λ_1,ϕ∈ C_P^1(^d,), |(ϕ(X_n)) - ∫_^dϕ(x)ρ^h_∞(x)dx | ≤ Cexp(-λ_1(1+η)^-1 t_n) where C is independent of h,n,s,λ_1,…,λ_d. It is sufficient to show the estimate |A(-λ_j h)| ≤exp(-λ_1(1+η)^-1 h) for all h≤ h_0, where we denote A(z)=T_s(ω_0 +ω_1 z)/T_s(ω_0). Indeed, considering two initial conditions X_0^1,X_0^2 for (<ref>) and the corresponding numerical solutions X_n^1,X_n^2(obtained for the same realizations of {ξ_n}) with postprocessors X_n^1,X_n^2, we obtain X_n^1 - X_n^2 = A(hM) (X_n-1^1 - X_n-1^2) and using the matrix 2-norm A(hM) =max_j |A(-λ_j h)| and (<ref>), we deduce by induction on n, X_n^1 - X_n^2 = X_n^1 - X_n^2≤exp(-λ_1(1+η)^-1 t_n) X_0^1 - X_0^2, and taking X_0^2 distributed according to the numerical invariant measure yields the result. For the proof of (<ref>), let z=-λ_j h. Consider first the case z∈ (-ηω_1^-1s^-2,0). Using the convexity of A(z) on [-ηω_1^-1s^-2,0] (note that T_s'(x) is increasing on [1,∞)), we can bound A(z) by the affine function passing by the points (x_1,A(x_1)),(x_2,A(x_2)) with x_1=-ηω_1^-1s^-2, x_2=0, A(z) ≤ 1 + z(1-1/T_s(ω_0))η^-1ω_1s^2 Using ω_1s^2≥ 1 and T_s(ω_0)≥ 1+η, we obtain (1-1/T_s(ω_0))η^-1ω_1s^2≥ (1-(1+η)^-1)η^-1 = (1+η)^-1. This yields for all z∈ [-ηω_1^-1s^-2,0], A(z) ≤ 1 + z(1+η)^-1≤exp(z(1+η)^-1) where we used the convexity of exp(z(1+η)^-1) bounded from below by its tangent at z=0. We obtain A(-λ_j h) ≤ e^-λ_j h (1+η)^-1≤ e^-λ_1 h (1+η)^-1. We now consider the case z∈ [-L_s,-ηω_1^-1s^-2]. We have |ω_0 +ω_1 z|≤ 1, thus |T_s(ω_0 +ω_1 z)|≤ 1 and |A(z)| ≤ 1/T_s(ω_0) ≤exp(-λ_1(1+η)^-1 h) for all h≤ (1+η)log (T_s(ω_0))/λ_1, and thus also for h≤η/λ_1 where we use T_s(ω_0) ≥ 1+η and (1+η)log (1+η) ≥η. This concludes the proof. Note that η>0 is a crucial assumption in Theorem <ref>. 
Indeed, the estimate of Theorem <ref> is false for η=0 already in dimension d=1 for all s>1: for a stepsize h such that 1-hλ_1/s^2=cos(π/s) we obtain A(-λ_1h)=-1 and B(-λ_1h)=0 in (<ref>)(corresponding to the local extrema p=-λ_1 h of A(p) closest to zero) and X_n=(-1)^n X_0 for all n, and the scheme is not ergodic. In addition, notice that Theorem <ref> allows to use an h-dependent value of η such as η= hλ_1 where λ_1≥λ_1 is an upper bound for λ_1. In this case, the exponential convergence of Theorem <ref> holds for all stepsize h≤ 1. We end this section by noting that being exact for the invariant measure of Brownian dynamics (<ref>) is only true for the PSK-ROCK method (or the method in <cit.>)in the linear case, i.e. for a quadratic potential V.Second order accuracy for the invariant measure has been shown in <cit.> (see also <cit.>) for the method in (<ref>) (equivalent to PSK-ROCK with s=1 stage) for general nonlinear Brownian dynamics (<ref>). This will also be shown for the nonlinear PSK-ROCK method in the next section. §.§ PSK-ROCK: a second order postprocessed SK-ROCK method for nonlinear Brownian dynamics We consider the overdamped Langevin equation, dX(t) = -∇ V(X(t))dt + σ dW(t), where the stochastic process X(t) takes values in ^d and W(t) is a d-dimensional Wiener process. We assume that the potential V:^d → has class C^∞ and satisfies the at least quadratic growth assumption x^T∇ V(x) ≥ C_1x^T x -C_2 for two constants C_1,C_2>0 independent of x∈^d. The above assumptions warranty that the system (<ref>) is ergodic with exponential convergence to a unique invariant measure with Gibbs density ρ_∞ = Z exp(-2σ^-2V(x)), |(ϕ(X(t)) - ∫_^dϕ(x) ρ_∞(x) dx| ≤ C e^-λ t, for test function ϕ and all initial condition X_0, where C,λ are independent of t. We propose to modify the internal stage K_1=X_0+μ_1hf(X_0+ν_1Q)+κ_1Q of the method (<ref>) as follows: K_1=X_0+μ_1hf(X_0+ν_1Q)+κ_1Q+α h(f(X_0+ν_1Q)-2 f(X_0)+f(X_0-ν_1Q)), where α is a parameter depending on s and η given inTheorem <ref> below. Notice that for α=0, we recover the original definition from (<ref>). We note that the parameter α does not modify the stability function of Lemma <ref>, and yields a perturbation of order (h^2) in the definition of X_1. Thus, the results of Theorem <ref> and Theorem <ref> remain valid for any value of α for the scheme (<ref>) with modified internal stage (<ref>). Consider the Brownian dynamics (<ref>), where we assume that V:^d → has class C^∞, with ∇ V globally Lipschitz and satisfying (<ref>). Consider the scheme (<ref>) applied to (<ref>) with modified internal stage K_1 defined in (<ref>) with α defined in (<ref>), and the postprocessor defined as X_n = X_n + cσ√(h)ξ, where c^2 = -1/4 + ω_1/2 +ω_1T_s”(ω_0)/T_s'(ω_0)-ω_1^2T_s”(ω_0)/4T_s(ω_0), α =2/sω_0ω_1 (c^2+ ω_1^2T_s”(ω_0)/2 T_s(ω_0)-r_s), andr_s is defined by induction as r_0=0, r_1=s^2ω_1^3/4ω_0:=Δ_1 and r_i=ν_ir_i-1+κ_ir_i-2+Δ_i,Δ_i=μ_isT_i-1'(ω_0)ω_1/(i-1)T_i-1(ω_0), i=2,… s. Then, X_n yields order two for the invariant measure, i.e. (<ref>) holds with r=2, and in addition |(ϕ(X_n) - ∫_^dϕ(x) ρ_∞(x) dx| ≤ C_1 e^-λ t_n + C_2h^2, for all t_n=nh, ϕ∈ C_P^∞(^d,), where C_1,C_2 are independent of h assumed small enough, and C_2 is independent of the initial condition X_0. The proof of Theorem <ref> relies on the following postprocessing analysisfrom <cit.>. Consider a scheme (<ref>) with bounded moments and assumed ergodic when applied to (<ref>), where the potential V satisfies the above ergodicity assumptions. 
Assume that the scheme has a weak Taylor expansion after one time step of the form (ϕ(X_1)|X_0=x) = ϕ(x) + hϕ(x) + h^2𝒜_1 ϕ(x) + (h^3), and consider a postprocessor of the form X_n = G_n(X_n) where (ϕ(X_1)|X_1=x) = ϕ(x) + h𝒜_1 ϕ(x) + (h^3), where the constants inin (<ref>),(<ref>) have at most a polynomial growth with respect to x. Here ℒϕ=ϕ'f+σ^2/2 Δϕ denotes generator of the SDE and 𝒜_1,𝒜_1 are linear differential operators with smooth coefficients. Note that 𝒜_1ℒ^2/2 in general (otherwise the scheme has weak order 2). If thecondition (A_1 + [,𝒜_1])^* ρ_∞ =0 holds, equivalently, ⟨𝒜_1ϕ +[,𝒜_1] ϕ⟩ = 0 for all test function ϕ, where we define ⟨ϕ⟩ = ∫_ℝ^dϕρ_∞ dx, then it is shown in <cit.> that X_n has order two for the invariant measure, i.e. the convergence estimates (<ref>) with r=2 and (<ref>) hold. Before we can apply the above result, the following lemma allow to compute the weak Taylor expansion of the modified scheme. Consider the scheme (<ref>) with modified stage (<ref>) and assume the hypotheses of Theorem <ref>. Then (<ref>) holds where the linear differential operator 𝒜_1 is given by 𝒜_1ϕ = 1/2ϕ”(f,f)+ σ^2/2∑_i=1^dϕ”'(e_i,e_i,f) + σ^4/8∑_i,j=1^d ϕ^(4)(e_ie_i,e_j,e_j) + c_2 ϕ'f'f + c_3 σ^2/2ϕ' ∑_i=1^df”(e_i,e_i) + c_4 σ^2 ∑_i=1^d ϕ”(f'e_i,e_i), where f=-∇ V(x) and c_2 = ω_1^2T_s”(ω_0)/2 T_s(ω_0), c_3 = r_s+ω_0/sω_1α, c_4 = T_s”(ω_0)ω_1/T_s'(ω_0) + ω_1/2. Adapting the proof of Lemma <ref>, the internal stage K_i defined in (<ref>) (and (<ref>) for i=1) satisfies (<ref>) where h^2R_i,h(X_0) can be replaced by ω_1^2T_i”(ω_0)/2 T_i(ω_0) h^2f'(X_0)f(X_0) + r̃_i σ^2/2 f”(X_0)(ξ_n,ξ_n) + h^5/2R_i +h^3 R_i,h(X_0), where (R_i)=0 and all the moments of R_i,R_i,h(X_0) are bounded with polynomial growth with respect to X_0. Here, r̃_i is defined by induction as r̃_0=0, r̃_1=Δ_1+α, and r̃_i=ν_ir̃_i-1+κ_ir̃_i-2+Δ_i, i=2,…,s. We have (R_i)=0 because R_i is a linear combination of f'(X_0)f'(X_0)ξ_n, f”(X_0)(f(X_0),ξ_n), and f”'(X_0)(ξ_n,ξ_n,ξ_n) with zero mean values (recall that odd moments of ξ_n vanish). Next, observing that the difference d_i=r̃_i-r_i satisfies d_0=0,d_1=α, and d_i=ν_id_i-1+κ_id_i-2,i=2,…,s, we deducer̃_i=r_i+d_i, d_i=U_i-1(ω_0)/T_i(ω_0)ω_0α  ∀  i=0,..,s. In particular, taking i=s in (<ref>),(<ref>), and expanding (<ref>), we deduce that (<ref>) holds with c_2,c_3,c_4 defined in (<ref>) where we note that c_3=r_s=r_s+d_s. Following the proof of <cit.> (see also <cit.>) where we apply repeatedly integration by parts for the integral in (<ref>), using Lemma <ref> for the expression of 𝒜_1, we deduce that the quantity in (<ref>) satisfies ⟨𝒜_1ϕ +[,𝒜_1]ϕ⟩ = ∑_i=1^d⟨(c_3-c_2-c^2) σ^2/2ϕ' f”(e_i,e_i) + (c_4-1/4 - c_2/2 -c^2) σ^2 ϕ”(f'e_i,e_i) ⟩, where we use [,𝒜_1]= -c^2σ^2(1/2 ϕ' ∑_i=1^df”(e_i,e_i) +∑_i=1^d ϕ”(f'e_i,e_i) ) for 𝒜_1ϕ =c^2σ^2/2 Δϕ. We see that the above quantity (<ref>) vanished if c_3-c_2-c^2=c_4-1/4 - c_2/2 -c^2=0, equivalently, c_3-c_2= c^2 = c_4-1/4 - c_2/2. For the values of α,c defined in (<ref>), we obtain that (<ref>) indeed holds and we deduce that the order two condition (<ref>) for the invariant measure is satisfied. This concludes the proof. § NUMERICAL EXPERIMENTS In this Section, we illustrate numerically our theoretical analysis and we show the performance of the proposed SK-ROCK method and its postprocessed modification PSK-ROCK. §.§ A nonlinear nonstiff problem We first consider the following non-stiff nonlinear SDE, dX=(1/4X+1/2√(X^2+1))dt+√(1/2(X^2+1))dW, X(0)=0. whose exact solution is X(t)=sinh(t/2+W(t)/√(2)). 
In Figure <ref>, we consider the SK-ROCK method (<ref>) and plot the strong error (|X(T)-X_N| and the weak error |(arcsinh(X(T))-(arcsinh(X_N))| at the final time T=Nh=1 using 10^4 samples and number of stages s=1,5,10,100. We obtain convergence slopes 1 and 1/2, respectively, which confirms Theorem <ref> stating the strong order 1/2 and weak order 1 of the proposed scheme. Note that s=1 stage is sufficient for the stability of the scheme in the non-stiff case. The results for s=5,10,100 yield nearly identical curves which illustrates that the error constants of the method are nearly independent of the stage number of the scheme. §.§ Nonlinear nonglobally Lipschitz stiff problems Consider the following nonlinear SDE in dimensions d=2 with a one-dimensional noise (d=2,m=1). This is a modification of a one-dimensional population dynamics model from <cit.> considered in <cit.> for testing stiff integrator performances, dX =(ν(Y-1)-λ_1 X(1-X))dt-μ_1X(1-X)dW, X(0)=0.95, dY =-λ_2Y(1-Y)dt-μ_2Y(1-Y)dW, Y(0)=0.95. Observe that linearizing (<ref>) close to the equilibrium (X,Y)=(1,1), we recover for ν=0 the scalar test problem (<ref>). In Figure <ref> we consider the SK-ROCK method applied to (<ref>) with parameters that are identical to those used in <cit.>. We take the initial condition X(0)=Y(0)=0.95 close to this steady state and use the parameters ν=2,μ_2=0.5,λ_2=-1. In a nonstiff regime (-λ_1=μ_1=1 in Figure <ref>(a)), we observe a convergence slope 1 for the second moment (X(T)^2) which illustrates the weak order one of the scheme, although our analysis in Theorem <ref> applies only for globally Lipschitz vector fields. The stage number s=1 is sufficient for stability, but we also include for comparison the results for s=10 (note that the results for s=50,100 not displayed here are nearly identical to the case s=10). The convergence curves are obtained as an average over 10^6 samples. In a stiff regime (-λ_1=μ_1^2 =100 in Figure <ref>(b)), we observe for the standard small damping η=0.05 a stable but not very accurate convergence, due to the severe nonlinear stiffness. However, considering a slightly larger damping η=4, in the spirit of the S-ROCK method, yields a stable integration for all considered timesteps and all trajectories and we observe a line with slope one for the SK-ROCK method. Here, given the timesteps h, the numbers of stages s are adjusted as proposed in (<ref>) where λ_max=|λ_1|=100. For severely stiff problems, alternatively to switching to drift-implicit schemes <cit.>, one can consider in SK-ROCK a slightly larger damping η and the corresponding stage parameter s below, similar to (<ref>) and chosen such that the mean-square stability domain length (<ref>) satisfies L > hλ_max, s=[√(hλ_max+1.5/2Ω(η))+0.5], where Ω(η) is given in Lemma <ref>. §.§ Linear case: Orstein-Uhlenbeck process We now illustrate numerically in details the role of the postprocessor introduced in Theorem <ref> for the linear Orstein-Uhlenbeck process in dimension d=m=1, dX=-λ Xdt+σ dW, X(0)=2 where we choose λ=1 and σ=√(2). In Figure <ref>, we consider the SK-ROCK and PSK-ROCK methods with s=1,5,10,100 stages, respectively. For a short time T=0.5 (Fig. <ref>(a)(b)), we observe weak convergence slopes one for both SK-ROCK and PSK-ROCK (second moment (X(T)^2))as predicted by Theorem <ref>, and the postprocessor has nearly no effect of the errors. For a long time T=10 where the solution of this ergodic SDE is close to equilibrium, we observe that the weak order one of SK-ROCK (Fig. 
<ref>(c)) is improved to order two using the postprocessor in PSK-ROCK (Fig. <ref>(d)), which confirms the statement of Theorem <ref> that the postprocessed scheme has order two of accuracy for the invariant measure. For comparison, in Figure <ref>, we also include the results of PSK-ROCK without damping (η=0) using M=10^8 samples. We recall that for the scalar linearOrstein-Uhlenbeck process, the PSK-ROCK method with zero damping is exact for the invariant measure (see Section <ref>). We observe only Monte-Carlo errors with size ≃ M^-1/2=10^-4, which confirms that the PSK-ROCK method has no bias at equilibrium for the invariant measure in the absence of damping, as shown in (<ref>). We emphasise however that this exactness results holds only for linear problems, and a positive damping parameter η should be used for nonlinear SDEs for stabilization, as shown in Sections <ref> and <ref>. §.§ Nonglobally Lipschitz Brownian dynamicsTo illustrate the advantage of the PSK-ROCK method applied to nonglobally Lipschitz ergodic Brownian dynamics, we next consider the following double well potential V(x)=(1-x^2)^2/4 and the corresponding one-dimensional Brownian dynamics problem dX=(-X^3+X)dt+√(2)dW, X(0)=0, In Figure <ref>, we compare the performances of S-ROCK, S-ROCK2 considered in <cit.> (a method with weak order 2 for general SDEs), and the new SK-ROCK and PSK-ROCK methods at short time T=0.5 (Figures <ref>(a)(b)) and long time T=10 (Figures <ref>(c)(d)). As we focus on invariant measure convergence and not on strong convergence, we consider here discrete random increments with (ξ_n=±√(3))=1/6,(ξ_n=0)=2/3, which has the correct moments so that Theorem <ref> remains valid. Our numerical tests indicate that it makes PSK-ROCK with modified stage (<ref>) more stable. For a fair comparison, we use the same discrete random increments for all schemes. We plot the second moment error versus the time stepsize h and versus the average cost which is the total number of function evaluations during the time integration divided by the total number number of samples. Indeed, the number of function evaluations depends on the trajectories because the stage parameter s is adaptive at each time step. For short time, we can see that the S-ROCK and the SK-ROCK method have order 1 (Figure <ref>(a)) and exhibit similar performance with nearly identical error versus cost curves in Figure <ref>(b), while PSK-ROCKis less advantageous for short time. This illustrates that the postprocessing has no advantage for short times. The S-ROCK2 method is the most accurate for small time steps, and it has order 2 as shown in Figures <ref>(a)(c), but at the same time it has a larger average cost as observed in Figures <ref>(b)(d) due to its smaller stability domain with size ≃ 0.42 · s^2.For long time, the SK-ROCK and S-ROCK both exhibit order 1 of accuracy (Figure <ref>(c)), with an advantage in terms of error versus cost for the SK-ROCK method that is about 10 times more accurate for large time steps. In contrast, the postprocessed scheme PSK-ROCK exhibits order 2 of convergence (Figure <ref>(c)) which corroborates Theorem <ref>. Since the postprocessing overcost is negligible (two additional vector field evaluations per timestep due to the modified stage K_1 in (<ref>)), this makes PSK-ROCK the most efficient in terms of error versus cost, as shown in Figure <ref>(d). The S-ROCK2 method has order 2 here but with poor accuracy compared to the PSK-ROCK method with approximately the same cost. 
Note that typically the SK-ROCK method used s=1,2,3 stages in contrast to the S-ROCK method using s=2,…,6 stages per timesteps. §.§ Stochastic heat equation with multiplicative space-time noise Although our analysis applies only to finite dimensional systems of SDEs, we consider the following stochastic partial differential equation (SPDE) obtained by adding multiplicative noise to the heat equation, u(t,x)t =u(t,x)x + u(t,x)Ẇ(t,x), (t,x)∈[0,T]×[0,1] u(0,x) =5cos(π x), x∈[0,1], u(t,0) =5,u(t,1)x=0, t∈[0,T], where Ẇ(t,x) denotes a space-time white noise that we discretize together with the Laplace operator with a standard finite difference formula <cit.>. We obtain the following stiff system of SDEs where u(x_i, t) ≈ u_i(t), with x_i=i Δ x, Δ x = 1/N,du_i= u_i+1-2u_i+u_i-1/Δ x^2dt+ u_i/√(Δ x)dw_i,i=1,…,N,where the Dirichlet and the Neumann conditions impose u_0=5 and u_N+1=u_N-1, respectively.Here, w_1,…, w_N are independent standard Wiener processes and dw_i indicates Itô noise. In Figure <ref>(a), we plot one realization of the SPDE using space stepsize Δ x=1/100 and timestep size Δ t=1/50. Note that the Lipchitz constant associated to the space-discretization of (<ref>) has size ρ=4Δ x^-2, and the stability condition is fulfilled for s=22 stages. For comparison, the standard S-ROCK method would require s=46 stages, while applying the standard Euler-Maruyama with a smaller stable timestep Δ t/s would require s ≥Δ t ρ/2=400 intermediate steps. Notice that the initial condition in (<ref>) satisfies the boundary conditions, which permits a smooth solution close to time t=0. Taking alternatively an initial condition that does not satisfy the boundary conditions (for instance u(x,0)=1) yields an inaccurate numerical solution with large oscillations close to the boundary x=0. A simple remedy in such a case is to consider a larger damping parameter η, as described in Remark <ref>. In Figure <ref>(b), we compare the number of vector field evaluations of the standard S-ROCK and new SK-ROCK methods when applied to the SPDE (<ref>) with finite difference discretization with parameter Δ x=1/100. The better performance of SK-ROCK with damping η=0.05 is due to its larger stability domain with size ≃ 1.94· s^2 compared to the size ≃ 0.33 · s^2 for S-ROCK. Observing the ratio of the two costs in Figure <ref>(b), we see that the new SK-ROCK methods has a reduced cost for stabilization by an asymptotic factor of about √(1.94/0.33)≃ 2.4 for large s and large stepsizes, which confirms the stability analysis of Section <ref>. The convergence analysis of the SK-ROCK method for the stochastic heat equation is the topic of future work. Notice that SK-ROCK with s=1 stage has the optimal mean-square stability length (L=2 for η=0) as defined in (<ref>). In contrast, the S-ROCK method with s=1 has the smaller stability length L=3/2, while the standard Euler-Maruyama has L=0. This explains why for the smallest considered stepsize Δ t=2^-15 in Figure <ref>(b), we have s=1 for SK-ROCK whileS-ROCK uses s=2 stages. In Figure <ref> we consider again one realization with SK-ROCK of the SPDE problem (<ref>) but with a different initial condition u(0,x)=1 not fulfilling the boundary conditions, i.e. that is outside the domain of the Laplace operator, as considered in <cit.>. We compare the result for the same sets of random numbers but for for different values of the damping parameter η. 
We observe numerically high oscillations in time and space for the small damping value η=0.05in Figure (<ref>) while the larger damping η=10 yields a smoother solution in Figure (<ref>). This illustrates again Remark <ref> showing that the damping parameter η can be increased in the case of severely stiff problems, adjusting the stage parameter accordingly with (<ref>). Acknowledgements.The work of the first author was partially supported by the Swiss National Foundation, grant 200020_172710. The work of the second and third authors was partially supported by the Swiss National Science Foundation, grants 200020_144313/1 and 200021_162404.The computations were performed at University of Geneva on the Baobab cluster. abbrv
http://arxiv.org/abs/1708.08145v5
{ "authors": [ "Assyr Abdulle", "Ibrahim Almuslimani", "Gilles Vilmart" ], "categories": [ "math.NA", "65C30, 60H35, 65L20, 37M25" ], "primary_category": "math.NA", "published": "20170827215050", "title": "Optimal explicit stabilized integrator of weak order one for stiff and ergodic stochastic differential equations" }
[2010]91G10,91G20,11K60TU Wien [email protected], [email protected] this paper we investigate discrete time trading under integer constraints, that is, we assume that the offered goods or shares are traded in integer quantities instead of the usual real quantity assumption. For finite probability spaces and rational asset prices this has little effect on the core of the theory of no-arbitrage pricing. For price processes not restricted to the rational numbers, a novel theory of integer arbitrage free pricing and hedging emerges. We establish an FTAP, involving a set of absolutely continuous martingale measures satisfying an additional property. The set of prices of a contingent claim is no longer an interval, but is either empty or dense in an interval. We also discuss superhedging with integral portfolios. Dynamic trading under integer constraints Stefan Gerhold, Paul Krühner December 30, 2023 =========================================§ INTRODUCTIONClassical, frictionless no-arbitrage theory <cit.> makes several simplifying assumptions on financial markets. In particular, position sizes may be arbitrary real numbers, which allows trading strategies that cannot be implemented in practice. Even if brokers are receptive to fractional amounts of shares, there will be a smallest fraction that can be purchased or sold. Moreover, traders might wish to avoid odd lots because of additional brokerage fees and the usually poor liquidity of small positions. In this case, the smallest traded unit would be a round lot consisting of several (e.g., 100) shares. Both situations can be covered by assuming that integer amounts of a price process (S^1_t,…,S^d_t)_t∈𝕋 can be traded. The set of trading times 𝕋 is assumed to be finite in this paper. For simplicity, we will call S^i the price process of the i-th (risky) asset, although it may have the interpretation of a fraction or a round lot of an actual asset price. We assume that the amount of money in the risk-less asset may take arbitrary real values. On the one hand, this increases tractability; on the other hand, it makes economic sense, as the smallest possible modification of the bank account is usually several orders of magnitude smaller than that of the the risky positions. Thus, our integer trading strategies in a model with d risky assetstake values in ℝ×ℤ^d at each time. For some results and proofs, we also consider rational strategies with values in ℝ×ℚ^d. By clearing denominators, the corresponding notions of freeness of arbitrage are equivalent (see Lemma <ref>). To the best of our knowledge, the existing literature on arbitrage, pricing and hedging under trading constraints <cit.> invariably imposes convexity assumptions on the set of admissible strategies, which are unrelated to integrality constraints. The latter do feature prominently in the computational finance literature, e.g. in the papers <cit.>, which employ mixed-integer nonlinear programming to solve the Markowitz portfolio selection problem. In the literature, other keywords such as minimum lot restrictions, minimum transaction level, and integral transaction units are used with the same meaning as our integer constraints. Somewhat surprisingly, this kind of restriction seems to have received almost no attention from the viewpoint of no-arbitrage theory. One exception is a paper by Deng et al. 
<cit.>, who show that deciding the existence of arbitrage in a one-period model under integer constraints is an NP-hard problem.In our main results, we assume that the underlying probability space is finite (Assumption (F) of Section <ref>). This assumption is realistic, because actual asset prices move by ticks, and prices larger than 10^10^10, say, will never occur. Still, extending our work to arbitrary probability spaces might be mathematically interesting, but is left for future work.In Section <ref>, we introduce the notions of no integer arbitrage (NIA) and no integer free lunch (NIFL) in a straightforward way. It turns out (Theorem <ref>) that the latter property is equivalent to the classical no-arbitrage condition NA, and so we concentrate on NIA in the rest of the paper. Our first main result is a fundamental theorem of asset pricing (FTAP; Theorem <ref>)characterising NIA. It involves a set of absolutely continuous martingale measures satisfying an additional property. The latter amounts to explicitly avoiding integer arbitrage outside the support of the absolutely continuous martingale measure. The theorem is thus not as neat as the classical FTAP, but is still useful for establishing several of our subsequent results. In Section <ref>, we define the set Π_ℤ(C) of NIA-compatible prices of a claim C. The integer variant of the classical representation using the set of equivalent martingale measures features only an inclusion instead of an equality (Proposition <ref>), and in fact Π_ℤ(C) may be empty. Even if it is non-empty, it need not be an interval; however, Π_ℤ(C) is then always dense in an (explicit) interval, which is the main result of Section <ref>. As regards methodology, many of our arguments just use the countability of ℤ^d (and ℚ^d), or the density of ℚ^d in ℝ^d. Still, at some places (such as Lemma <ref>, Example <ref>, and Theorem <ref>) we invoke non-trivial results from number theory, collected in Appendix <ref>.Readers who are mainly interested in the practical consequences of integer restrictions are invited to read (besides the basic definitions) Theorem <ref>, Theorem <ref>, and Section <ref>. In a nutshell, for the discrete-time models used in practice (finite probability space, floating-point – i.e., rational – asset values), the core of no-arbitrage theory does not change much. One exception is the fact that the supremum of claim prices consistent with no-integer-arbitrage need not agree with the smallest integer superhedging price (see Section <ref>). Still, this property holds in a limiting sense when superhedging a large portfolio of identical claims. That said, our work is by no means the last word on the practical consequences of integer restrictions in dynamic trading. Problems such as quantile hedging, hedging with risk measures, or hedging under convex constraints may well be worth studying under integer restrictions. In Section <ref> we discuss a toy example of variance optimal hedging under integer constraints, which leads to the closest vectorproblem(CVP),a well-known algorithmic lattice problem. § TRADING STRATEGIES AND ABSENCE OF ARBITRAGE We will work with a probability space (Ω,𝒜,P). Our main results use the following assumption: (F) Ω is finite, 𝒜 is the power set of Ω, P[{ω}]>0 for any ω∈Ω, and we choose an enumeration ω_1,…,ω_n of its elements.We assume that there is a finite set of times 𝕋 := {0,…, T}, with T∈ℕ, at which trading may occur, and fix a filtration (ℱ_t)_t∈𝕋 where ℱ_T⊆𝒜 and ℱ_0={∅,Ω}. 
The (deterministic) riskless interest rate is r>-1, and we have d risky assets with pricesS_t = (S_t^1,…,S_t^d) at time t∈𝕋, where S_t is assumed to benon-negative and ℱ_t-measurable. The price of the riskless asset is denoted by S_t^0 := (1+r)^t for t∈𝕋, and we denote the market price processes by S̅ := (S^0,S).We are interested in trading strategies that consist of integer positions in the risky assets. All trading strategies we consider are self-financing. (i)An integer (trading) strategy is a predictable process (ϕ̅_t)_t∈𝕋∖{0} with values in ℝ×ℤ^d and ϕ̅_t S̅_t=ϕ̅_t+1S̅_t for t∈𝕋∖{0,T}. For convenience, we will sometimes use the notation ϕ̅_0:=ϕ̅_1.The set of integer trading strategies is denoted by 𝒵. (ii) Analogously, we define the set ℛ of all (real) trading strategies and the set 𝒬 of rational strategies, with values in ℝ×ℚ^d.We obviously have 𝒵⊆𝒬⊆ℛ. For any trading strategy ϕ̅∈ℛ we denote its value at time t∈𝕋 byV_t(ϕ̅) := ϕ̅_t S̅_t=∑_j=0^d ϕ_t^jS_t^j,and its discounted value by V̂_t(ϕ̅):=V_t(ϕ̅)/S_t^0. Often it is convenient to work with discounted asset values or discounted gains which are denoted by Ŝ_t:= (S_t^1,…,S_t^d)/S_t^0,ΔŜ_t:= Ŝ_t-Ŝ_t-1for t∈𝕋 resp. t∈𝕋∖{0}. The discounted value process then equalsV̂_t(ϕ̅) = V_0(ϕ̅) +∑_k=1^t ϕ_k ΔŜ_k, t∈𝕋.(i) An integer arbitrage is a strategy ϕ̅∈𝒵 which is an arbitrage for the market S̅.(ii) A model satisfies the no-integer-arbitrage condition (NIA), if it admits no integer arbitrage.(iii) Define the set (a ℤ-module)𝒦_ℤ := {∑_k=1^T ϕ_k ΔŜ_k : ϕ̅∈𝒵}of discounted net gains realizable by integer strategies, and𝒞_ℤ:= (𝒦_ℤ-L_+^0)∩ L^∞.Assuming (F), we define the condition NIFL (no integer free lunch) ascl(𝒞_ℤ) ∩ L_+^0={0}.The closure is taken w.r.t. the Euclidean topology, upon identifying L^∞ with ℝ^n.Clearly, NIA is weaker than the classical no-arbitrage property NA or NIFL. It turns out that the classical no-arbitrage property NA and NIFL are equivalent (for finite probability spaces), see Theorem <ref> below. The following simple properties will be used often: (i) If (F) holds, then in the definition of integer arbitrage the condition ϕ̅∈𝒵 can be replaced by ϕ̅∈𝒬.(ii) In the definition of integer arbitragethe condition V_0(ϕ̅) ≤ 0 can be replaced by V_0(ϕ̅) = 0.(iii) NIA is equivalent toV̂_T(ϕ̅) - V̂_0(ϕ̅) ≥ 0 ⇒ V̂_T(ϕ̅) = V̂_0(ϕ̅) for any ϕ̅∈𝒵 (or, under (F), for any ϕ̅∈𝒬).(ii) and (iii) are proved precisely as in the classical case. Part (i): Clearly, any arbitrage strategy in 𝒵 is also in 𝒬. Now assume that there is an arbitrage ϕ̅ such that (ϕ_t^1,…,ϕ_t^d)∈ℚ^d for any t∈𝕋. DefineN := inf{n ∈ℕ: nϕ∈ℤ^d· T}.Then Nϕ_t ∈ℤ^d for any t∈𝕋, and Nϕ̅ is an arbitrage. By (<ref>), the implication (<ref>) can be written as∑_k=1^T ϕ_k ΔŜ_k ≥ 0⇒ ∑_k=1^T ϕ_k ΔŜ_k = 0.Although our main results assume (F), we mention that (F) is actually not necessary in parts (i) and (iii) of Lemma <ref>. This follows easily from the fact that arbitrage in a multi-period model implies the existence of a period that allows arbitrage. In the classical setup, this is Proposition 5.11 in <cit.>; the proof works for integer and rational strategies, too. Under the finiteness condition (F) on Ω, we can show that any real trading strategy can be approximated by an integer trading strategywith a certain rate. The proof is based on Dirichlet's approximation theorem (Theorem <ref>).(i) If S is bounded, then for any strategy ϕ̅∈ℛ and any ϵ>0, we can find astrategy ψ̅∈𝒬 such that sup_t∈𝕋 esssup |V_t(ϕ̅)-V_t(ψ̅)| < ϵand V_0(ϕ̅)=V_0(ψ̅). (ii) Assume (F) and let ϕ̅∈ℛ and ϵ>0. 
Then there is q∈ℕ and a strategy ψ̅∈𝒵 such that V_0(ϕ̅)=V_0(ψ̅)/q andsup_t∈𝕋,j=1,… d, l=1,…,n |ψ_t^j(ω_l) - qϕ_t^j(ω_l)| < q^-1/(nd(T+1)) <ϵ.In particular, for any strategy ϕ̅∈ℛ we can find strategies ψ̅∈𝒬, η̅∈𝒵 and q∈ℕ such thatsup_t∈𝕋 esssup |V_t(ϕ̅)-V_t(ψ̅)|< ϵ,sup_t∈𝕋 esssup |qV_t(ϕ̅)-V_t(η̅)|< ϵ and V_0(ϕ̅) = V_0(ψ̅) = V_0(η̅)/q. The first part is trivial as any real number can be approximated by rational numbers. Thus we find a sequence of strategies (ψ̅^(k))_k∈ℕ in 𝒬 such that ψ^(k)→ϕ uniformly in ω for k→∞. This and the boundedness of S imply the convergence of the value at any time t∈𝕋 if the initial value is being fixed as equal.To show part (ii), let R_t := {ϕ_t^j(ω_l): j=1,…, d,l=1,…,n}. For any t∈𝕋 let a_t^1,…,a_t^K_t be an enumeration of the elements of R_t. We have K_t≤ dn for any t∈𝕋 and thus ∑_t∈𝕋K_t ≤ nd(1+T). By Dirichlet's approximation theorem (Theorem <ref>), we find q∈ℕ with q^-1/(nd(1+T)) < ϵ and p_t^k ∈ℤ with |p_t^k-qa_t^k| < q^-1/(nd(1+T)) for any t∈𝕋, k=1,…,K_t. For t∈𝕋 we define ψ_t^j(ω_l) := p_t^k where k∈{1,…,K_t} is such that ϕ_t^j(ω_l) = a_t^k. Then{ψ_t^j = p_t^k}⊆⋃_m ∈ A_k{ϕ_t^j = a_t^m}where A_k = {m=1,…, K_t: p_t^m = p_t^k}. Thus ψ_t is measurable w.r.t. to the σ-algebra generated by ϕ_tand, hence, ℱ_t-1-measurable. Therefore, ψ is a predictable ℤ^d-valued process. The uniform distance of ψ and ϕ is less than q^-1/(nd(1+T)) by construction. With the previous lemma at hand we can show that under the finiteness condition classical no-arbitrage is equivalent to NIFL.Assume (F). Then the following statements are equivalent:(i) There is an equivalent martingale measure Q≈ P, (ii) The model satisfiesthe classical no-arbitrage property NA and (iii) The model satisfies NIFL. Moreover, if the number of risky assets is d=1, then the following statement is equivalent as well: (iv) The model satisfies NIA.The equivalence of (i) and (ii) is the classical FTAP, see <cit.>. Furthermore,NA is equivalent to the classical no free lunch condition in our setup (see <cit.>), which yields the implication (ii)⇒(iii).Now we assume (iii) and show (ii). Let ϕ̅∈ℛ such that V_0(ϕ̅) = 0 and V_T(ϕ̅) ≥ 0.By part (ii) of Lemma <ref> we find q_N∈ℕ and strategies ψ^(N)∈𝒵 such thatesssup|q_N V_T(ϕ̅)-V_T(ψ̅^(N))| ≤1/N.W.l.o.g., the sequence q_N increases. We get V_T(ψ̅^(N)) ≥ q_NV_T(ϕ̅) - 1/N≥ -1/N. Define Z_N:=1 V_T(ψ̅^(N))> 1,V_T(ψ̅^(N)) V_T(ψ̅^(N)) ≤ 1= V_T(ψ̅^(N)) -(V_T(ψ̅^(N)) -1)1_{V_T(ψ̅^(N)) >1}∈𝒞_ℤ,N∈ℕ. Since Z_N∈ L^∞, there is a convergent subsequence, and w.l.o.g.Z_N itself converges to some Z∈cl(𝒞_ℤ).By (<ref>), we have Z≥0. Then, NIFL implies thatZ=0, and thus V_T(ψ̅^(N))→0. Since q_N^-1V_T(ψ̅^(N)) → V_T(ϕ̅) by (<ref>) (recall that q_N increases), we concludethat V_T(ϕ̅) =0.Now assume that d=1. (ii)⇒(iv) is obvious. We assume that (ii) does not hold. Proposition 5.11 in <cit.> yields the existence of a one-period arbitrage, i.e. an arbitrage ϕ̅ and t_0∈𝕋 such that ϕ_t=0 for any t∈𝕋∖{t_0}. Since ϕ̅ is an arbitrage we must have ϕ^1_t_0≠ 0. Define ψ^j_t := ϕ^t_j / |ϕ^1_t_0|, t∈𝕋,j=0,1.Then ψ is an arbitrage as well. Moreover, ψ_t^1∈{-1,0,1}⊆ℤ for any t∈𝕋, thus ψ∈𝒵. Consequently, (iv) does not hold. In practice, all values occurring in the model specification are floating-point numbers. The following result shows that in this case the existence of an arbitrage opportunity is not affected by integrality constraints.Assume (F), and that the interest rate r and all asset values are rational: r∈ℚ, and S_t∈ℚ^d for t∈𝕋. Then NIA is equivalent to NA.NA always implies NIA. 
Now suppose that we have a real arbitrage opportunity.By part (iii) of Lemma <ref>, there is a predictable process ϕ such that ∀ ω∈Ω: ∑_k=1^T ϕ_k(ω) ΔŜ_k(ω) ≥ 0,∃ ω∈Ω: ∑_k=1^T ϕ_k(ω) ΔŜ_k(ω) > 0. The assertion now follows from Lemma <ref> below. Note that predictabilityof the resulting rational process is easy to guarantee, by introducing for all k,j a singlevariable for theϕ_k^j(ω) for which the ωs belong to the same atom of ℱ_k-1. In the proof of the preceding result, we applied the following simple lemma. Using Ehrhart's theory of lattice points in dilated polytopes <cit.>, it is certainly possible to state much more general results along these lines.[We thank Manuel Kauers for pointing this out.] Therefore, we do not claim originality for Lemma <ref>, but give a short self-contained proof for the reader's convenience. Let (a_ij)_1≤ i≤ r,1≤ j≤ s be a matrix with rational entries a_ij∈ℚ. Suppose that there is a real vector (x_1,…,x_s) such that∑_j=1^s a_ij x_j ≥ 0,i=1,…,r,with at least one inequality being strict.Then there is a rational vector satisfying (<ref>). After possibly reordering the lines of the matrix (a_ij), we may assume that there is u∈{1,…,r} such that∑_j=1^s a_ij x_j > 0,1≤ i≤ u,∑_j=1^s a_ij x_j = 0,u<i≤ r.By defining y_i:=∑_j=1^s a_ij x_j for 1≤ i≤ u, we get that the vector (x_1,…,x_s,y_1,…,y_u) solves the system∑_j=1^s a_ij x_j - y_i= 0,1≤ i≤ u,∑_j=1^s a_ij x_j= 0,u<i≤ r, y_1,…,y_u >0.Equations (<ref>) and (<ref>) constitute a homogeneous linear system of equations with rational coefficients, which has a basis B⊂ℚ^s+u of rational solution vectors, by Gaussian elimination. The vector (x_1,…,x_s,y_1,…,y_u) can be written as a linear combination of vectors in B. By approximating the (real) coefficients of this linear combination with rational numbers, we get a vector (x̃_1,…,x̃_s,ỹ_1,…,ỹ_u)∈ℚ^s+u satisfying (<ref>)–(<ref>). Then (x̃_1,…,x̃_s) is the desired rational vector. The assertion of Theorem <ref> does not hold for infinite probability spaces, as the following example illustrates. Let Ω={ω_1,ω_2,…} be countable, 𝒜=2^Ω, and fix an arbitrary probability measure P with P[{ω_i}]>0 for all i∈ℕ. We choose d=2, T=1, and r=0. The asset prices are defined by S_0=(1,1) andS_1(ω_i)= (1+p_i,1+q_i) i even,(1-p̂_i,1-q̂_i) i odd,where p_i,q_i,p̂_i,q̂_i are natural numbers satisfyingp_i/q_i↘πandp̂_i/q̂_i↗π, i→∞.Thus, the increments areΔ S_1(ω_i)= (p_i,q_i) i even,(-p̂_i,-q̂_i) i odd.A vector (ϕ_1^1,ϕ_1^2)∈ℝ^2 yields an arbitrage if and only ifp_i/q_iϕ_1^1+ϕ_1^2≥ 0, i even,p̂_i/q̂_iϕ_1^1+ϕ_1^2≤ 0, i odd,with at least one inequality being strict. By (<ref>), the vector (ϕ_1^1,ϕ_1^2)=(1,-π) satisfies this, and so NA does not hold. By letting i tend to infinity, we see that there is no integer vector satisfying (<ref>)-(<ref>), which shows that the model satisfies NIA. Our next goal is to characterise NIA, without restricting the asset prices to rational numbers. As we will see, for d>1 NIA is not equivalent to the existence of an equivalent martingale measure, but rather to the existence of an absolutely continuous martingale measure with an additional property. We first introduce sets of strategies which do not yield any net profit. (i) Let Q be a probability measure on (Ω,𝒜), and denote the set of trading strategies with zero initial value by ℛ_0:={ϕ̅∈ℛ:V_0(ϕ̅)=0}. We denote the set of all integer-valued (resp. rational-valued, resp. real-valued) trading strategies with zero initial capital and Q-a.s. 
zero gain by𝒵^0_Q:= {ϕ̅∈ℛ_0 ∩𝒵 : V_T(ϕ̅)=0Q-a.s.},𝒬^0_Q:= {ϕ̅∈ℛ_0 ∩𝒬: V_T(ϕ̅)=0Q-a.s.},ℛ^0_Q:= {ϕ̅∈ℛ_0: V_T(ϕ̅)=0Q-a.s.}.(ii) If we assume[This ensures that the sets {ω} occurring in part (ii) of Definition <ref> are measurable.] (F) then we write ^max for the set of martingale measures Q≪ P such that∀ϕ̅∈𝒬^0_Q:(V_T(ϕ̅)≥ 0 ⇒V_T(ϕ̅) = 0)and∃ϕ̅∈ℛ^0_Q: V_T(ϕ̅)≥ 0 and {V_T(ϕ̅) > 0} = {ω∈Ω: Q[{ω}] = 0 }. (iii)denotes the set of martingale measures Q≪ P such that∀ϕ̅∈𝒵^0_Q:(V_T(ϕ̅)≥ 0 ⇒V_T(ϕ̅) = 0). Obviously, we have 𝔔⊆^max⊆, where 𝔔 denotes the set of equivalent martingale measures. (As for the first inclusion, ϕ̅=0 satisfies the existence statement in (ii).) Before giving an FTAP for integer trading we show further properties of the measures in ^max. Assume (F) and that ^max≠∅. Then there is a set A⊊Ω such that ^max is the set of martingale measures whose support is Ω∖ A. Also, there is ϕ̅∈ℛ_0 with V_T(ϕ̅)≥ 0 and {V_T(ϕ̅)>0} =A.Now, let Q∈^max and let Q'∈. Then Q'≪ Q. Moreover, ^max is dense inwith respect to the total variation distance.Choose Q∈^max and let ϕ̅∈ℛ_Q^0 satisfy the existence statement in (ii) of Definition <ref>. Define A:={V_T(ϕ̅)>0}. Then ϕ̅ is the required trading strategy.Let Q' be a martingale measure with support equal to Ω∖ A. Then ϕ̅ satisfies the existence statement of (ii) in Definition <ref>. Let ψ̅∈𝒬_Q'^0 with V_T(ψ̅)≥ 0. Then V_T(ψ̅) = 0 Q'-a.s., i.e. V_T(ψ̅) = 0 on Ω∖ A. Consequently, ψ̅∈𝒬_Q^0. (ii) of Definition <ref> yields that V_T(ψ̅) = 0. Thus, Q'∈^max. We need to show that any measure in ^max is a martingale measure with support Ω∖ A. This, however, follows as soon as we have shown that Q'≪ Q for any Q'∈. Let Q'∈. Observe that V_T(ϕ̅) = 0 Q'-a.s. because Q' is a martingale measure. Thus A = {V_T(ϕ̅) > 0} is a Q'-null set. We find Q'≪ Q.Finally, we have to show that Q' can be approximated by elements in ^max in total variation. Define Q_α := α Q'+(1-α)Q for any α∈[0,1]. Then Q' = Q_1 ← Q_α as α→ 1. However, Q_α is a martingale measure with the same support as Q for α≠ 1 and, hence, it is in ^max by what we have shown so far. We can now state an FTAP for integer trading.Assume (F). Then the following statements are equivalent: (i) ^max≠∅(ii) ≠∅(iii) The market satisfies NIA.The implication (ii)⇒(iii) does not need assumption (F). (i)⇒(ii) is trivial.(ii)⇒(iii): We fix a measure Q∈. Let ϕ̅∈ℛ_0∩𝒵 with V_T(ϕ̅)≥ 0. Since Q is a martingale measure we have V̂_T(ϕ̅) = 0 Q-a.s. and, hence, V_T(ϕ̅) = 0 Q-a.s. Thus ϕ̅∈𝒵^0_Q. By part (iii) of Definition <ref> we have V_T(ϕ̅) = 0. Hence, we have NIA.(iii)⇒(i): LetA:={ω∈Ω: ∃ϕ̅∈ℛ_0: V_T(ϕ̅) ≥ 0∧ V_T(ϕ̅)(ω) > 0}.For every ω∈ A choose an according strategy ϕ̅^(ω)∈ℛ_0 with V_T(ϕ̅^(ω)) ≥ 0 and V_T(ϕ̅^(ω))(ω) > 0. Defineϕ̅:= ∑_ω∈ Aϕ̅^(ω).Then ϕ̅∈ℛ_0, V_T(ϕ̅)≥ 0 and {V_T(ϕ̅) > 0} = A.We claim that for any ψ̅∈ℛ_0 with V_T(ψ̅)1_Ω∖ A≥ 0 we have V_T(ψ̅)1_Ω∖ A = 0. Let ψ̅∈ℛ_0 with V_T(ψ̅)1_Ω∖ A≥ 0. If V_T(ψ̅) ≥ 0 on A, then V_T(ψ̅) ≥ 0 and, hence, V_T(ψ̅)1_Ω∖ A = 0 by construction of A. Thus, we may assume that V_T(ψ̅)(ω) < 0 for some ω∈ A. Thenc:= -min{V_T(ψ̅)(ω):ω∈ A}/min{V_T(ϕ̅)(ω):ω∈ A}> 0.The strategy ψ̅+cϕ̅ is in ℛ_0 and satisfies V_T(ψ̅+cϕ̅)≥ 0. Thus, V_T(ψ̅+cϕ̅) = 0 outside A. Hence, V_T(ψ̅) = 0 outside A, i.e. V_T(ψ̅)1_Ω∖ A = 0. Assume for contradiction that A=Ω. Then V_T(ϕ̅)>0. Definee := min{ V_T(ϕ̅)(ω): ω∈Ω} > 0. Lemma <ref> yields q∈ℕ and ψ̅∈𝒵 such that |V_T(ψ̅) - V_T(qϕ̅)| < e. Thus, V_T(ψ̅) > qV_T(ϕ̅)-e ≥ 0. Thus ψ̅ is an integer arbitrage. A contradiction.Consequently, A⊊Ω. 
We have shown that the market S̅ is free of arbitrage on Ω∖ A. The classical fundamental theorem yields a martingale measure Q on Ω∖ A for S̅. We denote its extension to a probability measure on Ω by Q, i.e. Q[M] = Q[M∖ A] for any M⊆Ω. Then Q≪ P and Q is a martingale measure with {ω∈Ω: Q[{ω}] = 0} = A. Since {V_T(ϕ̅)>0} = A we have the existence statement inpart (ii) of Definition <ref>.Now let ψ̅∈𝒬_Q^0 with V̂_T(ψ̅)≥ 0. Let q be a common denominator for {ψ_t^j(ω_l): t∈𝕋,j=1,…,d,l=1,…,n}. Then qψ̅∈𝒵_Q^0. Since we have NIA we get V_T(ψ̅) = 1/qV_T(qψ̅) = 0 as claimed. An immediate consequence is the following sufficient criterion for the construction of markets with no integer arbitrage.Let Q≪ P be a martingale measure and assume that 𝒵_0^Q = {0}. Then the market satisfies NIA. The following example is a simple application of the preceding corollary.Assume that d=2, n≥ 2, T=1, r=0 and choose(S^1_0,S^2_0)=(1,π) and(S_1^1,S_1^2)(ω_j):=(3/2,3π/2) j=1, (1/2,π/2) j=2.Define Q[{ω_j}] = 1_{j=1,2}/2 for j=1,…,n. Then Q≪ P is a martingale measure, andℛ^0_Q ={(0,-πϕ^2,ϕ^2):ϕ^2∈ℝ}. Consequently, we have 𝒵^0_Q={0}. Thus, Corollary <ref> yields that the market does not allow for integer arbitrage. Observe that this holds regardless of the specification of (S_1^1,S_1^2)(ω_j) for j≥ 3.Another immediate consequence is the existence of absolutely continuous martingale measures.Suppose that a model satisfies NIA and assume (F). Then there is an absolutely continuous martingale measure. Immediate from Theorem <ref> (iii)⇒(i). The following example shows that the existence of an absolutely continuous martingale measure alone is insufficient to exclude integer arbitrage.Let Ω = {ω_1,ω_2}, S^0_0 = 1 = S^0_1, S^1_0= 1 and S_1^1(ω_i) = i for i=1,2 (i.e. T=1, d=1, n=2). Then Q := δ_ω_1 is a martingale measure which is absolutely continuous with respect to P := (δ_ω_1 + δ_ω_2)/2, where δ_ω_j denotes the Dirac-measure on ω_j. The strategy ϕ̅_1 := (-1,1) is an integer arbitrage.Finally, we provide a technical statement that will be used in Section <ref>.Assume (F), let Q∈ and assume that for any B∈ℱ_1 we have Q[B]∈{0,1}. Then ℱ_1=ℱ_0. Let A∈ℱ_1 be maximal with Q[A]=0. Then B:=Ω∖ A is an atom, and its only strict subset contained in ℱ_1 is the empty set. If A=∅, then the claim follows trivially. Assume for contradiction that A≠∅. We claim that the model restricted to A still satisfies NIA. To this end let ϕ̅∈𝒬, t=1,… T with ϕ̅_1=…ϕ̅_t-1=0 and V̂_t(ϕ̅)=… =V̂_T(ϕ̅)≥ 0 on A. (Since existence of an arbitrage implies existence of a one period arbitrage, it suffices to consider this kind of strategy.)Case 1: t=1. Since Ŝ_0 = E_Q[Ŝ_1] = Ŝ_1(B), we find that V̂_1(ϕ̅)(B) = V_0(ϕ̅) = 0 and, hence, V_1(ϕ̅)≥ 0 everywhere. Since (S^0,…,S^d) satisfies NIA, we obtain that V_1(ϕ̅) = 0 and, hence, V_s(ϕ̅)=0 for any s∈𝕋.Case 2: t≥ 2. Define ψ̅:= 1_Aϕ̅. Since A∈ℱ_1 and ϕ̅_1=0 we find that ψ̅∈𝒬 with ψ̅_0=…ψ̅_t-1=0 and V̂_t(ψ̅)=… =V̂_T(ψ̅)≥ 0. Since the model (S^0,…,S^d) on Ω satisfies NIA by assumption we find that 0 = V_t(ψ̅) = 1_A V_t(ϕ̅) and, hence, V_t(ϕ̅) = 0 on A. Thus (S^0,…,S^d) restricted to A satisfies NIA. By Theorem <ref> there is Q'∈ for the model (S^0,…,S^d) restricted to A. We denote its extension to Ω by Q' as well, i.e. Q'[C] = Q'[C∩ A] for any C∈𝒜. Define Q̃:=Q/2+Q'/2 and observe that Q̃∈. However, Q ≉Q̃ because Q' has disjoint support with Q. But Proposition <ref> implies Q≈Q̃, which yields a contradiction. Thus A=∅ and, hence, ℱ_1=ℱ_0. § CLAIMS AND INTEGER TRADING Fix a model that satisfies NIA. 
(i)A claim is a random variable C≥0. A real number p≥0 is an integer arbitrage free price of C, if there is an adapted non-negative stochastic process (X_t)_t∈𝕋 with X_0=p, X_T=C such that the market (S^0,…,S^d,X)satisfiesNIA. The set of integer arbitrage free prices is denoted by Π_ℤ(C).(ii) An integer superhedge for C is a trading strategy ϕ̅∈𝒵 such that V_T(ϕ̅) ≥ C, and it is an integer replication strategy if it satisfies V_T(ϕ̅) = C. We writeσ_ℤ(C) = inf{V_0(ϕ̅): ϕ̅∈𝒵, V_T(ϕ̅)≥ C}for the infimum of prices of integer superreplication strategies for C.Analogously to Π_ℤ(C), we write Π(C) for the set of classical arbitrage free prices in models satisfying NA. We recall the classical superhedging theorem (Corollaries 7.15 and 7.18 in <cit.>):Assume that NA holds, and let C be a claim with supΠ(C)<∞. Then there is a strategy ϕ̅∈ℛ with V_0(ϕ̅)=supΠ(C) and V_T(ϕ̅)≥ C. Moreover, supΠ(C) is the smallest number with this property. We find analogue statements to the preceding theorem under the weaker assumption NIA. Proposition <ref> below states that NIA suffices for the existence of a real cheapest superhedge whose price is the infimum of all rational superhedging prices. Moreover, Theorem <ref> below implies that either the set of NIA compatible prices for the claim is empty, or its supremum equals the cheapest superhedging price.There is no need to define the notion of integer completeness, because there would be no interesting models that have this property: The following statements are equivalent: (i) Every claim is replicable by an integer strategy,(ii) The probability space (Ω,𝒜,P) consists of a single atom.If (ii) holds and C is a claim, then there is a constant c∈[0,∞) such that C=c a.s. Then C is replicated by the integer strategy ϕ̅=(ϕ^0,0) with ϕ^0_t=c/(1+r)^T-t, t∈𝕋.Now suppose that every claim is integer replicable. In particular, then, each claim is replicable in the classical sense. It is well known that thisimplies that Ω has a partition into finitely many atoms. (This result is, of course, usually proved in the framework of a model satisfying NA. Assuming NA is not necessary though, as seen from the proof of Theorem 5.37 in <cit.>.) If Ω does not consist of a single atom, then we can fix two distinct atoms A and B. For a random variable X we can find its essential value on A (resp. B) and denote it by δ_A(X) (resp. δ_B(X)). A self-financing integer trading strategy ϕ̅ is uniquely defined by specifying its initial wealth V_0(ϕ̅) and the predictable ℤ^d-valued process ϕ=(ϕ_t)_t=1,…,T. Thus there is a bijective map Γ: ℝ×𝒵_c→𝒵 where 𝒵_c := { (ϕ^1,…,ϕ^d): ϕ̅∈𝒵} is countable with V_0(Γ(v,ϕ)) = v for any v∈ℝ. In particular, v↦ V_T(Γ(v,ϕ)) is affine. We have { (a,b)∈[0,∞)^2 : a1_A+b1_B can be integer replicated}⊆{(δ_A(V_T(ϕ̅)),δ_B(V_T(ϕ̅))): ϕ̅∈𝒵} =⋃_ϕ∈𝒵_c{(δ_A(V_T(Γ(v,ϕ))),δ_B(V_T(Γ(v,ϕ)))): v∈ℝ}.For each ϕ∈𝒵_c, the set {(δ_A(V_T(Γ(v,ϕ))),δ_B(V_T(Γ(v,ϕ)))): v∈ℝ} is a null set for the two-dimensional Lebesgue measure, because it is a one dimensional affine space in ℝ^2. We conclude that (<ref>) has Lebesgue measure zero, and hence (<ref>) is a null set, too. This contradicts our assumption.Recall that in the classical case (assuming (F), so that integrability holds), the set of arbitrage free prices has the representationΠ(C) = { E_Q[C/(1+r)^T]: Q∈𝔔},where 𝔔 is the set of equivalent martingale measures. The corresponding result for NIA looks as follows: Assume (F) and that the model satisfies NIA. Let C be a claim. ThenΠ_ℤ(C) ⊆{ E_Q[C/(1+r)^T] : Q ∈}.Suppose that p∈Π_ℤ(C). 
Then there is an adapted process X such that X_0 = p, X_T=C and (S^0,…,S^d,X) satisfies NIA. Letbe the set of absolutely continuous martingale measures that satisfy part (iii) of Definition <ref> for this market. By Theorem <ref> there is Q∈⊆. Then p = E_Q[C/(1+r)^T].The following example shows that the inclusion in Proposition <ref> can be strict. In fact, in this example we have Π_ℤ(C)=∅.Let Ω = {ω_1,ω_2,ω_3}, r=0, d=2, T=1 and P[{ω_j}]=1/3 for any j=1,2,3. Then the riskless asset is constant 1, i.e. S_t^0 = 1 for t∈{0,1}. We choose (S_0^1,S_0^2) = (π,1) and(S_1^1,S_1^2)(ω_j) = (2π,2) j=1, (π/2,1/2) j=2, (π,2) j=3.A short calculation reveals that Q[{ω_j}] := 1_{j≠ 3}j/3 is the only martingale measure. Obviously, we have Q≪ P, and the only integer strategy with zero initial wealth and Q-a.s. zero final wealth is identically zero. Thus Q satisfies (ii) in Theorem <ref> and, hence, we have NIA. Since Q is the only martingale measure we have ={Q}=. Now, we consider the claim C:=1_{ω_3}.Proposition <ref> yields that Π_ℤ(C) ⊆{ E_Q[C] } = {0}.Define the extended model (S^0,S^1,S^2,X), where X_0:=0, X_1:=C. Since (0,0,0,1) is an integer arbitrage for the extended model, it follows that 0∉Π_ℤ(C), and so Π_ℤ(C) = ∅⊊{0} = { E_R[C/(1+r)^T] : R ∈}. If the model satisfies NA (and not just NIA), then we can compare the sets of classical resp. integer arbitrage free prices. It is well known that Π(C) is an interval, which is open for non-replicable C and consists of a single point if C is replicable. It turns out that under NA the set Π_ℤ(C) is an interval, too, which may differ from Π(C) only at the endpoints. In particular, if NA holds, then Π_ℤ(C) cannot be empty.Suppose (F), that the model satisfies NA and let C be a claim. Then Π(C) ⊆Π_ℤ(C) ⊆cl(Π(C)). Moreover, if sup(Π_ℤ(C)) ∈Π_ℤ(C), then either C has aduplication strategy in 𝒬or there is no cheapest classical superhedging strategy that is in 𝒬. The first inclusion is trivial. Proposition <ref> in combination with NA yields that ^max is the set of martingale measures which are equivalent to P and that this set is dense in . Thus, Proposition <ref> impliesΠ_ℤ(C)⊆{ E_Q[C/(1+r)^T]:Q∈}⊆cl{ E_Q[C/(1+r)^T]:Q∈^max}= cl(Π(C)).To show the second assertion, suppose that s:=sup(Π_ℤ(C)) ∈Π_ℤ(C), and that there is a cheapest classical superhedging strategy ϕ̅∈𝒬. This means that ϕ̅ has price V_0(ϕ̅)=s and payoff V_T(ϕ̅) ≥ C. Since s ∈Π_ℤ(C), there is an integer-arbitrage free extension of the model where C trades at price s.Consider the strategy (ϕ̅,-1) in the extended model. Its cost is zero, and its payoff is V_T(ϕ̅)-C≥0. By part (i) of Lemma <ref>, we conclude C=V_T(ϕ̅), and so ϕ̅∈𝒬 is a duplication strategy for C. Alternatively, the inclusion Π_ℤ(C) ⊆cl(Π(C)) can be proved using Lemma <ref> (i), Lemma <ref> (i), and Theorem <ref>.In the preceding theorem the interval boundaries may or may not be contained in Π_ℤ(C), as the following example shows. The computations needed for parts (ii)-(iv) are similar to (i), and we omit the details.Let Ω={ω_1,ω_2,ω_3}, r=0, T=1 and assume that the number of risky assets is d=1. Let S_0^1 = 2 and S_1^1(ω_j) = 1 j=1, 3 j=2,3 j=3. The equivalent martingale measures are given by Q_α := 12 δ_ω_1 + αδ_ω_2 + (12 -α)δ_ω_3, α∈(0,12), and so the model satisfies NA. (i) Define the claimC(ω_j) = 2√(2)j=1, 0 j=2,4√(2) j=3.Using (<ref>), we find the classical arbitrage free pricesΠ(C)={ E_Q_α[C]: α∈(0,12)}=(√(2),3√(2)).We now check the boundary points for integer arbitrage, using part (iii) of Lemma <ref>. 
An integer arbitrage in the market extended by C with price p thus amounts to ϕ∈ℤ^2 such that ϕ(Δ S_1^1,C-p)≥ 0 and ϕ(Δ S_1^1,C-p)≠ 0. For p=√(2), we get the inequalities -ϕ^1 + √(2)ϕ^2 ≥ 0, ϕ^1 - √(2)ϕ^2 ≥ 0, ϕ^1 + 3√(2)ϕ^2 ≥ 0. The solution set {(ϕ^1,ϕ^1/√(2)): ϕ^1 ∈ [0,∞)} has trivial intersection with ℤ^2, and so √(2) is an integer arbitrage free price for C. Similarly, we obtain that 3√(2)∈Π_ℤ(C) as well, and we conclude that the interval Π_ℤ(C) contains both endpoints: Π_ℤ(C) = [√(2),3√(2)]. We now verify that there is no cheapest classical superhedge in 𝒬, in accordance with the second assertion of Theorem <ref>. (Note that C is not replicable, as |Π(C)|>1; in particular, there is no replication strategy in 𝒬.) Clearly, if ϕ̅∈ℝ^2 is a cheapest superhedge, then ϕ^0 must satisfy ϕ^0 = max_ω∈Ω(C(ω)-ϕ^1 S_1^1(ω)). The cost of this strategy then is V_0(ϕ̅) = max_ω∈Ω(C(ω)-ϕ^1 S_1^1(ω)) + ϕ^1 S_0^1 = max_ω∈Ω(C(ω)-ϕ^1 Δ S_1^1(ω)). The optimal strategy is ϕ̅=(3√(2),√(2))∉𝒬, because the problem inf_ϕ^1∈ℝ max_ω∈Ω(C(ω)-ϕ^1 Δ S_1^1(ω)) = inf_ϕ^1∈ℝ max{2√(2)+ϕ^1, -ϕ^1, 4√(2)-ϕ^1} has the unique solution ϕ^1=√(2). Similarly, we obtain that the most expensive classical subhedge is not in 𝒬, agreeing with the (obvious) subhedging variant of the second assertion of Theorem <ref>. (ii) If C(ω_1) = 2√(2), C(ω_2) = 0, C(ω_3) = 2√(2), then Π_ℤ(C) = [√(2),2√(2)). The cheapest classical superhedge is in 𝒬, whereas the most expensive classical subhedge is not in 𝒬. (iii) If C(ω_1) = 0, C(ω_2) = 0, C(ω_3) = 2√(2), then Π_ℤ(C) = (0,√(2)]. The cheapest classical superhedge is not in 𝒬, whereas the most expensive classical subhedge is in 𝒬. (iv) If C(ω_1) = 0, C(ω_2) = 0, C(ω_3) = 2, then Π_ℤ(C) = (0,1). The cheapest classical superhedge and the most expensive classical subhedge are both in 𝒬. It might make sense to restrict attention to static trading strategies in the claim, e.g., as a simple approach for modelling the typically reduced liquidity of derivatives compared to their underlyings. This means that the claim can initially be bought or sold, but not traded until maturity. In the classical case, the superhedging theorem (Theorem <ref>) readily yields that the set Π^stat(C) of static-arbitrage-free claim prices defined in this way satisfies Π^stat(C) = Π(C). Now suppose that our model satisfies only NIA. Analogously to (<ref>), define σ̂_ℤ(C) = sup{V_0(ϕ̅): ϕ̅∈𝒵, V_T(ϕ̅)≤ C}. For p∉ [σ̂_ℤ(C), σ_ℤ(C)], we clearly have p∉Π_ℤ^stat(C), because, using appropriate integer sub- resp. superhedges, one can easily construct a static integer arbitrage for the extended model. Therefore, we obtain Π_ℤ(C) ⊆Π_ℤ^stat(C) ⊆ [σ̂_ℤ(C), σ_ℤ(C)]. We now proceed to identify the value of the `cheapest' superhedge in 𝒬. The only difference to the classical case is that the cheapest superhedge is not necessarily in 𝒬, but can be approximated arbitrarily well with superhedges in 𝒬 (even if only NIA holds). For results on the `cheapest' superhedge in 𝒵, see Section <ref>. Assume (F), that the model satisfies NIA and let C be a claim. Then there is a cheapest superhedge ϕ̅∈ℛ which satisfies V_0(ϕ̅) = sup{E_Q[C/(1+r)^T]:Q∈} = sup{E_Q[C/(1+r)^T]:Q∈^max}. Moreover, for any ϵ>0 there is a superhedge ξ̅∈𝒬 for C such that V_0(ξ̅) ≤ V_0(ϕ̅) + ϵ. Proposition <ref> yields sup{E_Q[C/(1+r)^T]:Q∈} = sup{E_Q[C/(1+r)^T]:Q∈^max}. Let Q∈^max and define A:={ω∈Ω: Q[{ω}]=0}. Then S̅ restricted to Ω∖ A satisfies NA, because Q is a martingale measure equivalent to P there. By (F), there is a cheapest superhedge for C on this market, which we denote by η̅∈ℛ. It satisfies V_T(η̅) ≥ C Q-a.s.
Since Q∈^max, there is ψ̅∈ℛ_0 such that V_T(ψ̅)≥ 0 and {V_T(ψ̅)>0} = A. As Ω is finite, there is a∈ℝ such that V_T(aψ̅+η̅) = aV_T(ψ̅)+V_T(η̅) ≥ C. By Proposition <ref> and Theorem <ref>, we find that ϕ̅:= aψ̅+η̅ is a superhedge for C with initial price V_0(ϕ̅) = V_0(η̅) = sup{E_Q[C/(1+r)^T]:Q∈^max}. Now let γ̅∈ℛ be any superhedge for C. Then V_T(γ̅) ≥ C Q-a.s. for any Q∈^max and, hence, V_0(γ̅) = E_Q[V_T(γ̅)/(1+r)^T] ≥ sup{E_Q[C/(1+r)^T]:Q∈^max}. Consequently, ϕ̅ is a cheapest superhedge for C. The second assertion follows easily from part (i) of Lemma <ref>.

§ THE STRUCTURE OF THE SET OF INTEGER-ARBITRAGE-FREE PRICES

The main result of this section is that the set Π_ℤ(C) of NIA-compatible claim prices is always dense in an interval (Theorem <ref>). First, we give some sufficient conditions that imply that Π_ℤ(C) equals an interval. Assume (F) and that the model satisfies NIA. Then Π_ℤ(C) is an interval if any of the following statements holds: (i) there is only one trading period (T=1), and Π_ℤ(C) is not empty, (ii) there is only one risky asset (d=1), or (iii) the model satisfies NA. If (ii) holds, then Theorem <ref> yields that (iii) holds. If we assume (iii), then Theorem <ref> yields the claim. Now assume that (i) holds. Proposition <ref> implies that Π_ℤ(C) ⊆{E_Q[C/(1+r)^T] : Q∈} =: J. If J is a singleton, then we have equality by assumption and, hence, the claim follows. Thus we may assume that J contains at least two points. The set J is an interval. Let p be in the interior of J and define X_0:=p, X_1:=C. Assume for contradiction that there is an integer arbitrage (ϕ̅,ϕ^d+1) for the model (S^0,…,S^d,X). By part (ii) of Lemma <ref>, we may assume that V_0(ϕ̅,ϕ^d+1)=0. We have ϕ_1^d+1≠ 0, because otherwise ϕ̅ is an integer arbitrage for (S^0,…,S^d) with V_0(ϕ̅) = 0. Case 1: ϕ_1^d+1<0. Then C ≤ -(1/ϕ_1^d+1)∑_j=0^d ϕ_1^j S_1^j. Thus, ψ_1^j := -ϕ^j/ϕ^d+1, j=0,…,d, is a superhedge for C. We have V_0(ψ̅) = p. Proposition <ref> yields that p = V_0(ψ̅) ≥ sup(J) > p. A contradiction. Case 2: ϕ_1^d+1>0; analogous. Thus, p∈Π_ℤ(C), which yields that the interior of J is contained in Π_ℤ(C) ⊆ J and, hence, Π_ℤ(C) is an interval. We now give an example in which the set of integer arbitrage compatible prices is not an interval. More precisely, we exhibit a model satisfying NIA and a claim C where the set of NIA-consistent prices is given by Π_ℤ(C) = [0,1/2]∖ (ℚ + ℚπ). Let Ω := {ω_1,ω_2,ω_3,ω_4}, d=2, T=2 and r=0. We use the filtration ℱ_0 := {∅,Ω}, ℱ_1 := σ({ω_1},{ω_2,ω_3},{ω_4}) and ℱ_2 := 2^Ω. We choose the market model given by (S^1_0,S^2_0) := (1,π) and S_2(ω_j) := S_1(ω_j), where S_1(ω_1) := (3/2,3π/2), S_1(ω_2) = S_1(ω_3) := (1/2,π/2), S_1(ω_4) := (1,1+π). This market allows for the static real arbitrage η̅_t := (0,-π,1), which is self-financing, satisfies V_0(η̅) = 0 and V_2(η̅) = 1_{ω_4}. Thus, any martingale measure Q must satisfy Q[{ω_4}] = 0. For any α∈[0,1], define the measure Q_α by Q_α[{ω_1}] := 1/2, Q_α[{ω_2}] := α/2, Q_α[{ω_3}] := (1-α)/2, Q_α[{ω_4}] := 0. Then Q_α is a martingale measure and 𝒵_Q_α^0 = {0}. In particular, Theorem <ref> yields that NIA holds. Moreover, the relevant set of martingale measures is exactly {Q_α: α∈[0,1]}. Now we choose the claim C := 1_{ω_2}. Proposition <ref> yields Π_ℤ(C) ⊆{ E_Q_α[C] : α∈[0,1]} = [0,1/2]. Let p∈ [0,1/2]∩ (ℚ+ ℚπ), p≠ 0, and assume for contradiction that p∈Π_ℤ(C). Then there is an adapted process (X_0,X_1,X_2) such that X_0=p, X_2=C and the model (S^0,S^1,S^2,X) satisfies NIA. Define α := 2p, and let u,v∈ℚ be such that α=u+vπ. Define the strategy (ϕ̅_t,ϕ^3_t)_t=1,2∈𝒬 by ϕ^3_1 := sgn((2-π)v-u) ∈{-1,1} and (ϕ̅_1,ϕ_1^3) := (-(3/2)αϕ_1^3, uϕ_1^3, vϕ_1^3, ϕ^3_1), (ϕ̅_2,ϕ_2^3)(ω_j) := 0 for j=1,2,3 and (ϕ̅_2,ϕ_2^3)(ω_4) := ((1/2)|(2-π)v-u|, 0, 0, 0).
This strategy satisfies V_0(ϕ̅,ϕ^3) = 0 and V_1(ϕ̅,ϕ^3) = V_2(ϕ̅,ϕ^3) = (1/2)|(2-π)v-u|·1_{ω_4}. Thus, (ϕ̅,ϕ^3) is a rational arbitrage. A contradiction. Let p=0 and assume for contradiction that p∈Π_ℤ(C). Then there is an adapted process (X_0,X_1,X_2) such that X_0=0, X_2=C and the model (S^0,S^1,S^2,X) satisfies NIA. Define the static strategy (ϕ̅,ϕ^3) := (0,0,0,1). We have V_0(ϕ̅,ϕ^3) = 0 and V_2(ϕ̅,ϕ^3) = 1_{ω_2}. Thus, (ϕ̅,ϕ^3) is an integer arbitrage. A contradiction. We have shown so far that Π_ℤ(C) ⊆ [0,1/2]∖ (ℚ + ℚπ). Conversely, let now p∈ [0,1/2]∖ (ℚ + ℚπ). We show that p∈Π_ℤ(C). Define α := 2p and X^α_0 := α/2, X^α_1 := α 1_{ω_2,ω_3}, X^α_2 := C. To see that the model (S^0,S^1,S^2,X^α) satisfies NIA, assume for contradiction that there is an integer arbitrage. Then there is a one period arbitrage (ϕ̅,ϕ^3). Obviously, there is no arbitrage possibility in the second period, and so we may assume (ϕ_2,ϕ^3_2)=0 (i.e., no risky position in the second period). Thus, V_1(ϕ̅,ϕ^3) = V_2(ϕ̅,ϕ^3) ≥ 0. Since Q_α is a martingale measure for the extended model, we get V_1(ϕ̅,ϕ^3) = 0 Q_α-a.s. In particular, V_1(ϕ̅,ϕ^3)(ω_2)=0, and together with V_0(ϕ̅,ϕ^3)=0 (see Lemma <ref> (ii)) this implies ϕ^1_1 + πϕ^2_1 - αϕ^3_1 = 0. As the original model satisfies NIA, we must have ϕ^3_1≠ 0, which leads to the contradiction 2p = α = (ϕ^1_1 + πϕ^2_1)/ϕ^3_1 ∈ℚ + ℚπ. Thus, there is no integer arbitrage, i.e. the model satisfies NIA and, hence, p∈Π_ℤ(C). Throughout the remainder of this section, we will always assume (F), NIA and ℱ_T=𝒜. Also, let C be a claim. The following theorem is our main result on the structure of Π_ℤ(C) in the general case. The set Π_ℤ(C) is either empty or dense in [inf{ E_Q[C/(1+r)^T]: Q∈𝔔_0}, sup{ E_Q[C/(1+r)^T]: Q∈𝔔_0}]. The theorem will follow from Lemmas <ref> and <ref> below. Note that 𝔔_0 here can be replaced by the measure set of Proposition <ref>, due to the last assertion of that proposition. In order to prove Theorem <ref>, we assume that Π_ℤ(C) is non-empty, and choose p^*∈Π_ℤ(C). By definition, there is an adapted process (X^*_t)_t∈𝕋 such that X^*_0=p^*, X^*_T=C, and the model (S^0,…,S^d,X^*) satisfies NIA. We also define A_t := {ω∈Ω: ∃ϕ̅∈ℛ_0: V_T(ϕ̅)≥ 0, V_T(ϕ̅)(ω)>0, ϕ̅_1,…,ϕ̅_t=0} for any t∈𝕋∖{T}. By the same argument as in the proof of Theorem <ref>, there is ϕ̅∈ℛ_0 with ϕ̅_1,…,ϕ̅_t=0, V_T(ϕ̅)≥ 0 and {V_T(ϕ̅)> 0} = A_t. From the definition we see that A_t⊆ A_t-1 and A_t-1∖ A_t∈ℱ_t for any t∈𝕋∖{0}. We write 𝔔_t for the set of measures Q such that (Ŝ_u)_u=t,…,T is a Q-martingale, Ω∖ A_t is the support of Q, and Q[B]>0 for any non-empty set B∈ℱ_t. Now we define two sequences of sets: 𝒦_T := {C}, 𝒦_t := {E_Q[D/(1+r)|ℱ_t]: Q∈𝔔_t, D∈𝒦_t+1}, 𝒞_T := {C}, 𝒞_t := {E_Q[D/(1+r)|ℱ_t]: Q∈𝔔_t, D∈𝒞_t+1, ∀ B∈ℱ_t ∀ξ∈ℚ^d ∀ s∈{-1,1}: 1_B ξΔŜ_t+1 ≥ s1_B(D/(1+r)^(t+1)-E_Q[D/(1+r)^(t+1)|ℱ_t]) ⇒ 1_B ξΔŜ_t+1 = s1_B(D/(1+r)^(t+1)-E_Q[D/(1+r)^(t+1)|ℱ_t])} for any t∈𝕋∖{T}. Lemma <ref> below together with the convexity of 𝔔_0 implies that 𝒦_0 = [inf{ E_Q[C/(1+r)^T]: Q∈𝔔_0}, sup{ E_Q[C/(1+r)^T]: Q∈𝔔_0}], and Lemma <ref> below yields a countable exception set F such that 𝒞_0 = 𝒦_0∖ F. Finally, Lemma <ref> states that 𝒞_0 is contained in the set Π_ℤ(C) of NIA-compatible prices, which establishes Theorem <ref>. For technical reasons, we first analyse the sets 𝔔_t, and we will need the stochastic convexity of 𝒦_t given in Lemma <ref> below. Let t∈𝕋. Then 𝔔_t is non-empty. If t≠ 0, then for any Q∈𝔔_t-1 there is Q'∈𝔔_t such that E_Q'[X|ℱ_s] = E_Q[X|ℱ_s] Q-a.s. for any s=t,…,T and any random variable X:Ω→ℝ. Let I := {t∈𝕋: the claim holds for t}. We have 0∈ I by Theorem <ref> (i). Let t∈𝕋 such that t-1∈ I.
We show t∈ I, which implies I=𝕋 and, hence, the claim. We directly produce the measure with the given extra property. To this end, let Q∈𝔔_t-1. Let B_1,…,B_m be an enumeration of the minimal non-empty elements of ℱ_t and define k := |{ B_l: l=1,…,m, B_l⊆ A_t-1∖ A_t}|. We may assume that B_1,…,B_k⊆ A_t-1∖ A_t. Since Ω∖ A_t-1 is the support of Q, we have Q[B_l] > 0 for any l=k+1,…,m. Since A_t-1∖ A_t is ℱ_t-measurable, we have A_t-1∖ A_t = ⋃_l=1^k B_l. Define the probability measures P_l := P/P[B_l] on B_l. Since the model (Ŝ_u)_u=t,…,T satisfies NIA, we get from Theorem <ref> (i) a martingale measure Q_l≪ P_l on B_l. Define the probability measure Q'[D] := (Q[D] + ∑_l=1^k Q_l[D∩ B_l])/(1+k), D∈𝒜. Clearly, the support of Q' is Ω∖ A_t and Q'[D]>0 for any non-empty D∈ℱ_t. Also, (Ŝ_u)_u=t,…,T is a Q'-martingale. Thus, we have Q'∈𝔔_t. Now, let X:Ω→ℝ be a random variable and s∈{t,…,T}. We show that E_Q'[X|ℱ_s] is a version of the ℱ_s-conditional expectation of X under Q. To this end, let D∈ℱ_s and define D' := D∖ A_t-1. Then D' is Q'-essentially ℱ_s-measurable, because A_t is a Q'-null set and A_t-1∖ A_t is ℱ_t⊆ℱ_s-measurable. We have E_Q[E_Q'[X|ℱ_s]1_D] = E_Q[E_Q'[X|ℱ_s]1_D'] = E_Q[E_Q'[X1_D'|ℱ_s]] = E_Q[E_Q[X1_D'|ℱ_s]] = E_Q[X1_D'] = E_Q[X1_D]. Thus, t∈ I. For any t∈𝕋 we have 𝒦_t = { E_Q[C/(1+r)^(T-t)|ℱ_t]: Q∈𝔔_t}. Define I := { t∈𝕋: 𝒦_t = { E_Q[C/(1+r)^(T-t)|ℱ_t]: Q∈𝔔_t}}. Obviously, T∈ I. Let t∈ I∖{0}. We show that t-1∈ I, which implies I=𝕋 and, hence, the claim. To this end, let X_t-1∈𝒦_t-1. Then there is Q∈𝔔_t-1 and X_t∈𝒦_t such that X_t-1 = E_Q[X_t/(1+r)|ℱ_t-1]. Since X_t∈𝒦_t, there is R∈𝔔_t such that X_t = E_R[C/(1+r)^(T-t)|ℱ_t]. Define the measure Q'[B] := E_Q[R[B|ℱ_t]], B∈𝒜. Since R∈𝔔_t, we have R[B] > 0 for any non-empty set B∈ℱ_t. Let B∈ℱ_t-1⊆ℱ_t be non-empty. Then R[B|ℱ_t] = 1_B and, hence, Q'[B]=Q[B]>0. Also, (Ŝ_u)_u=t-1,…,T is a Q'-martingale. Since A_t⊆ A_t-1 and A_t-1∖ A_t∈ℱ_t, we get R[A_t-1|ℱ_t] = 1_A_t-1∖ A_t + R[A_t|ℱ_t]. However, A_t is an R-null set, hence R[A_t|ℱ_t] = 0 R-a.s. Since ℱ_t has no non-empty R-null sets, we have R[A_t|ℱ_t] = 0. We get R[A_t-1|ℱ_t] = 1_A_t-1∖ A_t, which yields Q'[A_t-1] = Q[A_t-1∖ A_t] = 0. Let ω∈Ω∖ A_t-1. Then R[{ω}|ℱ_t]≥ 0 and R[{ω}|ℱ_t](ω) > 0. Since Q[{ω}]>0, we get Q'[{ω}] > 0. Thus, the support of Q' is Ω∖ A_t-1, which yields Q'∈𝔔_t-1. We have E_Q'[C/(1+r)^(T-(t-1))|ℱ_t-1] = E_Q[E_R[C/(1+r)^(T-t)|ℱ_t]/(1+r)|ℱ_t-1] = E_Q[X_t/(1+r)|ℱ_t-1] = X_t-1. Thus, X_t-1∈{ E_Q[C/(1+r)^(T-(t-1))|ℱ_t-1]: Q∈𝔔_t-1}. Now, let X_t-1∈{ E_Q[C/(1+r)^(T-(t-1))|ℱ_t-1]: Q∈𝔔_t-1}; we have to show that X_t-1∈𝒦_t-1. There is Q∈𝔔_t-1 such that X_t-1 = E_Q[C/(1+r)^(T-(t-1))|ℱ_t-1]. By Lemma <ref> we find Q'∈𝔔_t such that E_Q[Y|ℱ_s] = E_Q'[Y|ℱ_s] Q-a.s. for any random variable Y:Ω→ℝ and any s=t,…,T. Define X_t := E_Q'[C/(1+r)^(T-t)|ℱ_t] ∈𝒦_t, which holds because t∈ I. We find 𝒦_t-1 ∋ E_Q[X_t/(1+r)|ℱ_t-1] = E_Q[E_Q'[C/(1+r)^(T-t+1)|ℱ_t]|ℱ_t-1] = E_Q[C/(1+r)^(T-(t-1))|ℱ_t-1] = X_t-1. Thus, t-1∈ I. For any t∈𝕋 and any ℱ_t-measurable random variable α with values in [0,1] and any X,Y∈𝒦_t, we have α X + (1-α)Y ∈𝒦_t. Lemma <ref> yields measures Q,R∈𝔔_t such that X = E_Q[C/(1+r)^(T-t)|ℱ_t], Y = E_R[C/(1+r)^(T-t)|ℱ_t]. Define the measure Q'[B] := E_Q[α Q[B|ℱ_t] + (1-α)R[B|ℱ_t]]. It is clear that Q and Q' agree on ℱ_t, and one easily verifies Q'∈𝔔_t. Let B∈ℱ_t. Then E_Q'[1_B C/(1+r)^(T-t)] = E_Q[α E_Q[1_B C/(1+r)^(T-t)|ℱ_t] + (1-α)E_R[1_B C/(1+r)^(T-t)|ℱ_t]] = E_Q[α 1_B X + (1-α)1_B Y] = E_Q'[α 1_B X + (1-α)1_B Y]. We find 𝒦_t ∋ E_Q'[C/(1+r)^(T-t)|ℱ_t] = α X + (1-α)Y. We have 𝒞_0 ⊆Π_ℤ(C). Let p∈𝒞_0. Define X_0:=p.
We can find recursively X_t+1∈𝒞_t+1 and Q_t∈𝔔_t such that X_t = E_Q_t[X_t+1/(1+r)|ℱ_t] for t∈𝕋∖{T}. Since X_T∈𝒞_T = {C}, we have X_T=C. Assume for contradiction that there is an integer arbitrage for the model (S^0,…,S^d,X). Then there is a one period integer arbitrage (ϕ̅,ϕ^d+1), i.e. there is t_0∈𝕋 such that (ϕ_t,ϕ^d+1_t)=0 for any t∈𝕋∖{0,t_0}. Then there is a minimal set B∈ℱ_t_0-1∖{∅} such that 1_B(ϕ̅,ϕ^d+1) is still an arbitrage. Define (η,η^d+1) := (ϕ_t_0,ϕ^d+1_t_0)(ω)∈ℤ^d+1 for some ω∈ B. Then Y := 1_B(ηΔŜ_t_0 + η^d+1(X_t_0/(1+r)^t_0 - X_t_0-1/(1+r)^(t_0-1))) ≥ 0 and P[Y>0]>0. Since the model (S^0,…,S^d) satisfies NIA, we have η^d+1≠ 0 and can define ξ^j := η^j/η^d+1. We get Y/η^d+1 = 1_B(ξΔŜ_t_0 + (X_t_0/(1+r)^t_0 - X_t_0-1/(1+r)^(t_0-1))). Thus, X_t_0-1∉𝒞_t_0-1. A contradiction. It is not hard to see that actually 𝒞_0 = Π_ℤ(C), but we will not use this fact. Let t∈𝕋. Let X_t∈𝒦_t and define X^α_t := α X_t + (1-α)X_t^* for any α∈[0,1]. Then there is a countable set F⊆ (0,1] such that X_t^α∈𝒞_t for any α∈[0,1]∖ F. In particular, 𝒞_t is dense in 𝒦_t. Define I := { t∈𝕋: the claim holds for this t}. Obviously, T∈ I. Let t∈ I∖{0}. We show that t-1∈ I, which implies I=𝕋 and, hence, the claim. To this end, let X_t-1∈𝒦_t-1. Then there are X_t∈𝒦_t and Q∈𝔔_t-1 such that X_t-1 = E_Q[X_t/(1+r)|ℱ_t-1]. We define X_t-1^α := α X_t-1 + (1-α)X^*_t-1, X_t^α := α X_t + (1-α)X^*_t for any α∈[0,1]. There is F_t⊆ (0,1] countable such that X_t^α∈𝒞_t for any α∈ [0,1]∖ F_t. For any α∈[0,1]∖ F_t we find recursively X^α_s+1∈𝒞_s+1 and Q^α_s∈𝔔_s such that X^α_s = E_Q^α_s[X^α_s+1/(1+r)|ℱ_s], for s≥ t. We will show that (S_u^0,…,S_u^d,X^α_u)_u=t-1,…,T satisfies NIA for all but countably many choices of α∈[0,1]. Since the existence of an integer arbitrage implies the existence of a one-period arbitrage, and the market (S_u^0,…,S_u^d,X^α_u)_u=t,…,T does not allow for arbitrage, we know that this arbitrage must be in the period from t-1 to t. Since ℱ_t-1 is generated by finitely many atoms, it is sufficient to condition on one of the atoms. Thus, we may simply assume that t=1. We define F_0 := {α∈(0,1] : α∈ F_1 or X^α_0∉𝒞_0}, and we will show that F_0 is countable. To this end, we define the sets 𝒟_1 := { Y∈ L^0((Ω,ℱ_1,P),ℝ): Y=ξΔŜ_1 Q-a.s., ξ∈ℝ^d}, 𝒟^ℚ_1 := { Y∈ L^0((Ω,ℱ_1,P),ℝ): Y=ξΔŜ_1 Q-a.s., ξ∈ℚ^d} and ΔX̂^α_1 := X^α_1/(1+r) - X^α_0 for α∈[0,1]. Case 1: ΔX̂_1∉𝒟_1 or ΔX̂^*_1∉𝒟_1. We define F̅ := {α∈ [0,1]: ΔX̂_1^α∈𝒟_1}. Since 𝒟_1 is a vector space, we find that F̅ contains at most one element. We claim that (0,1]∖ (F_1∪F̅) does not contain any element of F_0. To this end, let α∈ (0,1]∖ (F_1∪F̅) and assume for contradiction that there is an integer arbitrage in the first period. Hence, there is ξ∈ℤ^d+1 such that ξ^d+1≠ 0 and ξΔŜ_1 + ξ^d+1ΔX̂^α_1≥ 0. Since Q is a martingale measure, we get ξΔŜ_1 + ξ^d+1ΔX̂^α_1 = 0 Q-a.s., and after solving for ΔX̂^α_1 we find ΔX̂^α_1∈𝒟_1. A contradiction. Case 2: ΔX̂_1,ΔX̂^*_1 ∈𝒟_1 and there is a set with positive Q-measure on which ΔX̂_1 ≠ΔX̂^*_1. Since 𝒟^ℚ_1 restricted to Ω∖ A_0 has countably many elements, we find that ΔX̂^α_1∈𝒟^ℚ_1 for at most countably many α. Denote the set of α∈[0,1] with ΔX̂^α_1∈𝒟^ℚ_1 by F̅. We claim F_0⊆ F_1∪F̅. To this end, let α∈ (0,1]∖ (F_1∪F̅) and assume for contradiction that there is an integer arbitrage in the first period. Then there is ξ∈ℤ^d+1 such that ξ^d+1≠ 0 and ξΔŜ_1 + ξ^d+1ΔX̂^α_1≥ 0. Since Q is a martingale measure, we get ξΔŜ_1 + ξ^d+1ΔX̂^α_1 = 0 Q-a.s., and after solving for ΔX̂^α_1 we find ΔX̂^α_1∈𝒟_1^ℚ. A contradiction. Case 3: ΔX̂_1,ΔX̂^*_1 ∈𝒟_1 and ΔX̂_1 = ΔX̂^*_1 Q-a.s.
Then we have X_1 = (1+r)(ΔX̂_1+X_0) = (1+r)(ΔX̂^*_1+X_0) = X_1^* + (1+r)(X_0-X_0^*) Q-a.s. If X_0 = X_0^*, then X_0=X_0^*∈𝒞_0. Thus we may assume that X_0≠ X_0^*. If R[B] ∈{0,1} for any R∈𝔔_0 and any B∈ℱ_1, then Lemma <ref> yields ℱ_1=ℱ_0 and, hence, ΔX̂_1 = 0 = ΔX̂^*_1, which yields that F_0 = F_1. Thus, we may assume that there is R∈𝔔_0 and B∈ℱ_1 with R[B]∈ (0,1). By Proposition <ref> we find that there is B∈ℱ_1 such that for any R∈𝔔_0 we have R[B]∈ (0,1). In particular, we have Q[B]∈ (0,1). For n∈ℕ we define Y_1^n := X_1 1_B^c + ((1-1/n)X_1+X_1^*/n)1_B = X_1 + 1_B(X_1^*-X_1)/n, Y_0^n := X_0 Q[B^c] + ((1-1/n)X_0+X_0^*/n)Q[B] = X_0 + Q[B](X_0^*-X_0)/n. Lemma <ref> yields that Y_1^n∈𝒦_1 for any n∈ℕ. The measure Q_n[D] := E_Q[Q_n'[D|ℱ_1]], where Q_n'∈𝔔_1 is such that Y_1^n = E_Q_n'[C/(1+r)^(T-1)|ℱ_1], satisfies E_Q_n[Y_1^n/(1+r)] = E_Q[Y_1^n/(1+r)] = Y_0^n, where the last equality follows from the definition of Q and (<ref>). Thus, Y_0^n∈𝒦_0 for any n∈ℕ. Observe that ΔŶ_1^n := Y_1^n/(1+r) - Y_0^n = ΔX̂_1 + (1/n)(1_B(X_1^*-X_1)/(1+r) - Q[B](X_0^*-X_0)). We find that ΔŶ_1^n ≠ΔX̂_1^* with positive Q-probability. By appealing to case 1 resp. case 2 we find F^n⊆[0,1] countable such that α Y_0^n+(1-α)X_0^*∈𝒞_0 for any α∈ [0,1]∖ F^n. Define the countable set F̅ := {α(1-Q[B]/n): n∈ℕ, α∈ F^n}. We claim that F_0∖{1}⊆ F_1∪F̅. To this end, let α∈ (0,1) ∖ (F_1∪F̅). Choose n∈ℕ such that n > Q[B]/(1-α). Then there is α'∈ [0,1] such that α = α'(1-Q[B]/n). We find α'∉ F^n because α∉F̅. Thus, we have 𝒞_0 ∋α'Y_0^n+(1-α')X_0^* = X_0^* + (α/(1-Q[B]/n))(Y_0^n-X_0^*) = X_0^* + (α/(1-Q[B]/n))(X_0-X_0^*)(1-Q[B]/n) = X_0^* + α(X_0-X_0^*) = X_0^α. Consequently, α∉ F_0, and we have shown that F_0 is countable.

§ INTEGER SUPERHEDGING

In this section we discuss some properties of the integer superhedging price σ_ℤ(C) of a claim, as defined in (<ref>). First, we give a simple example where it does not agree with the classical superhedging price supΠ(C). In this example, the gap between supΠ(C) and the cheapest integer superhedging price σ_ℤ(C) has size a, for an arbitrary number a>0. On the probability space Ω={ω_1,ω_2}, consider the one-dimensional model S^1_0 = 1, S^1_1(ω_1) = 1-2a, S^1_1(ω_2) = 1+2a with r=0. The unique equivalent martingale measure is (δ_ω_1+δ_ω_2)/2, and so the unique arbitrage free price of the claim C(ω_1)=0, C(ω_2)=2a is given by Π(C)={a}. By Theorem <ref>, we have Π_ℤ(C)=Π(C)={a}. The integer superhedging price is found by computing σ_ℤ(C) = inf_ϕ∈ℤ max_ω∈Ω(C(ω) - ϕΔ S_1^1(ω)) = min_ϕ∈ℤ max{2aϕ, 2a-2aϕ} = 2a. We obtain that the interval of prices of integer superhedges is [2a,∞). For real ϕ, the minimum in (<ref>) is attained at ϕ=1/2, yielding the classical superhedging price a=supΠ(C).
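The example above is simple enough to verify directly. The following minimal sketch (in Python; the concrete value a = 0.3 is an illustrative assumption, any a>0 works) reproduces σ_ℤ(C)=2a by brute force over integer positions, and the classical value a via a fine real grid:

# Numerical check of the superhedging gap example (illustrative a = 0.3).
a = 0.3
dS = [-2 * a, 2 * a]          # Delta S_1^1(omega_1), Delta S_1^1(omega_2)
C = [0.0, 2 * a]              # claim payoff in the two states

def superhedge_cost(phi):
    # cost of the cheapest superhedge with risky position phi (r = 0, S_0^1 = 1)
    return max(C[i] - phi * dS[i] for i in range(2))

# integer positions: a small window around 0 suffices here
sigma_Z = min(superhedge_cost(phi) for phi in range(-5, 6))
# real positions: a fine grid as a stand-in for the exact minimizer phi = 1/2
classical = min(superhedge_cost(phi / 1000) for phi in range(-5000, 5001))
print(sigma_Z)    # 2a = 0.6, attained at phi = 0 and phi = 1
print(classical)  # a = 0.3, attained near phi = 1/2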
As soon as a model is fixed, the gap considered in the preceding example can be bounded for all claims. In Example <ref>, we have equality in (<ref>). On ℝ^n, we always use the Euclidean norm ‖·‖ = ‖·‖_2. Assume (F) and NA, and let C be a claim. Then σ_ℤ(C) - supΠ(C) ≤ (1/2)√(d) max_ω∈Ω ∑_k=1^T ‖ΔŜ_k(ω)‖. Let ψ̅∈ℛ be a cheapest classical superhedge. By Theorem <ref>, it satisfies V_0(ψ̅)=supΠ(C). By rounding the risky positions of ψ̅ to the closest integers (with any convention for half-integers), we get a strategy (ψ^0,⌊ψ⌉)∈𝒵. Clearly, ‖ψ_t-⌊ψ⌉_t‖ ≤ ‖(1/2,…,1/2)‖ = (1/2)√(d), t∈𝕋. Define Ĉ=C/(1+r)^T, and let ϕ̅∈𝒵. Since V̂_T(ϕ̅) = V_0(ϕ̅) + ∑_k=1^T ϕ_k ΔŜ_k, we get the necessary condition V_0(ϕ̅) = max_ω∈Ω(Ĉ(ω) - ∑_k=1^T ϕ_k(ω)ΔŜ_k(ω)) if ϕ̅ is to be a cheapest integer superhedge. It follows that σ_ℤ(C) = inf_ϕ predictable, ℤ^d-valued max_ω∈Ω(Ĉ(ω) - ∑_k=1^T ϕ_k(ω)ΔŜ_k(ω)) ≤ max_ω∈Ω(Ĉ(ω) - ∑_k=1^T ⌊ψ⌉_k(ω)ΔŜ_k(ω)) ≤ max_ω∈Ω(Ĉ(ω) - ∑_k=1^T ψ_k(ω)ΔŜ_k(ω)) + max_ω∈Ω ∑_k=1^T (ψ_k(ω)-⌊ψ⌉_k(ω))ΔŜ_k(ω) ≤ supΠ(C) + max_ω∈Ω ∑_k=1^T ‖ψ_k(ω)-⌊ψ⌉_k(ω)‖·‖ΔŜ_k(ω)‖ ≤ supΠ(C) + (1/2)√(d) max_ω∈Ω ∑_k=1^T ‖ΔŜ_k(ω)‖. The following example shows that, contrary to the case of classical superhedging, there need not exist a cheapest integer superhedge. Let Ω={ω_1,ω_2}, r=0, T=1 and d=2. We choose the model with S_0 := (2,2) and S_1(ω_1) = (3, 2-√(2)), S_1(ω_2) = (1, 2+√(2)). This model satisfies NA, and it is complete in the classical sense. Indeed, the only martingale measure is given by Q[{ω_j}] = 1/2 for j=1,2. Consider the claim C(ω_1) = 1 - (1/2)√(2), C(ω_2) = 1 + (1/2)√(2), whose set of (integer) arbitrage free prices is the singleton Π_ℤ(C)=Π(C)={1}. Then there is no minimizer for the superhedging problem (see (<ref>)) inf_ϕ∈ℤ^2 max_i=1,2(C(ω_i)-ϕΔ S_1(ω_i)) =: inf_ϕ∈ℤ^2 f(ϕ). Indeed, for ϕ∈ℝ^2, the set of minimizers would be ϕ∈{(x, 1/2 + x/√(2)): x∈ℝ}, yielding inf_ϕ∈ℝ^2 f(ϕ)=1. Obviously, this set contains no integer strategies. By Kronecker's approximation theorem (Theorem <ref>), the sequence (1/2 + m/√(2)) mod 1, m∈ℕ, is dense in [0,1]. Thus, there is a sequence m_k∈ℕ such that 0 ≤ (1/2 + m_k/√(2)) mod 1 ≤ 1/k, k∈ℕ. Define ϕ^(k) := (m_k, ⌊1/2 + m_k/√(2)⌋) ∈ℤ^2, k∈ℕ. Since f is Lipschitz continuous (with constant L, say), we have |f(ϕ^(k))-1| = |f(ϕ^(k)) - f(m_k, 1/2 + m_k/√(2))| ≤ L ‖(0, (1/2 + m_k/√(2)) mod 1)‖ → 0, k→∞. Thus, the infimum of the prices of integer superhedges is σ_ℤ(C)=1, but there is no cheapest integer superhedge. Financial institutions usually hedge large portfolios of identical (or at least similar) options. The following theorem shows that, when superhedging N copies of C, the integer superhedging price per claim converges to the classical superhedging price: lim_N→∞ N^-1σ_ℤ(NC) = supΠ(C). The second part of Theorem <ref> gives an estimate on superhedging C with rational strategies with controlled denominators. Assume (F) and NA, and let C be a claim. Then (i) σ_ℤ(NC)/N = supΠ(C) + O(1/N), N→∞. (ii) There is a sequence of rational strategies ψ̅^(N)∈𝒬 such that all denominators occurring in ψ̅^(N) have absolute value at most N, V_0(ψ̅^(N)) = supΠ(C) + O(N^-1/(nd(T+1)) log N), and ψ̅^(N) is a superhedging strategy for C. (i) Assumption (F) implies that the classical superhedging price supΠ(C) = sup{E_Q[C/(1+r)^T]: Q∈𝔔} is finite. It is clear that N^-1σ_ℤ(NC) ≥ N^-1 supΠ(NC) = supΠ(C) for all N. For the converse estimate, let ϕ̅ be a classical superhedging strategy for C with price supΠ(C) (see Theorem <ref>). Define η^(N),j_t(ω) := ⌊ N ϕ_t^j(ω) ⌋ = N ϕ_t^j(ω) + O(1), ω∈Ω, t∈𝕋, 1≤ j≤ d, N∈ℕ. We choose an arbitrary map f:ℕ→ℝ satisfying lim_N→∞ f(N)=∞ and put η_1^(N),0 := Nϕ_1^0 + f(N), N∈ℕ. Then we define η_t^(N),0 for t=2,…,T recursively to obtain a self-financing integer strategy η̅^(N) for each N. By the definition of η^(N), and since ϕ̅ is a superhedging strategy, we have V̂_T(η̅^(N))/N = V_0(η̅^(N))/N + ∑_k=1^T (η^(N)_k/N)ΔŜ_k = ϕ_1^0 + f(N)/N + η_1^(N) S_0/N + ∑_k=1^T ϕ_k ΔŜ_k + O(1/N) = f(N)/N + V_0(ϕ̅) + ∑_k=1^T ϕ_k ΔŜ_k + O(1/N) ≥ C/(1+r)^T + f(N)/N + O(1/N) ≥ C/(1+r)^T for large N. This shows that η̅^(N) is an integer superhedging strategy of NC for large N, and hence σ_ℤ(NC) ≤ V_0(η̅^(N)) = N V_0(ϕ̅) + O(f(N)) = N supΠ(C) + O(f(N)), N→∞. It is easy to see that a quantity that is O(f(N)) for any f tending to infinity is O(1). Since f was arbitrary, the statement follows. (ii) Again, let ϕ̅ be a classical superhedging strategy for C with price supΠ(C).
By Dirichlet's approximation theorem (Theorem <ref>), there are 1≤ q(N)≤ N and p(N,t,j,l)∈ℤ such that |ϕ_t^j(ω_l) q(N) - p(N,t,j,l)| < N^-1/(nd(1+T)), 1≤ l≤ n, t∈𝕋, 1≤ j≤ d, N∈ℕ. We define ψ_t^(N),j(ω_l) := p(N,t,j,l)/q(N), 1≤ l≤ n, t∈𝕋, 1≤ j≤ d, N∈ℕ, which yields |ϕ_t^j(ω_l) - ψ_t^(N),j(ω_l)| < N^-1/(nd(1+T)), 1≤ l≤ n, t∈𝕋, 1≤ j≤ d, N∈ℕ. After fixing the initial bank account position ψ_1^(N),0 := ϕ_1^0 + N^-1/(nd(1+T)) log N, N∈ℕ, a strategy ψ̅^(N)∈𝒬 is defined for each N. By definition, V_0(ψ̅^(N)) = ϕ_1^0 + N^-1/(nd(1+T)) log N + ψ_1^(N) S_0 = ϕ_1^0 + N^-1/(nd(1+T)) log N + ϕ_1 S_0 + O(N^-1/(nd(1+T))) = supΠ(C) + O(N^-1/(nd(1+T)) log N), N→∞. It remains to show that ψ̅^(N) is a superhedge for C for large N. This follows from V̂_T(ψ̅^(N)) = V_0(ψ̅^(N)) + ∑_k=1^T ψ_k^(N)ΔŜ_k = V_0(ϕ̅) + N^-1/(nd(1+T)) log N + ∑_k=1^T ϕ_k ΔŜ_k + O(N^-1/(nd(1+T))) ≥ C/(1+r)^T + N^-1/(nd(1+T)) log N + O(N^-1/(nd(1+T))) ≥ C/(1+r)^T, N large. For those finitely many N where the last inequality does not hold, we can simply add a sufficient amount of initial capital to obtain a superhedge; this does not change the convergence rate. From the proof of (ii), it is clear that log N can be replaced by an arbitrary function tending to infinity.

§ VARIANCE OPTIMAL HEDGING IN ONE PERIOD

We consider a one-period model satisfying (F) and NA. Moreover, we suppose that d≤ n. Our goal is to approximately hedge a given (non-replicable) claim C. For tractability, the error is measured by the norm of L^2(P^*), where P^* is a fixed EMM; we denote this norm by ‖·‖ throughout this section. In the classical case, this leads to the optimization problem inf_ϕ∈ℝ^d inf_V_0∈ℝ ‖C/(1+r) - V_0 - ϕΔ S_1‖ = inf_ϕ∈ℝ^d ‖C̃ - ϕΔ S_1‖, where C̃ := (C-E^*[C])/(1+r). Note that the infimum over V_0∈ℝ is attained at V_0 = E^*[C/(1+r) - ϕΔ S_1] = E^*[C]/(1+r). The problem (<ref>) is then solved by projecting C̃ orthogonally to the space {ϕΔ S_1 : ϕ∈ℝ^d}, which is closed by Theorem 6.4.2 in <cit.>. For more details on variance-optimal hedging (in particular, on the multi-period problem), we refer to Chapter 10 of <cit.> and the references given there. Now we proceed to our setup, and restrict ϕ to ℤ^d. The minimization w.r.t. V_0 is done as in (<ref>), and we thus have to compute inf_ϕ∈ℤ^d ‖C̃ - ϕΔ S_1‖. We have ‖C̃ - ϕΔ S_1‖^2 = ∑_l=1^n P^*[ω_l](C̃(ω_l) - ϕΔ S_1(ω_l))^2 = ∑_l=1^n (C̃(ω_l) P^*[ω_l]^1/2 - ϕΔ S_1(ω_l) P^*[ω_l]^1/2)^2. The problem (<ref>) thus amounts to computing the element of the lattice {ϕ^1 (Δ S_1^1(ω_1) P^*[ω_1]^1/2, …, Δ S_1^1(ω_n) P^*[ω_n]^1/2)^⊤ + … + ϕ^d (Δ S_1^d(ω_1) P^*[ω_1]^1/2, …, Δ S_1^d(ω_n) P^*[ω_n]^1/2)^⊤ : ϕ∈ℤ^d}⊂ℝ^n closest to the vector (C̃(ω_1) P^*[ω_1]^1/2, …, C̃(ω_n) P^*[ω_n]^1/2)^⊤∈ℝ^n w.r.t. the Euclidean norm. This is an instance of the closest vector problem (CVP), a well-known computational problem with applications in cryptography, communications theory and other fields. The survey paper <cit.> offers an accessible introduction to this subject with many references. By the Pythagorean theorem, the closest lattice point is the lattice point closest to the projection of (<ref>) to the subspace generated by the lattice. A cheap method to compute a (hopefully) close lattice point consists of rounding the coefficients of this projected point to the closest integers. It is well known, though, that the resulting point may be far from optimal.
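The computation can be organized as follows; this is a minimal sketch (in Python) with illustrative placeholder inputs, and with brute-force enumeration standing in for a proper CVP solver such as the CLOSESTPOINT algorithm cited in the example below:

import itertools
import numpy as np

# Illustrative one-period data (placeholders, not the values of Table <ref>):
# d = 2 risky assets, n = 4 states, r = 0.
dS = np.array([[ 1.0, -0.5],      # Delta S_1(omega_1)
               [-1.0,  0.5],
               [ 2.0, -1.5],
               [-2.0,  1.5]])
P_star = np.array([0.25, 0.25, 0.25, 0.25])   # assumed EMM weights
C_tilde = np.array([0.8, -0.6, 1.9, -2.1])    # centered discounted claim

w = np.sqrt(P_star)
B = dS * w[:, None]        # lattice basis: columns of the n x d matrix B
target = C_tilde * w

def l2_error(phi):
    return np.linalg.norm(target - B @ phi)

# classical hedge: orthogonal projection (least squares)
phi_real, *_ = np.linalg.lstsq(B, target, rcond=None)
# naive integer hedge: round the projection coefficients
phi_round = np.round(phi_real)
# exact integer hedge: brute force in a window around the rounded point
phi_cvp = min((phi_round + np.array(delta)
               for delta in itertools.product(range(-3, 4), repeat=2)),
              key=l2_error)
print(l2_error(phi_real), l2_error(phi_round), l2_error(phi_cvp))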
This happens in the following example. We consider a toy example with d=2, |Ω|=4, and r=0, specified in Table <ref>. The numbers are not calibrated to any market data, but are chosen to illustrate the point that a naive approach at integer approximate hedging (as mentioned above) can lead to significant errors. A detailed investigation of integer variance-optimal hedging over several periods in realistic models is left to future work. We wish to approximately hedge N copies of the claim, i.e., the claim NC, for N∈ℕ. First, we computed the classical variance-optimal hedge ϕ^(N) = Nϕ^(1)∈ℝ^2 by projection (see (<ref>)). The relative L^2(P^*)-error ‖NC̃ - ϕ^(N)Δ S_1‖/‖NC̃‖ = ‖C̃ - ϕ^(1)Δ S_1‖/‖C̃‖, which of course does not depend on N, is displayed in the second line of Table <ref>. The third line of Table <ref> contains the maximal position size max_i=1,2 |ϕ^(N),i| = N max_i=1,2 |ϕ^(1),i| in the underlying assets. Then, we solved the integer variance-optimal hedging problem (<ref>) exactly, using the algorithm CLOSESTPOINT described in <cit.>, which is based on the Schnorr-Euchner algorithm <cit.>. CVP is known as a computationally hard problem, with the fastest algorithms having exponential complexity in the dimension. Since our dimension is only |Ω|=4, this was not an issue in this toy example. In more sophisticated examples, a preprocessing step using the LLL algorithm <cit.> might facilitate the task of computing a closest vector. Table <ref> shows the relative L^2(P^*)-error ‖NC̃ - ϕ_CVP^(N)Δ S_1‖/‖NC̃‖ and the maximum position size. Finally, we used a poor man's approach at solving (<ref>) approximately, by simply rounding the positions of the classical hedge ϕ^(N) to the closest integers. From Table <ref>, we see that this works fine for large N, but gives significantly worse results than solving CVP for small N. Note that, in this example, the position sizes of the integer hedge are much smaller than those of the classical hedge. Finally, we mention that computing the so-called covering radius <cit.> of the lattice (<ref>) yields an upper bound for the hedging error for any claim.

§ TOOLS FROM NUMBER THEORY

In this appendix we collect the classical number theoretic theorems we have used in this paper. The theorems of Dirichlet and Kronecker are fundamental results in Diophantine approximation (i.e., the approximation of real numbers by rational numbers). Given α_1,…,α_n∈ℝ and an integer N>1, there are q,x_1,…,x_n∈ℤ with 1≤ q≤ N and |α_i q - x_i| < N^-1/n, 1≤ i≤ n. If θ∈ℝ is an irrational number, then the sequence (nθ mod 1)_n∈ℕ is dense in [0,1]. We also mention here the following classical theorem <cit.>: Let 𝒦⊂ℝ^d be closed, convex, zero-symmetric, and bounded. If the volume of 𝒦 satisfies vol(𝒦) ≥ 2^d, then 𝒦 contains a non-zero point with integral coordinates. We did not apply Theorem <ref> in the rest of the paper, but hint at a possible application. Consider the following one-period portfolio optimization problem with maximum loss constraint, where c>0 and U is some utility function: E[U(ϕΔ S_1)] →max!, ϕ∈ℤ^d, ϕΔ S_1 ≥ -c. Then, Theorem <ref> easily yields a sufficient criterion to ensure that the admissibility set contains a non-zero portfolio. Refinements of Theorem <ref> give several linearly independent portfolios.
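As a small illustration of the admissibility set in the displayed optimization problem, one can search a box of integer portfolios for non-zero points satisfying the maximum loss constraint; the increments and the bound c below are made-up values, and the brute-force search merely illustrates the existence statement that the volume criterion of Theorem <ref> guarantees:

import itertools

# Illustrative data: d = 2, three states, loss bound c (placeholder values).
dS1 = [(1.0, -0.7), (-0.4, 0.9), (0.2, -0.1)]   # Delta S_1(omega) per state
c = 1.5

def admissible(phi):
    # maximum-loss constraint: phi . Delta S_1 >= -c in every state
    return all(phi[0] * x + phi[1] * y >= -c for (x, y) in dS1)

nonzero = [phi for phi in itertools.product(range(-3, 4), repeat=2)
           if phi != (0, 0) and admissible(phi)]
print(nonzero[:5])   # non-zero integer portfolios meeting the constraint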
http://arxiv.org/abs/1708.07661v1
{ "authors": [ "Stefan Gerhold", "Paul Krühner" ], "categories": [ "q-fin.MF", "91G10, 91G20, 11K60" ], "primary_category": "q-fin.MF", "published": "20170825091805", "title": "Dynamic trading under integer constraints" }
Challenges in Inflationary Magnetogenesis: Constraints from Strong Coupling, Backreaction and the Schwinger Effect

Kandaswamy Subramanian^2

^1 Department of Physics & Astrophysics, University of Delhi, New Delhi-110007 India.
^2 IUCAA, Post Bag 4, Pune University Campus, Ganeshkhind, Pune-411007 India.

December 30, 2023
==================================================================================================================

Models of inflationary magnetogenesis with a coupling to the electromagnetic action of the form f^2 F_μνF^μν are known to suffer from several problems. These include the strong coupling problem, the back reaction problem and also strong constraints due to the Schwinger effect. We propose a model which resolves all these issues. In our model, the coupling function f grows during inflation and transits to a decaying phase post inflation. This evolutionary behaviour is chosen so as to avoid the problem of strong coupling. By assuming a suitable power law form of the coupling function, we can also neglect back reaction effects during inflation. To avoid back reaction post-inflation, we find that the reheating temperature is restricted to be below ≈ 1.7 × 10^4 GeV. The magnetic energy spectrum is predicted to be non-helical and generically blue. The estimated present day magnetic field strength and the corresponding coherence length, taking reheating at the QCD epoch (150 MeV), are 1.4 × 10^-12 G and 6.1 × 10^-4 Mpc, respectively. This is obtained after taking account of nonlinear processing over and above the flux freezing evolution after reheating. If we consider also the possibility of a non-helical inverse transfer, as indicated in direct numerical simulations, the coherence length and the magnetic field strength are even larger. In all cases mentioned above, the magnetic fields generated in our models satisfy the γ-ray bound below a certain reheating temperature.
This results in a very low present day strength, which is far below the value required to even seed the dynamo. Hence conformal invariance needs to be broken to obtain fields of the strengths that are observed today. Inflation provides many scenarios where this is possible, one of them being a model in which the inflaton field couples to the kinetic term of the electromagnetic (EM) field <cit.>. However, this model is also beset with several problems, such as the strong coupling and the back reaction problems. The strong coupling problem occurs if the effective electric charge is high during some epoch of inflation, thereby making the perturbative calculations of the EM test field untrustworthy. We will address this issue in more detail in Section <ref>. In some cases, during inflation, the electric and magnetic energy density can overshoot the background energy density. This can end inflation as well as suppress the production of magnetic fields. This is known as the back reaction problem. Models of low energy scale inflation have been suggested where the back reaction problem has been tackled <cit.>. In some of these models, however, the Schwinger effect constraint poses a problem <cit.>. The Schwinger mechanism is the production of charged particles due to electric fields. If the electric field is high enough during inflation, it can generate charged particles. This can lead to the conductivity becoming very high, which affects the EM field and can result in a low strength of the magnetic field today. In this paper, we propose a model to tackle all the above problems besetting inflationary magnetogenesis. In our model, we allow the coupling function (the function of the inflaton which breaks conformal invariance, f) to evolve during inflation and also during a matter dominated era from the end of inflation to reheating. By demanding that the EM field energy density does not back react on the background even after inflation, we get a constraint on the scale of inflation and the reheating temperature. We give several scenarios which satisfy all the three constraints. They all imply a low reheating scale and a blue spectrum for the generated fields, with a sub-horizon coherence length. Therefore one has to consider the nonlinear effects discussed by Banerjee and Jedamzik <cit.> during the radiation dominated era after reheating. The field strength decays while the coherence length increases due to this evolution. The generated field strengths and coherence scales in several of our models are consistent with the potential lower limits from γ-ray observations. The outline of the paper is as follows: In Sections <ref> and <ref>, we provide a general background about the evolution of EM fields during inflation and further state the resultant form of the power spectra of the magnetic and electric energy densities. In Section <ref>, we discuss the possible constraints on magnetogenesis arising out of the Schwinger effect and show how these can be satisfied. In Section <ref>, we address the issue of strong coupling and give a detailed account of the model which solves this problem. Section <ref> gives the predictions arising out of our model, without taking nonlinear effects into consideration. In Section <ref>, we add nonlinear effects into our model, also discussing the mechanism of inverse transfer briefly in this context. Further, in Section <ref>, we discuss whether our results conform with the constraints obtained from gamma ray observations.
Our conclusions are given in Section <ref>.

§ EVOLUTION OF ELECTROMAGNETIC FIELD DURING INFLATION

The standard Maxwell action is invariant under conformal transformations <cit.>, and the FRW metric is conformally flat. Due to this, the electromagnetic (EM) fluctuations decay rapidly, as the square of the scale factor. Hence, breaking of conformal invariance is necessary for inflationary magnetogenesis <cit.>. We start with the action for the EM field in which the conformal invariance is explicitly broken by introducing a time dependent function f^2(ϕ), where ϕ is the inflaton field, coupled to the kinetic term (F^μνF_μν) in the action <cit.>: S = -∫√(-g) d^4x [f^2(ϕ)(1/16π) F_μνF^μν + j^μA_μ] - ∫√(-g) d^4x [(1/2)∂^νϕ∂_νϕ + V(ϕ)]. Here F_μν = ∂_μ A_ν - ∂_ν A_μ, where A_μ is the EM 4-potential. The term j^μ A_μ represents the interaction, where j^μ is the four current density. The second term in the action incorporates the evolution of the inflaton field. In this paper we have adopted Greek indices μ, ν, … to represent space-time coordinates and Roman indices i, j, k, … to represent spatial coordinates. We follow the metric convention g_μν = diag(-,+,+,+). To begin with, we neglect the interaction term and assume that there are no free charges. Varying the action with respect to the EM 4-potential, we obtain the following modified form of Maxwell's equations: [f^2 F^μν]_;ν = (1/√(-g)) ∂/∂x^ν [√(-g) g^μα g^νβ f^2(ϕ) F_αβ] = 0. Varying the action with respect to the scalar field, we obtain the following equation: (1/√(-g)) ∂/∂x^ν [√(-g) g^μν ∂_μϕ] - dV/dϕ = (f/8π)(df/dϕ) F_μν F^μν. Here the EM field is assumed to be a test field and hence it will not affect the evolution of the background, which is dominated by the scalar field potential during inflation. The scalar field ϕ is assumed to be homogeneous, having only time dependence. Adhering to the homogeneity and isotropy of the universe, we work in FRW space-time and further assume it to be spatially flat: ds^2 = -dt^2 + a^2(t)[dx^2+dy^2+dz^2] = a^2(η)[-dη^2+dx^2+dy^2+dz^2]. In this new coordinate system (η,x,y,z), η denotes conformal time. Further, to solve Eq.(<ref>), it is convenient to adopt the Coulomb gauge, ∂_j A^j = 0, A_0 = 0. We can express Eq.(<ref>) for μ=i as A_i” + 2(f'/f) A_i' - a^2 ∂_j ∂^j A_i = 0. Here prime (') denotes a derivative with respect to η and ∂^j is defined as ∂^j ≡ g^jk∂_k = a^-2 η^jk∂_k. Promoting A_i to an operator and imposing the quantization condition, we expand A_i in terms of creation and annihilation operators in Fourier space <cit.>. The evolution equation for the corresponding mode function in Fourier space, A(k,η), can be obtained from Eq.(<ref>), and in terms of a new variable A̅ ≡ a A(k,η) becomes A̅” + 2(f'/f)A̅' + k^2A̅ = 0. Here k is the comoving wave number. We can re-express the above equation in the form of a harmonic oscillator equation with a time dependent frequency. To do this, we further define a new variable 𝒜 ≡ f A̅(k,η). The equation of motion in terms of this new variable is 𝒜”(k,η) + (k^2 - f”/f) 𝒜(k,η) = 0. Before we solve the above equation for a particular f(ϕ), we first define the magnetic and electric energy density, respectively, as ρ_B = ⟨0|T^B_μν u^μ u^ν|0⟩ and ρ_E = ⟨0|T^E_μν u^μ u^ν|0⟩, where ρ_B and ρ_E are defined as the vacuum expectation values of the respective energy momentum tensors T^B_μν and T^E_μν, measured by the fundamental observers. The velocity of these observers, u^μ, is specified as (1/a,0,0,0). In our analysis, we work with the spectral energy densities of the magnetic and electric fields, (dρ_B(k,η)/d ln k) and (dρ_E(k,η)/d ln k).
These spectral energy densities can be obtained from the Fourier transform of ρ_B and ρ_E in Eq.(<ref>) <cit.>: dρ_B(k,η)/d ln k = (1/2π^2)(k^5/a^4) |𝒜(k,η)|^2, dρ_E(k,η)/d ln k = (f^2/2π^2)(k^3/a^4) |[𝒜(k,η)/f]'|^2.

§ MAGNETIC AND ELECTRIC ENERGY DENSITY DURING INFLATION

In this analysis, we assume the background to be de Sitter during inflation. The evolution of the scale factor a(η) with conformal time η during de Sitter is given by a = -1/(H_f η), where H_f = a'/a^2 is the Hubble parameter during inflation. In our normalization, a ≡ a_i = 1 at η_i = -1/H_f, where η_i and a_i are the conformal time and the scale factor at the beginning of inflation, respectively. To solve Eq.(<ref>), we need to know how f(ϕ) evolves with time or with the expansion factor. We assume f evolves as f(a) = f_i (a/a_i)^α. For any model of inflation this form of f can be chosen by adopting appropriate functions of ϕ <cit.>. For the above mentioned form of f(η), Eq.(<ref>) reduces to 𝒜”(k,η) + (k^2 - α(α+1)/η^2) 𝒜(k,η) = 0. The general solution of this equation is given by 𝒜_1 = √(-kη)[c_1(k) J_(-α-1/2)(-kη) + c_2(k) J_(α+1/2)(-kη)]. The constants c_1 and c_2 are determined by matching the solution above with the mode functions corresponding to the Bunch-Davies vacuum in the limit (-kη) →∞: c_1 = √(π/4k) exp(iπα/2)/cos(-πα), c_2 = √(π/4k) exp(iπ(-α+1)/2)/cos(-πα). Using Eq.(<ref>) and Eq.(<ref>), the magnetic energy density spectrum in the super-horizon limit (-kη)≪1 becomes dρ_B/d ln k ≈ (ℱ(n)/2π^2) H_f^4 (-kη)^(4+2n) ≈ (ℱ(n)/2π^2) H_f^4 (k/aH_f)^(4+2n), where n=-α if α≥ -1/2 and n=1+α if α≤ -1/2, and ℱ(n) = π/(2^(2n+1) Γ^2(n+1/2) cos^2(πn)). Similarly, we can determine the spectral electric energy density as dρ_E/d ln k ≈ (𝒢(m)/2π^2) H_f^4 (-kη)^(4+2m). Here 𝒢 is given by 𝒢(m) = π/(2^(2m+3) Γ^2(m+3/2) cos^2(πm)), and m takes the values m=-α+1 if α≥ 1/2 and m=α if α≤ 1/2. First we explore the possibility of a scale invariant magnetic field spectrum. However, we also note that scale invariance of the magnetic field spectrum does not imply scale invariance of the electric field spectrum. There are two possible values of α for a scale invariant magnetic field spectrum, namely α=-3 and α=2. For the first branch, α=-3, we have from Eq.(<ref>) 4+2m=-2. In this case, as (-kη)→ 0 (i.e. towards the end of inflation), the electric energy density increases rapidly as ρ_E ∝ (-kη)^-2→∞. In this case the model runs into difficulties, as the electric energy density would eventually exceed the inflaton energy density in the universe even before sufficient inflation. This problem is known as the back reaction problem. Therefore the α=-3 branch for the generation of magnetic fields is strongly constrained. Also, production of large electric fields can give rise to finite conductivity due to the Schwinger effect during inflation, which can further stop the generation of magnetic fields, as discussed in <cit.>. This motivates us to rather choose the second branch, α=2, for which we have from Eq.(<ref>) 4+2m=2. As (-kη)→ 0, ρ_E ∝ (-kη)^2→ 0. This branch does not suffer from any back reaction effects, although strong coupling could potentially pose a problem here, as discussed in Section <ref>. In Section <ref>, we also suggest a model in which the strong coupling problem is avoided. Also, the issue of finite conductivity arising from particle production due to the Schwinger mechanism does not pose a problem for this branch. This is shown explicitly in the following section.
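The branch structure just described can be made concrete in a few lines; the following sketch (in Python) simply evaluates the super-horizon tilts 4+2n and 4+2m defined above as functions of α:

def magnetic_tilt(alpha):
    # n = -alpha for alpha >= -1/2, n = 1 + alpha for alpha <= -1/2
    n = -alpha if alpha >= -0.5 else 1.0 + alpha
    return 4 + 2 * n

def electric_tilt(alpha):
    # m = -alpha + 1 for alpha >= 1/2, m = alpha for alpha <= 1/2
    m = -alpha + 1.0 if alpha >= 0.5 else alpha
    return 4 + 2 * m

for alpha in (-3.0, 2.0):
    print(alpha, magnetic_tilt(alpha), electric_tilt(alpha))
# alpha = -3: magnetic tilt 0 (scale invariant), electric tilt -2 (rho_E blows up)
# alpha =  2: magnetic tilt 0 (scale invariant), electric tilt  2 (rho_E decays)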
§ SCHWINGER EFFECT CONSTRAINT ON MAGNETOGENESIS

It is well known that electric fields can produce charged particles out of the vacuum via the Schwinger mechanism <cit.>. These particles can give rise to a finite conductivity, depending on the electric field strength, resulting in the electric field being damped and the growth of magnetic fields being frozen <cit.>. Kobayashi and Afshordi <cit.> have determined the conductivity due to charged particles produced via the Schwinger mechanism in de Sitter space-time for different strengths of electric fields. Using the value of the conductivity, they have put constraints on inflationary magnetogenesis models. In our analysis we had neglected the interaction term, but to check the effect of conductivity we need to reinstate the interaction term in the equation of motion. After including the interaction term in Eq.(<ref>) we get A_i” + (2f'/f + 4πaσ/f^2) A_i' - ∂_j ∂_j A_i = 0. In writing this equation we have taken j_i = σ E_i = -σ(1/a)(∂A_i/∂η), where σ denotes the conductivity due to the charged particles produced. Clearly this would reduce to the previous case (i.e. without interaction) in the limit |2f'/f| ≫ |4πaσ/f^2|. We now proceed to evaluate and examine the validity of this inequality for the growing branch, α ≥ 1/2. In this case Eq.(<ref>) translates to 2σπ/(α H_f f^2) ≪ 1 for the Schwinger effect to be unimportant. To check this bound we need to estimate σ/H_f, which in turn has been shown to depend on the electric field strength <cit.>. For the two cases |e_N E|≪ H^2 and |e_N E|≫ H^2, we adopt the results derived in <cit.>. We have defined e_N as the effective charge, e_N = e/f^2. In the limit f → 1, we get back the standard electric charge e. For |e_N E|≫ H_f^2, σ/H_f ≃ sgn(E)(1/12π^2)|e_N|^3 (E/H_f^2) e^(πm^2/|e_N E|). For |e_N E|≪ H_f^2, the results depend on the mass of the charged scalar field m. If m > H_f, we have σ/H_f ≃ (7/72π^2) e_N^2 H_f^2/m^2 = (7/72π^2) e^2 H_f^2/(f^4 m^2), and if m < H_f, we have σ/H_f ≃ (3/4π^2) e_N^2 H_f^2/m^2 = (3/4π^2) e^2 H_f^2/(f^4 m^2). The relations discussed above show how the conductivity behaves in the presence of an electric field. It is worth emphasising that Kobayashi and Afshordi <cit.> have assumed that the charged scalar field and the EM field do not affect the background. To check which one of the above cases is relevant to our analysis, we need to estimate the electric field strength in our case. Using Eq.(<ref>), E^2(a) ≡ 8πρ_E = (8π/f^2)∫ d ln k (𝒢(-α+1)/2π^2) H_f^4 (k/aH_f)^(6-2α). From the above expression we can see that the electric energy density integral diverges for large values of k. We are, however, interested in the length scales that exit the Hubble radius during the inflationary period. This gives a range for the relevant k over which the integral needs to be performed. The largest length scale of interest is the one which exits the horizon at the beginning of inflation. Hence the physical size of this scale should be equal to the Hubble radius during inflation (the Hubble radius is constant during inflation). Hence, the comoving L_upper = 1/(H_f a_i), which gives k_lower = H_f a_i. At a time t during inflation, the scale factor a > a_i and the comoving length exiting the horizon at that time is L = (H_f a)^-1, implying k_upper = H_f a. The smallest length scale of interest is the one which left the horizon at the end of inflation. Evaluating the integral at any time t during inflation, we have L_lower = H_f^-1/a, implying k_upper = H_f a.
Evaluating the integral using the above limits, we have E^2(a) ≈ (8π/f^2)(𝒢(-α+1)/(2π^2(6-2α))) H_f^4 (1-(a_i/a)^(6-2α)). This implies |e_N E|/H_f^2 ≈ (2|e|/f^3)(𝒢(-α+1)/(π(6-2α)))^(1/2) (1-(a_i/a)^(6-2α))^(1/2). Since a > a_i, we can neglect a_i/a in the above expression for 1/2 ≤ α < 3. For α=3, Eq.(<ref>) modifies to |e_N E|/H_f^2 = (2|e|/f^3)(𝒢(-2) ln(a/a_i)/π)^(1/2). Further assuming that f=1 at the beginning of inflation and that it grows during inflation, we can infer that |e_N E| ≪ H_f^2 is valid throughout the inflationary regime for 1/2 ≤ α ≤ 3. As we saw for the case |e_N E| ≪ H_f^2, there are two possibilities: (i) m > H_f and (ii) m < H_f. For m > H_f, we have from Eq.(<ref>) 2σπ/(α H_f f^2) ≃ (14/72πα) e^2 H_f^2/(f^6 m^2). Since f>1 during inflation and also m>H_f above, we can infer from the above expression that 2σπ/(α H_f f^2) is always less than 1. Hence, Eq.(<ref>) is valid for m > H_f. This implies that magnetic field generation will not be affected in this case, even if we consider the conductivity of the medium. On similar lines, for the case of m < H_f, we have from Eq.(<ref>) 2σπ/(α H_f f^2) ≃ (6/4απ) e^2 H_f^2/(f^6 m^2). Thus for m < H_f, even if initially 2σπ/(α H_f f^2) ≫ 1, as f ∝ a^α grows rapidly in time, one would have 2σπ/(α H_f f^2) ≪ 1 and the effect of Schwinger conductivity would become negligible. Keeping in mind the validity of Eq.(<ref>), we get the following bound for this case: H_f > m > √(6e^2/(4απ)) H_f/f^3. For m < H_f, if m satisfies the above condition, conductivity will not affect inflationary magnetogenesis. This will always be satisfied as f grows much larger than unity. Thus we can conclude that our selected branch (1/2 ≤ α ≤ 3) is not affected by the finite conductivity of the charged particles produced due to the Schwinger mechanism. The back-reacting branch α = -3 is indeed strongly constrained, as already pointed out in <cit.>.

§ SOLVING THE STRONG COUPLING PROBLEM

We mentioned in Section <ref> that the α=2 branch keeps the electric field under control. We also saw above that it does not face the strong constraints imposed by the Schwinger effect. However, it suffers from a variant of the strong coupling problem. This problem was first pointed out by Demozzi, Mukhanov and Rubinstein <cit.>. It states that if f grows during inflation and settles down to f_f=1 at the end of inflation, the value of f would need to be very small at the beginning of inflation (f=f_i). The effective charge, defined by e_N = e/f_i^2, would then be very high at the beginning. This would imply that the coupling between charged particles and the EM field would be unacceptably strong. Alternatively, if f decreases from a large value to f_f = 1, one could in principle avoid this problem. However, we already pointed out that a scenario where f decreases rapidly suffers from the back reaction problem. On the other hand, if we assume that the value of f at the beginning of inflation is f_i=1, then there will be a large value of f at the end of inflation and hence a very small value of the coupling constant. This as such will not be a problem <cit.>. However, we would need to ensure that after the end of inflation, the value of f decreases to its pre-inflationary value to restore the standard couplings. This should happen within a time scale such that it does not affect known standard physics. If this is achieved, one would have found a way of solving the strong coupling problem.
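The virtue of starting with f_i=1 can be seen directly from the effective charge: for any coupling function that grows from unity during inflation and then decays back to unity by reheating, e_N = e/f^2 never exceeds its standard value. A minimal sketch (in Python; the growth exponent, e-fold numbers and the value of e are assumptions chosen only for illustration):

import numpy as np

# Illustrative parameters: alpha = 2, N = 50 inflationary e-folds,
# N_r = 20 e-folds from the end of inflation to reheating, so f(a_r) = 1.
e = 0.3                      # stand-in for the electromagnetic coupling
alpha, N, N_r = 2.0, 50.0, 20.0
beta = alpha * N / N_r       # decay exponent fixed by f(a_r) = 1

def f(x):
    # x = ln(a/a_i): growing branch during inflation, decaying branch after
    return np.exp(alpha * x) if x <= N else np.exp(alpha * N - beta * (x - N))

for x in np.linspace(0.0, N + N_r, 8):
    fx = f(x)
    print(f"ln(a/a_i) = {x:5.1f},  f = {fx:.3e},  e_N = e/f^2 = {e / fx**2:.3e}")
# e_N equals e at the start and again at reheating, and is tiny in between,
# so the effective coupling never becomes strong.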
In our model, we assume that f increases as a power law during de Sitter inflation, beginning with a value of unity, after which it decays back to its pre-inflationary value during a matter dominated phase that lasts till reheating. The form of the power law during the inflationary phase is given by f = f_1 ∝ a^α for a_i ≤ a ≤ a_f, and that in the post inflationary epochs by f = f_2 ∝ a^-β for a_f ≤ a ≤ a_r. Here a_f and a_r denote the scale factor at the end of inflation and at the end of reheating, respectively. We consider α > 1/2 in our analysis and keep β general. Since the value of the function f is taken to be unity at the beginning of inflation, its value at the end of inflation is f_f = (a_f/a_i)^α. Further, demanding the continuity of f at the end of inflation, we have f = f_1 = [a/a_i]^α for a ≤ a_f and f = f_2 = [a_f/a_i]^α [a/a_f]^-β for a_f ≤ a ≤ a_r. In order to calculate the evolution of the vector potential after inflation, we solve Eq.(<ref>). Since we are interested in the super-horizon scales, we express the solution (denoted by A̅_2) as A̅_2 = d_1 + d_2 ∫_η_f^η (1/f_2^2) dη. Here η_f is the conformal time at the end of inflation. Expressing the solution in terms of a, we make use of the dependence of a on η during matter dominance, a = (H_f^2 a_f^3/4)(η + 3/(a_f H_f))^2. We arrive at this expression by ensuring the continuity of a and a' at the end of inflation. The solution becomes A̅_2 = d_1 + d_2 ∫_a_f^a da/(a_f^2 H_f √(a/a_f) f_2^2). As 𝒜 = f A̅ determines the growth or decay of the magnetic and electric fields, the constant solution A̅_2 = d_1 is a decaying mode when f decreases. Thus we need a non-zero d_2 to get growing modes during the epoch after inflation, when f decreases back to unity. To find the constants d_1 and d_2, we demand that the value of A̅ as well as its first derivative (with respect to conformal time) be continuous at the end of inflation. Using (<ref>), the expression for A̅ during inflation (A̅_1) can be obtained as A̅_1 = 𝒜_1/f_1 = c_1 2^(1/2+α) k^-α H_f^α/Γ(1/2-α) [1 - (k/aH_f)^2/(4(1/2-α)) + (k/aH_f)^4/(32(1/2-α)(3/2-α))] + c_2 k^-α H_f^α (k/aH_f)^(2α+1)/(2^(α+1/2) Γ(3/2+α)). Here we have included the higher order terms as well, obtained by expanding the Bessel functions in the super-horizon limit. We can see from the expression above that the first term in the c_1 branch is time independent.
If we do not consider the higher order terms in the c_1 branch, it would not contribute to the coefficient d_2 during derivative matching. In fact, the contribution of these higher order terms carries more weight than the c_2 branch. Matching the expression above and its derivative to the expression for A̅_2 given in Eq.(<ref>) and its derivative, we get the coefficients d_1 and d_2 as d_1 = c_1 2^(1/2+α)(k/H_f)^-α/Γ(1/2-α) (1 + (k/a_f H_f)^2/(2(2α-1)) + (k/a_f H_f)^4/(8(2α-1)(2α-3))) + c_2 2^(-1/2-α)(k/H_f)^-α/Γ(3/2+α) (k/a_f H_f)^(2α+1), d_2 = [2^(1/2+α)(k/H_f)^-α/Γ(1/2-α) c_1 (-k(k/a_f H_f)/(2α-1) - k(k/a_f H_f)^3/(2(2α-1)(2α-3))) + c_2 2^(-1/2-α)(2α+1)(-k)(k/H_f)^-α/Γ(3/2+α) (k/a_f H_f)^(2α)] f_2^2(a_f). After substituting d_1 and d_2 in Eq.(<ref>), we use the solution for A̅_2 in Eq.(<ref>) and Eq.(<ref>) for the magnetic and electric energy densities. Further retaining only the dominant terms, the post inflationary magnetic and electric energy density spectra at reheating reduce to dρ_B(k,η)/d ln k|_R ≈ (2^(2α+1)/(32π cos^2(πα))) (1/(a_r^4 Γ^2(1/2-α))) [k^(-2α+8) H_f^(2α-4)/((1/2-α)^2 a_f^4 (2β+1/2)^2)] (a/a_f)^(4β+1), dρ_E(k,η)/d ln k|_R ≈ (2^(2α+1)/(32π cos^2(πα))) (1/(a_r^4 Γ^2(1/2-α))) [k^(-2α+6) H_f^(2α-2) f_2^4(a_f)/(a_f^2 (1/2-α)^2)]. If we consider the α = 2 case, which implies a scale invariant magnetic and electric energy density spectrum during inflation, we get a blue spectrum, with dρ_B(k,η)/(d ln k) ∝ k^4, after inflation. Both the magnetic and electric energy densities increase after inflation in this case, with the electric energy density dominating over the magnetic energy density. This is evident from Fig.(<ref>), where we show the evolution of ρ_B (red dashed line), ρ_E (blue dashed-dotted line) and ρ_ϕ (black solid line). It is more convenient to express the ratios of scale factors in terms of the number of e-foldings. Let N denote the number of e-foldings from the beginning to the end of inflation and N_r denote the number of e-foldings from the end of inflation to the reheating era. Thus the ratios of scale factors become a_f/a_i = e^N and a_r/a_f = e^N_r. By demanding f(a_r)=1, we can express β = α N/N_r. We need to ensure that the total energy density in the electric and magnetic fields does not exceed the total energy density in the inflaton field. During inflation, this condition is always satisfied for the particular values of α that we consider, for which the back reaction problem does not exist. The same is shown in Fig.(<ref>) for α=2, the scale invariant case. The total electromagnetic energy density at the end of reheating is ρ_E+ρ_B|_R = ∫_(a_i H_f)^(k_r) (dρ_E(k,η)/d ln k) d ln k + ∫_(a_i H_f)^(k_r) (dρ_B(k,η)/d ln k) d ln k ≈ (C+D) H_f^4 e^(α(2N+N_r)-7N_r), where k_i = a_i H_f to k_f = a_f H_f represents the modes which leave the horizon during inflation. Post-inflation, in the matter dominated era, a range of modes re-enter the horizon. The mode which enters the horizon at reheating is k_r = a_r H_r. Therefore, at reheating, the super-horizon modes which have been amplified have wave numbers between k_i and k_r. In the above expression we have substituted k_r = a_r H_r = a_f H_f e^(-N_r/2). The value of H_r is obtained by evolving the Hubble parameter during matter dominance. The coefficients C and D are, respectively, C = 2^(2α+1)/(32π (1/2-α)^2 (-2α+6) Γ^2(1/2-α) cos^2(απ)), D = 2^(2α+1)/(32π (1/2-α)^2 (2β+1/2)^2 (-2α+8) Γ^2(1/2-α) cos^2(απ)). The background energy density at the end of reheating is given by ρ_ϕ|_r = (g_r π^2/30) T_r^4. Here g_r represents the relativistic degrees of freedom at reheating and T_r is the temperature at reheating.
Imposing the condition ρ_B + ρ_E < ρ_ϕ and substituting the value of ρ_B + ρ_E from Eq.(<ref>), we get the following constraint:

2α(N+N_r) - (7+α)N_r < ln[π^2 g_r/(30(C+D))] - 4 ln(H_f/T_r) .

In the above expression, N and N_r are not independent of H_f and T_r. They are related by the fact that the present observable universe has to be inside the horizon at the beginning of inflation, to explain the isotropy of the Cosmic Microwave Background Radiation (CMB). This condition implies,

(a_0 H_0)^{-1} < (a_i H_f)^{-1}   ⟹   (1/H_0)(a_r/a_0)(a_f/a_r)(a_i/a_f) < 1/H_f .

Here, a_0 and H_0 represent the scale factor and the Hubble parameter today, respectively. By relating the ratios of scale factors to the number of e-folds, we get the following constraint:

N + N_r > 66.9 - ln(T_r/H_f) - (1/3) ln(g_r/g_0) .

To get the above expression, we have assumed a radiation dominated era from reheating till today. The scale factors a_0 and a_r are related by,

a_0/a_r = (g_r/g_0)^{1/3} (T_r/T_0) .

Here g_0 represents the relativistic degrees of freedom today and T_0 is the CMB temperature today. N_r can be written in terms of H_f and T_r using the inflaton energy density at the end of inflation (ρ_inf) and at reheating (ρ_ϕ|_r),

N_r = (1/3) ln(ρ_inf/ρ_ϕ|_r) = (1/3) ln[90 H_f^2/(8πG π^2 g_r T_r^4)] .

Substituting Eq.(<ref>) and Eq.(<ref>) in Eq.(<ref>) and writing N_r in terms of H_f and T_r, we get the constraint

ln[ (C+D)/g_r (g_0/g_r)^{2α/3} (g_r π^2/30)^{(7+α)/3} ] + 134α + (2α+4) ln(H_f/T_r) - [4(7+α)/3] ln(√(3H_f^2/(8πG)) (1/T_r)) < 0 .

If the reheating temperature and the scale of inflation satisfy the above bound, our prescribed model will be able to circumvent the strong coupling problem during inflation and back reaction in the post inflationary era. From the inequality in Eq.(<ref>), for a particular value of α, once we fix T_r we can get an upper bound on H_f, or vice-versa. Once we know the above three quantities, we can proceed to calculate N_r from Eq.(<ref>). This value of N_r is also an upper bound. Together with the above information, we can estimate N from the bound in Eq.(<ref>). We can further estimate the proper coherence length at reheating using the expression below,

L_c = a_r [ ∫_0^{k_r} (2π/k) (dρ_B(k,η)/d ln k) d ln k ] / [ ∫_0^{k_r} (dρ_B(k,η)/d ln k) d ln k ] .

For this coherence length we can estimate the magnetic field strength at reheating using,

B[L_c] = √(8π dρ_B(k,η)/d ln k) |_{k = 2π a_r/L_c} .

§ PREDICTED MAGNETIC FIELD STRENGTH AND COHERENCE LENGTH DUE TO FLUX FREEZING EVOLUTION

After reheating, the universe is composed of a conducting relativistic plasma. The electric fields produced in the previous epochs get shorted out. As far as the magnetic fields are concerned, several processes affect their evolution. The simplest of them is the expansion of the universe, where B ∝ 1/a^2. However, on sub-Hubble scales, nonlinear processes in the plasma also play an important role in their evolution. In this section we consider only the former; the effect of nonlinear processes will be considered in the next section. We consider reheating at different temperatures (T_r). The lowest reheating temperature we consider is 5 MeV, as reheating below this energy scale is ruled out by Big Bang Nucleosynthesis constraints <cit.>. We carry out the analysis as described in section <ref> and estimate the coherence length (L_c) and magnetic field strength (B[L_c]) at reheating. To estimate the present day value of the magnetic field strength and its corresponding coherence length, we consider their evolution to be given by,

L_c0 = L_c (a_0/a_r) ,   B_0[L_c0] = B[L_c] (a_0/a_r)^{-2} ,

where L_c and B[L_c] are as in Eq.(<ref>) and Eq.(<ref>) respectively.
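The pieces above can be strung together numerically. The sketch below evaluates N_r, the CMB e-fold bound, and the pure flux-freezing scaling of (B, L_c) to today; the values of g_r, g_0 and the sample (H_f, T_r) point are illustrative assumptions, not the paper's fit.

```python
import numpy as np

M_pl = 1.22e19             # Planck mass in GeV, so G = 1/M_pl^2 in natural units
T_0  = 2.35e-13            # CMB temperature today in GeV (about 2.725 K)
g_r, g_0 = 10.75, 3.91     # illustrative relativistic degrees of freedom

def N_reheat(H_f, T_r):
    """N_r = (1/3) ln(rho_inf / rho_phi|_r), with rho_inf = 3 H_f^2/(8 pi G)."""
    rho_inf = 3.0 * H_f**2 * M_pl**2 / (8.0 * np.pi)
    rho_reh = (g_r * np.pi**2 / 30.0) * T_r**4
    return np.log(rho_inf / rho_reh) / 3.0

def efold_bound(H_f, T_r):
    """CMB horizon condition: N + N_r must exceed this value."""
    return 66.9 - np.log(T_r / H_f) - np.log(g_r / g_0) / 3.0

def flux_freeze_today(B_r, L_r, T_r):
    """Pure flux freezing: B scales as a^-2 and the proper length as a,
    with a_0/a_r = (g_r/g_0)^(1/3) T_r/T_0."""
    a0_ar = (g_r / g_0) ** (1.0 / 3.0) * T_r / T_0
    return B_r * a0_ar**-2, L_r * a0_ar

H_f, T_r = 1.0e5, 100.0    # a hypothetical low-scale-inflation point, in GeV
print(N_reheat(H_f, T_r), efold_bound(H_f, T_r))
# flux_freeze_today(B_r, L_r, T_r) would then map hypothetical reheating-era
# values of the field strength and coherence length to the present day.
```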
Our analysis is repeated for a number of possible reheating temperatures, from T_r = 5 MeV to T_r = 1 TeV. Of particular interest are the epochs of reheating at the QCD phase transition (T_r = 150 MeV) and at the electroweak phase transition (T_r = 100 GeV). These cases have been considered for α = 2, which gives a scale invariant magnetic field spectrum during inflation. Other values of α can also be considered if we allow a departure from scale invariance during inflation, while at the same time ensuring that the backreaction problem does not arise. As an example we have considered α = 3. The latter case gives a dρ_B/d ln k ∝ k^2 magnetic spectrum, instead of a k^4 spectrum, for super-horizon modes in the post inflationary era. We note that for GUT scale inflation (10^14 GeV), satisfying all the constraints requires a reheating temperature ≈ 10^{-8} GeV, which falls below the temperature prescribed by the BBN bound. The results are given in Table <ref>. As evident from the table, for a reheating temperature of 100 GeV, the magnetic field strength is 5.6 × 10^{-7} G at a coherence length of 8.8 × 10^{-10} Mpc. The magnetic field strength and the coherence length both increase as the reheating temperature decreases from 1000 GeV to 5 MeV. Deviating from scale invariance during inflation, for α = 3, we find that at the QCD phase transition (150 MeV) the magnetic field strength achieved is 9.2 × 10^{-9} G at a coherence length of 9.7 × 10^{-7} Mpc. The magnetic field strength is lower than that obtained for α = 2, although the coherence length obtained is larger. Note that in all our models the magnetic spectrum is blue and, in addition, the coherence scale becomes smaller than the Hubble radius. Therefore it becomes necessary to consider the nonlinear processing and damping of the magnetic field, over and above its flux freezing evolution. This is taken up in the next section.

§ NONLINEAR EVOLUTION OF MAGNETIC FIELD

The nonlinear evolution of tangled small scale magnetic fields has been extensively discussed by Banerjee and Jedamzik <cit.> (see also <cit.>). We first summarize their arguments and then apply their results to our magnetogenesis scenarios. Nonlinear processing becomes important when the Alfvén crossing time (η_NL = (k V_A(k))^{-1}) of a mode becomes smaller than the Hubble time (H^{-1}). Here V_A(k) ≡ √((dρ_B/d ln k)/(ρ+p)) is the Alfvén velocity at k. The energy density and pressure of the relativistic species are denoted by ρ and p. The Lorentz force due to the field can then drive fluid motions up to the Alfvén velocity within an expansion time, provided the viscosity of the fluid is small enough. This is indeed the case in most epochs. The fluid Reynolds number R_e is then typically large, leading to a cascade of energy to smaller and smaller scales, in other words to a state of MHD turbulence. When such a state is achieved, energy at the coherence scale is transferred to small scales, down to the dissipation scale. Since the kinetic energy comes from the initial magnetic energy and there is no other energy source, the magnetic energy also decays. For a blue spectrum, since the Alfvén crossing time increases with scale L, the energy at the next largest scale starts dominating the spectrum, and this scale becomes the new coherence length. The detailed evolution needs to be studied numerically, including the effect of intervening epochs when viscosity is important.
The net result of the nonlinear processing during the radiation dominated epochs can be summarized by the following evolution equations for the proper magnetic field B^NL[L_c] and the proper coherence length L_c^NL <cit.>:

B_0^NL[L_c0^NL] = B_0[L_c0] (a_m/a_r)^{-p} ,   L_c0^NL = L_c0 (a_m/a_r)^{q} ,

where a_m is the scale factor at radiation-matter equality, p ≡ (n+3)/(n+5) and q ≡ 2/(n+5). Here, n is defined in such a way that dρ_B/d ln k ∝ k^{n+3}. We have also used the fact that the expansion factor during radiation domination varies as a(η) ∝ η. We consider the form of evolution mentioned above up to matter-radiation equality, after which L_c^NL grows only logarithmically <cit.>. We neglect the logarithmic growth in L_c^NL and evolve the two quantities till today in a similar way as in Section <ref>. The results of this calculation are shown in Table <ref>. Compared to the previous case, where nonlinear effects are not considered, we see that the coherence length is larger and the magnetic field strength is lower for the same reheating temperature. For reheating at the QCD phase transition (150 MeV), and taking α = 2 (or n = 1), the magnetic field strength comes out to be 1.4 × 10^{-12} G at the corresponding coherence length of 6.1 × 10^{-4} Mpc. This value of the coherence length increases for α = 3 to 2.8 × 10^{-2} Mpc, while the corresponding magnetic field strength decreases to 3.2 × 10^{-13} G. Numerical simulations by Brandenburg et al. <cit.> show a slower decay of non-helical magnetic fields. This slower decay of the field is also accompanied by an apparent inverse transfer of the magnetic energy to larger scales <cit.>, which is usually thought to occur only for helical fields. It would be of interest to examine the consequences of this non-helical inverse transfer for the predicted field strengths and coherence scales. The simulations of <cit.> start from a blue spectrum with n = 2, and show that the magnetic field energy decays as B_0^S[L_c0^S] = B_0[L_c0] (a_m/a_r)^{-0.5} and the coherence length increases as L_c0^S = L_c0 (a_m/a_r)^{0.5}. These simulations are motivated by studying the decay of causally generated fields, which are typically expected to have such a spectrum on infra-red scales. Our inflation generated fields, although having blue spectra, have more power on large scales, with n = 1 for the α = 2 case and n = -1 for the α = 3 case, and thus the magnetic energy could perhaps decay even more slowly. Nevertheless, to get an idea of what such a non-helical inverse transfer would imply for the field strengths and coherence scales, we simply adopt the scalings found by Brandenburg et al. <cit.>; the two sets of scalings are collected in the short sketch below. The results are shown in Table <ref> for different reheating temperatures. The analysis is the same as in Section <ref>. We note that both the coherence length and the magnetic field strength are larger than those estimated using only the standard nonlinear evolution. For example, at a reheating temperature around the QCD phase transition (150 MeV), the coherence length increases from 6.1 × 10^{-4} Mpc to 1.1 × 10^{-2} Mpc. Further, the magnetic field strength increases to 4.4 × 10^{-11} G from 1.4 × 10^{-12} G.
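A minimal sketch of the two decay prescriptions, written directly from the exponents quoted above; the function names are ours, not the paper's.

```python
def nonlinear_decay(B0, L0, am_over_ar, n):
    """Standard turbulent-decay scalings: B -> B (a_m/a_r)^-p and
    L -> L (a_m/a_r)^q, with p = (n+3)/(n+5) and q = 2/(n+5)."""
    p = (n + 3.0) / (n + 5.0)
    q = 2.0 / (n + 5.0)
    return B0 * am_over_ar**-p, L0 * am_over_ar**q

def inverse_transfer(B0, L0, am_over_ar):
    """Scalings read off from the non-helical inverse-transfer simulations:
    B decays as (a_m/a_r)^-0.5 while L grows as (a_m/a_r)^0.5."""
    return B0 * am_over_ar**-0.5, L0 * am_over_ar**0.5

for n in (1, -1):                      # n = 1 (alpha = 2), n = -1 (alpha = 3)
    print(n, (n + 3.0) / (n + 5.0), 2.0 / (n + 5.0))
# n = 1 gives p = 2/3, q = 1/3; n = -1 gives p = q = 1/2, i.e. exactly the
# inverse-transfer exponents, consistent with the alpha = 3 case being
# unchanged by the inverse transfer (as noted in the text below).
```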
The numbers quoted above are for α = 2 (n = 1). For the case of α = 3 (n = -1), the evolution relations for the magnetic field strength and the corresponding coherence length are equivalent to those obtained in <cit.>. Hence, the values do not change after incorporating the inverse transfer. We also note that, though the magnetic field strength is larger after considering the inverse transfer, it still remains far lower than the strength obtained when nonlinear effects are not taken into account. The corresponding coherence length, however, is considerably enhanced. We recall that for reheating at 150 MeV, the magnetic field strength without accounting for nonlinear decay is 1.3 × 10^{-6} G at a coherence length of 6.5 × 10^{-7} Mpc. Thus the field strength after taking account of the nonlinear decay with inverse transfer is much smaller (4.4 × 10^{-11} G), and the coherence scale is much larger (1.1 × 10^{-2} Mpc).

§ CONSTRAINTS FROM Γ-RAY OBSERVATIONS

We now ask whether the generated fields can satisfy the constraints from the gamma ray bounds. The γ-ray observations of TeV blazars suggest a lower limit on the strength of the intergalactic magnetic fields of the order of 10^{-15} G at a comoving coherence length of 0.1 Mpc <cit.>. This bound was obtained from the non-detection of secondary gamma ray emission by the Fermi telescope. The above mentioned lower limit was obtained for the case L_C ≫ L_IC, where L_C is the proper coherence length of the magnetic field and L_IC is the mean free path of the charged particles that undergo inverse Compton scattering. For L_C ≪ L_IC, the lower limit on the magnetic field strength scales with the coherence length as (L_c)^{-1/2}, i.e. it increases as the coherence length decreases. In Fig.(<ref>) and Fig.(<ref>) we compare the lower bound on the fields obtained from the γ-ray observations (black solid line) with the predicted field strength from our models at the coherence scale, for different reheating temperatures (red dashed line). The black line in the figures, which represents the gamma ray bound, was evaluated by scaling 10^{-15} G at 0.1 Mpc to the coherence length obtained for a particular reheating temperature; the scaling relation, as mentioned above, goes as (L_c)^{-1/2}. Fig.(<ref>) is for the case when only the flux freezing evolution is taken into account (without nonlinear processing). The left panel of Fig.(<ref>) is for the case when the standard nonlinear evolution is taken into account, while the right panel assumes the scalings implied by a possible nonlinear inverse transfer. All the figures assume α = 2, where the spectra are scale invariant during the inflationary era and transit to a blue spectrum with n = 1 by reheating. We can see from Fig.(<ref>) that, for the range of reheating temperatures we have considered, the magnetic field strength obtained for a particular coherence length lies well above the lower limit prescribed by the gamma ray observations. For the α = 2 case, the reheating temperature can range from a minimum value of 5 MeV to a maximum of 1.7 × 10^4 GeV. For the α = 3 case, this limit decreases to 6.7 GeV. This is because the magnetic energy density diverges during inflation, and preventing it from exceeding the inflaton energy density requires low-scale inflation; further, to satisfy the constraint in Eq.(<ref>), the reheating temperature also has to be low. Taking into account the nonlinear evolution tightens these constraints for both α = 2 and α = 3. The change in the constraint for the α = 2 case can be seen in Fig.(<ref>). The shaded region shown in the figure corresponds to the values of the magnetic fields allowed by the gamma ray bounds.
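The comparison against the blazar bound amounts to evaluating a one-line scaling law; a hedged sketch, with the reference point and scaling taken from the text above:

```python
def gamma_ray_lower_bound(L_c_Mpc):
    """Lower limit on the intergalactic field from TeV-blazar observations:
    1e-15 G at 0.1 Mpc, growing as (L_c)^-1/2 for smaller coherence lengths."""
    B_ref, L_ref = 1.0e-15, 0.1
    return B_ref if L_c_Mpc >= L_ref else B_ref * (L_ref / L_c_Mpc) ** 0.5

# alpha = 2 with standard nonlinear decay, reheating at the QCD transition:
print(gamma_ray_lower_bound(6.1e-4))   # ~1.3e-14 G, below the predicted 1.4e-12 G
```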
If non-helical decaying MHD turbulence does not have an inverse transfer (left panel of Fig.(<ref>)), the reheating temperature has to be below ≈ 7 GeV for the γ-ray bound to be satisfied. On the other hand, if one takes into account the inverse transfer as discussed in Section <ref>, we saw that larger magnetic field strengths are possible. Then, from the right panel of Fig.(<ref>), we see that the limits on the reheating temperature get relaxed. In this case, the reheating temperature allowed by the gamma ray bound increases to ≈ 4 × 10^3 GeV. For the α = 3 case, the maximum reheating temperature is not drastically affected by considering the nonlinear evolution: it decreases from 6.7 GeV to 4 GeV. This limit is also not affected by including the inverse transfer phenomenon, and T_r remains at 4 GeV. In case the lower limit on the magnetic field strength from γ-ray observations is decreased to 10^{-17} G at 1 Mpc, as discussed in <cit.>, our constraint on the reheating temperature will be relaxed further. Thus we see that resolving the strong coupling problem (by demanding f(ϕ) to decay to its pre-inflationary value) and avoiding back reaction in the post inflationary era require both a low scale of inflation and a low scale of reheating. These models nevertheless lead to magnetic field strengths and coherence scales which are consistent with the γ-ray lower limits for reheating temperatures up to about a few thousand GeV. We note that a few earlier studies have also considered low-scale inflation in the context of inflationary magnetogenesis. Ferreira et al. <cit.> have discussed a scenario wherein they consider the back reacting branch of f(ϕ) during a period of low-scale inflation. Although the model successfully satisfies the gamma ray bound and avoids both strong coupling and back reaction, it violates the Schwinger effect constraint as discussed by Kobayashi and Afshordi <cit.>. Our model, on the other hand, does not run into Schwinger effect inconsistencies, as discussed in Section <ref>.

§ DISCUSSION AND CONCLUSIONS

We have studied here the generation of magnetic fields during the inflationary era. As the standard Maxwell action is conformally invariant, electromagnetic (EM) field fluctuations decay with expansion as 1/a^2. Thus, to generate fields of significant present day strength, breaking of conformal invariance is imperative. One of the ways this can be done is by coupling the EM action to a function of the inflaton (the f^2FF model <cit.>). Although such a model can lead to the generation of magnetic fields with present day strengths of interest, it suffers from several potential problems. These have been referred to as the strong coupling problem, the back reaction problem and the Schwinger effect constraint. In Section <ref>, we have shown that in the f^2FF model there are two possible evolutions of f(ϕ) that generate a scale invariant magnetic field spectrum. In the first case, f increases as a^2; but if one demands that the effective electric charge is restored to its standard value at the end of inflation, the model suffers from the strong coupling problem at the beginning of inflation. In the second case, f decreases as a^{-3}. In this case, however, the resultant electric field spectrum diverges. Hence, the model suffers from the back reaction problem and from Schwinger effect inconsistencies. We have proposed a model which evades all the above mentioned problems. In our model, during inflation, f increases as a power law with an exponent α.
We constrain α to be greater than 1/2 and also such that there are no back reaction effects during inflation. The coupling function f is assumed to begin with a standard value of unity, and it increases to a large value at the end of inflation. To recover the standard coupling (e), we introduce a transition in the evolution of f at the end of inflation. During the second part of the evolution, in the post-inflationary matter dominated era, f decreases as a power law with an exponent β. By demanding that the EM energy density does not back react on the background post inflation as well, we have put a bound on the reheating temperature and the scale of inflation. We note that as the reheating temperature increases, the scale of inflation decreases, according to the constraint in Eq.(<ref>). Hence, the maximum reheating temperature possible is when it becomes equivalent to the scale of inflation. For the scale invariant magnetic spectrum during inflation, i.e. α = 2, the upper bound on the reheating temperature obtained is ≈ 10^4 GeV. For each reheating temperature and scale of inflation, we have estimated the present day magnetic field strength and the corresponding coherence length. We have considered different cases in the evolution of the magnetic field after reheating. To begin with, we do not consider any nonlinear effects arising due to the interaction of the magnetic modes with the plasma. For this case, we obtain fields of the order of 1.3 × 10^{-6} G and coherence length scales of the order of 6.5 × 10^{-7} Mpc for a reheating temperature at the QCD epoch (150 MeV) (refer to Table <ref>). From Fig. <ref>, we see that for all reheating temperatures below about 10^4 GeV, the magnetic field strengths and coherence scales are large enough to satisfy the gamma ray bound. Taking the nonlinear evolution and the resulting turbulent decay into account, we find that the γ-ray observations lead to an upper bound on the reheating temperature of about 7 GeV (left panel of Fig.(<ref>)). A model which does satisfy the γ-ray constraint is one where reheating occurs at the QCD epoch. The coherence length is enhanced to 6.1 × 10^{-4} Mpc and the magnetic field strength is decreased to about 1.4 × 10^{-12} G. However, there is also the phenomenon of inverse transfer <cit.> (predicted by numerical simulations) of non-helical MHD turbulence which needs to be taken into account. When this is also incorporated in our calculations, we see an improvement in the constraint on the reheating temperature and on the strength of the magnetic fields. The upper bound on the reheating temperature increases to ≈ 4 × 10^3 GeV. For a reheating temperature of 100 GeV, the coherence length is enhanced to 7.3 × 10^{-4} Mpc from 8.8 × 10^{-10} Mpc (which is obtained if we assume only pure flux freezing evolution). On the other hand, the magnetic field strength at the above mentioned coherence scale decreases to 6.8 × 10^{-13} G from 5.6 × 10^{-7} G. One possible scenario whereby the coupling function f transits from a growing function to one that decays back to unity can perhaps be realized in models of hybrid inflation <cit.>. In hybrid inflation, two interacting scalar fields, ϕ and σ, are employed. During inflation, the inflaton field (ϕ) slow rolls and the other field (σ) is static. To end inflation, ϕ triggers σ to rapidly roll down the potential. The transition in f can be arranged by making f a function of both these scalar fields.
The function f could increase as ϕ slow rolls during inflation, but at the end of inflation it shifts to a function of σ. It is brought down to its initial value as σ cascades down. More work is needed to explore this idea further. In our analysis, we have only looked at the generation of non-helical magnetic fields. Helical magnetic fields, on the other hand, would provide a much bigger advantage, since the inverse cascade leads to an even milder decrease in the strength of the magnetic fields while increasing their coherence lengths. We intend to look at this case in the future. There is also the question of gravitational wave production from the anisotropic stresses of these generated magnetic fields, as discussed by Caprini and Durrer <cit.>. It would be of interest to study gravitational wave generation in our model and check whether the waves could be detected in future generations of detectors, and thus probe magnetogenesis.

§ ACKNOWLEDGMENTS

RS, SJ and TRS acknowledge the facilities at IRC, University of Delhi, as well as the hospitality and resources provided by IUCAA, Pune, where part of this work was carried out. The research of RS is supported by an SRF from CSIR, India under grant 09/045(1343)/2014-EMR-I. The research of SJ is supported by a UGC Non-NET fellowship, India. TRS acknowledges the project grant from SERB EMR/2016/002286.
arXiv:1708.08119v2 [astro-ph.CO]: Ramkishor Sharma, Sandhya Jagannathan, T. R. Seshadri, Kandaswamy Subramanian, "Challenges in Inflationary Magnetogenesis: Constraints from Strong Coupling, Backreaction and the Schwinger Effect", http://arxiv.org/abs/1708.08119v2
Straight quantum layer with impurities inducing resonances

Sylwia Kondej

Institute of Physics, University of Zielona Góra, ul. Szafrana 4a, 65246 Zielona Góra, Poland
e-mail: s.kondej@if.uz.zgora.pl

We consider a straight three dimensional quantum layer with a singular potential supported on a straight wire which is localized perpendicularly to the walls and connects them. We prove that an infinite number of embedded eigenvalues appears in this system. Furthermore, we show that after introducing a small surface impurity to the layer, the embedded eigenvalues turn into second sheet resolvent poles which constitute resonances. We discuss the asymptotics of the imaginary component of the resolvent pole with respect to the surface area.

Keywords: Singular perturbations, embedded eigenvalues, resonances.

§ INTRODUCTION

The paper belongs to the line of research often called Schrödinger operators with delta potentials [In the following we will equivalently use the notions delta interaction and delta potential.]. The analysis of this type of potential is motivated by mesoscopic physics systems with semiconductor structures designed in such a way that they can be mathematically modelled by a Dirac delta supported on sets of lower dimension. The support of the delta potential imitates the geometry of the semiconductor material; for example, it can take the form of one dimensional sets (wires) or surfaces with specific geometrical properties. A particle is confined in the semiconductor structure, however the model admits a possibility of tunneling. Therefore these types of systems are called in the literature leaky quantum graphs or wires. One of the most appealing problems in this area is the question of how the geometry of a wire affects the spectrum; cf. <cit.> and <cit.>. The aim of the present paper is to discuss how a surface perturbation leads to resonances.

We consider a non-relativistic three dimensional model of a quantum particle confined between two infinite unpenetrable parallel walls which form a straight quantum layer defined by

Ω := {(x, x_3) ∈ ℝ^2 × [0, π]} .

In the absence of any additional potential, the Hamiltonian of such a system is given by the negative Laplacian -Δ : D(Δ) → L^2(Ω) with the domain D(Δ) = W^{2,2}(Ω) ∩ W^{1,2}_0(Ω), i.e. with Dirichlet boundary conditions on ∂Ω. The spectrum of -Δ is given by σ_ess(-Δ) = [1, ∞); however, it is useful to keep in mind that the energies in the x_3-direction are quantized and given by {k^2}_{k=1}^∞ (a tiny numerical illustration of this transverse quantization is given below).

At the first stage we introduce a straight wire I which connects the walls ∂Ω, being at the same time perpendicular to them. We assume the presence of an interaction localized on I and characterized by a coupling constant α ∈ ℝ. The symbolic Hamiltonian of such a system can be formally written as

-Δ + δ_{α,I} ,

where δ_{α,I} represents the delta potential supported on I. Since the interaction support in this model has co-dimension larger than one, it is called a strongly singular potential. The proper mathematical definition of the Hamiltonian can be formulated in terms of boundary conditions. More precisely, we define H_α as a self adjoint extension of -Δ|_{C^∞_0(Ω∖I)} determined by means of appropriate boundary conditions which functions from the domain D(H_α) satisfy on I. The coupling constant α is involved in the mentioned boundary conditions; however, it is worth noting at this point that α does not contribute additively to the structure of the Hamiltonian.
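As a sanity check of the transverse quantization, the following fragment verifies numerically that the Dirichlet modes χ_n(x_3) = √(2/π) sin(n x_3) used throughout the paper are orthonormal on [0, π] and carry energies n^2; it is only an illustration of the stated facts.

```python
import numpy as np
from scipy.integrate import quad

def chi(n, t):
    """Transverse Dirichlet eigenfunctions of the layer, with energy n^2."""
    return np.sqrt(2.0 / np.pi) * np.sin(n * t)

print(quad(lambda t: chi(2, t) * chi(2, t), 0.0, np.pi)[0])   # ~1.0 (normalized)
print(quad(lambda t: chi(1, t) * chi(3, t), 0.0, np.pi)[0])   # ~0.0 (orthogonal)
# -chi_n'' = n^2 chi_n with chi_n(0) = chi_n(pi) = 0, so the transverse
# energies are 1, 4, 9, ... and the continuum threshold of -Delta is 1.
```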
To describe the spectral properties of H_α we can rely on the radial symmetry of the system and consider the two dimensional system with a point interaction governed by the Hamiltonian H^{(1)}_α. The spectrum of H^{(1)}_α consists of the positive half line and one discrete negative eigenvalue

ξ_α = -4 e^{2(-2πα + ψ(1))} ,

cf. <cit.>, where -ψ(1) = 0.577... is the Euler–Mascheroni constant. This reflects the structure of the spectrum of H_α: namely, for each l ∈ ℕ the number

ϵ_l = ξ_α + l^2

gives rise to an eigenvalue of H_α. Note that infinitely many of the ϵ_l live above the threshold of the essential spectrum and, consequently, the Hamiltonian H_α admits an infinite number of embedded eigenvalues.

In the second stage we introduce to the layer an attractive interaction supported on a finite C^2 surface Σ ⊂ Ω separated from the wire I by some distance, cf. Fig. 1. Suppose that β ≠ 0 is a real number. The Hamiltonian H_{α,β} which governs this system can be symbolically written as

-Δ + δ_{α,I} - βδ_Σ ,

where δ_Σ stands for the Dirac delta supported on Σ; this term represents a weakly singular potential. Again, a proper mathematical definition of H_{α,β} can be formulated as a self adjoint extension of H_α|_{f ∈ D(H_α) ∩ C^∞(Ω∖I) : f = 0 on Σ}. This extension is defined by means of appropriate boundary conditions on Σ discussed in Section <ref>.

The aim of this paper is to analyse how the presence of the surface interaction supported on Σ affects the embedded eigenvalues. The existence of the embedded eigenvalues is a direct consequence of the symmetry. By introducing an additional interaction on Σ we break this symmetry; however, if the perturbation is small then we may expect that the system preserves a "spectral memory" of the original eigenvalues. In Section <ref> we show, for example, that if the area |Σ| → 0 then the embedded eigenvalues ϵ_l turn into complex poles of the resolvent of H_{α,β}. These poles are given by

z_l = ϵ_l + o(|Σ|)   with   Im z_l < 0 ;

the latter confirms that z_l is localized on the second sheet continuation. We derive the explicit formula for the lowest order of the imaginary component of z_l and show that it admits the following asymptotics:

Im z_l = 𝒪(|Σ|^2) .

The poles of the resolvent constitute resonances in the system governed by H_{α,β}, and Im z_l is related to the width of the resonance, given by -2 Im z_l. Finally, let us mention that various types of resonators in waveguides and layers have already been analyzed. For example, in <cit.> the authors study resonances induced by the twisting of a waveguide, which is responsible for breaking the symmetry. A planar waveguide with narrows playing the role of resonators has been studied in <cit.>. On the other hand, straight Dirichlet or Neumann waveguides with windows or barriers inducing resonances have been analyzed in <cit.>. Furthermore, resonances in curved waveguides with finite branches have been described in <cit.>. It is also worth mentioning that quantum waveguides with electric and magnetic fields have been considered, cf. <cit.>. On the other hand, various types of resonators induced by delta potentials in two or three dimensional systems have also been analyzed. Let us mention the results of <cit.>, which describe resonances in terms of symmetry-breaking parameters or by means of the tunnelling effect. In <cit.> the authors consider a straight two dimensional waveguide with a semitransparent perpendicular barrier modelled by a delta potential.
It was shown that after slightly changing the slope of the barrier, the embedded eigenvalues turn into resonances; the widths of these resonances can be expressed in terms of the barrier slope. The present paper is, in a sense, an extension of <cit.>. However, the strongly singular character of the delta interaction supported on I means that even an infinitesimal change of the slope of I cannot be understood as a small perturbation. Therefore the corresponding resolvent poles are not interesting from the physical point of view, since they rapidly escape far away from the real line. In the present model the role of the small perturbation is played by the delta potential on Σ, which leads to resonances. Finally, it is worth mentioning that the spectral properties of quantum waveguides and layers with delta interactions have been studied, for example, in <cit.>. The results of <cit.> concern weakly singular potentials, and in <cit.> the authors consider strongly singular interactions. In the present paper we combine both types of delta interaction and analyze how they affect each other.

General notations:
∙ ℂ stands for the complex plane and ℂ_± for the upper, respectively lower, half-plane.
∙ ‖·‖ and (·,·) denote the norm and the scalar product in L^2(Ω), and (·,·)_Σ defines the scalar product in L^2(Σ).
∙ Suppose that A stands for a self adjoint operator. We denote by σ_ess(A), σ_p(A) and ρ(A), respectively, the essential spectrum, the point spectrum and the resolvent set of A.
∙ The notation C stands for a constant whose value can vary from line to line.

§ PARALLEL WALLS CONNECTED BY WIRE INDUCING EMBEDDED EIGENVALUES

§.§ Free particle in layer.

Let Ω ⊂ ℝ^3 stand for the layer defined by Ω := {x = (x, x_3) : x ∈ ℝ^2, x_3 ∈ [0, π]}; in the following we use the convention x = (x_1, x_2) ∈ ℝ^2. The "free" Hamiltonian is determined by H = -Δ : D(H) = W^{2,2}(Ω) ∩ W^{1,2}_0(Ω) → L^2(Ω), and it admits the decomposition

H = -Δ^{(2)} ⊗ I + I ⊗ (-Δ^{(1)})   on   L^2(ℝ^2) ⊗ L^2(0,π) ,

where Δ^{(2)} : D(Δ^{(2)}) = W^{2,2}(ℝ^2) → L^2(ℝ^2) stands for the two-dimensional Laplacian and Δ^{(1)} : D(Δ^{(1)}) = W^{2,2}(0,π) ∩ W^{1,2}_0(0,π) → L^2(0,π) is the one-dimensional Laplacian with Dirichlet boundary conditions. To define the resolvent of H it is useful to note that the sequence {χ_n}_{n=1}^∞ given by

χ_n(x_3) := √(2/π) sin(n x_3) ,   n ∈ ℕ ,

forms an orthonormal basis in L^2(0,π). Suppose that z ∈ ℂ∖[1,∞). Then R(z) := (-Δ - z)^{-1} defines an integral operator with the kernel

𝒢(z; x, x', x_3, x_3') := (1/2π) ∑_{n=1}^∞ K_0(κ_n(z)|x - x'|) χ_n(x_3) χ_n(x_3') ,

where K_0(·) denotes the Macdonald function, cf. <cit.>, and κ_n(z) := -i√(z - n^2), with the branch Im √(z - n^2) > 0. In the following we will also use the abbreviation 𝒢(z) for (<ref>). The threshold of the spectrum of H is determined by the lowest discrete transversal energy, i.e. 1. Moreover, the spectrum is purely absolutely continuous and consequently takes the form σ(H) = [1, ∞).

§.§ Layer with perpendicular wire: embedded eigenvalue phenomenon.

We introduce a wire defined by the straight segment of length π perpendicular to the walls. The presence of the wire will be modelled by a delta interaction supported on I ⊂ Ω, where I := (0,0) × [0,π]. In view of the radial symmetry, the operator with the delta interaction on I admits a natural decomposition on L^2(Ω) = L^2(ℝ^2) ⊗ L^2(0,π) and acts in the subspace L^2(ℝ^2) as the Schrödinger operator with a one point interaction. Therefore, the delta potential can be determined by appropriate boundary conditions, cf. <cit.>, which can be implemented in each sector of the transversal energy separately; a numerical illustration of the resulting eigenvalue tower ϵ_l = ξ_α + l^2 is given below.
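The following sketch evaluates ξ_α, lists the tower ϵ_l = ξ_α + l^2, flags which members are embedded in [1, ∞), and verifies numerically that ϵ_l is the unique zero of the function Γ_l(z) introduced in the next subsection (for real z < l^2 the argument of the logarithm in Γ_l equals √(l^2 - z)/2). The choice α = 0 is an arbitrary illustration.

```python
import numpy as np
from scipy.optimize import brentq

PSI1 = -0.5772156649015329                    # psi(1) = -gamma

def xi(alpha):
    """Bound state of the 2D point interaction: -4 exp(2(-2 pi alpha + psi(1)))."""
    return -4.0 * np.exp(2.0 * (-2.0 * np.pi * alpha + PSI1))

def Gamma(z, n, alpha):
    """Gamma_n(z) for real z < n^2, where sqrt(z - n^2)/(2i) = sqrt(n^2 - z)/2."""
    return (2.0 * np.pi * alpha - PSI1 + np.log(np.sqrt(n**2 - z) / 2.0)) / (2.0 * np.pi)

alpha = 0.0
for l in range(1, 6):
    eps = xi(alpha) + l**2
    root = brentq(lambda z: Gamma(z, l, alpha), l**2 - 50.0, l**2 - 1e-9)
    print(l, round(eps, 6), round(root, 6), "embedded" if eps > 1.0 else "discrete")
# For alpha = 0: eps_1 ~ -0.26 lies below the threshold 1, while eps_2, eps_3, ...
# are embedded in the essential spectrum; the Birman-Schwinger zeros match eps_l.
```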
To implement these boundary conditions we decompose a function ψ ∈ L^2(Ω) as ψ(x) = ∑_{n=1}^∞ ψ_n(x) χ_n(x_3), where ψ_n(x) := ∫_0^π ψ(x, x_3) χ_n(x_3) dx_3.

D_1) We say that a function ψ belongs to the set D' ⊂ W^{2,2}_loc(Ω∖I) ∩ L^2(Ω) if Δψ ∈ L^2(Ω), ψ|_{∂Ω} = 0 and the following limits

Ξ_n(ψ) := -lim_{|x|→0} (1/ln|x|) ψ_n(x) ,   Ω_n(ψ) := lim_{|x|→0} (ψ_n(|x|) - Ξ_n(ψ) ln|x|)

are finite.

D_2) For α ∈ ℝ, we define the set D(H_α) := {ψ ∈ D' : 2πα Ξ_n(ψ) = Ω_n(ψ) for any n ∈ ℕ} and the operator H_α : D(H_α) → L^2(Ω) which acts as

H_α ψ(x) = -Δψ(x) ,   for x ∈ Ω∖I .

The resulting operator H_α : D(H_α) → L^2(Ω) coincides with

-Δ_α^{(2)} ⊗ I + I ⊗ (-Δ^{(1)})   on   L^2(ℝ^2) ⊗ L^2(0,π) ,

where Δ_α^{(2)} : D(Δ_α^{(2)}) → L^2(ℝ^2) stands for the two-dimensional Laplacian with a point interaction, cf. <cit.>, with the domain D(Δ_α^{(2)}). Consequently, H_α is self adjoint and its spectral properties will be discussed in the next section.

§.§ Resolvent of H_α.

Suppose that z ∈ ℂ_+. We use the standard notation R_α(z) for the resolvent operator, i.e. R_α(z) := (H_α - z)^{-1}. To write out the explicit resolvent formula we introduce

ω_n(z; x) := (1/2π) K_0(κ_n(z)|x|) χ_n(x_3) ,   n ∈ ℕ ;

in the following we will also use the abbreviation ω_n(z) = ω_n(z; ·). The following theorem states the desired result.

The essential spectrum of H_α is given by

σ_ess(H_α) = [1, ∞) .

Furthermore, let [Analogously to the previous discussion we assume Im √(z - n^2) > 0. The logarithmic function z ↦ ln z is defined in the cut plane -π < arg z < π and admits a continuation to the entire logarithmic Riemann surface.]

Γ_n(z) := (1/2π)(2πα + s_n(z)) ,   where   s_n(z) := -ψ(1) + ln(√(z - n^2)/(2i)) .

Suppose that z ∈ ℂ∖[1,∞) and Γ_n(z) ≠ 0 for all n. Then z ∈ ρ(H_α) and the operator R_α(z) admits the Krein-like form:

R_α(z) = R(z) + ∑_{n=1}^∞ Γ_n(z)^{-1} (ω_n(z̄), ·) ω_n(z) .

Our first aim is to show that (<ref>) defines the resolvent of H_α. The operator H_α is defined as a self adjoint extension of -Δ|_{C^∞_0(Ω∖I)}. Suppose that f ∈ C^∞_0(Ω∖I). Then g := (-Δ - z)f ∈ C^∞_0(Ω∖I). Employing the fact that ω_n(z) = 𝒢(z) ∗ (δχ_n), where 𝒢(z) is the kernel defined by (<ref>) and δ = δ(x), we conclude that (ω_n(z̄), g) = ⟨δχ_n, f⟩_{-1,1} = 0, where ⟨·,·⟩_{-1,1} states the duality between W^{-1,2}(Ω) and W^{1,2}(Ω). This, consequently, implies R_α(z)(-Δ - z)f = R(z)(-Δ - z)f = f, in view of (<ref>), which means that R_α(z) defines the resolvent of a self adjoint extension of -Δ|_{C^∞_0(Ω∖I)}. To complete the proof we have to show that any function g = R_α(z)f satisfies the boundary conditions (<ref>). In fact, g admits the unique decomposition g = g_1 + g_2, where g_1 := R(z)f and g_2 := ∑_{n=1}^∞ Γ_n(z)^{-1}(ω_n(z̄), f) ω_n(z). A nontrivial contribution to Ξ_n(g) comes only from g_2, since g_1 ∈ W^{2,2}(Ω). Employing the asymptotic behaviour of the Macdonald function, cf. <cit.>,

K_0(ρ) = ln(1/ρ) + ψ(1) + 𝒪(ρ) ,

we get Ξ_n(g) = (1/2π) Γ_n(z)^{-1}(ω_n(z̄), f) and

Ω_n(g) = (1 - (1/2π) Γ_n(z)^{-1} s_n(z))(ω_n(z̄), f) = α Γ_n(z)^{-1}(ω_n(z̄), f) .

Using (<ref>) one obtains (<ref>). This completes the proof of (<ref>). The stability of the essential spectrum can be concluded in an analogous way as in <cit.>. The key step is to show that R(z) - R_α(z) is compact. The statement can be proved relying on the compactness of the trace map S : W^{2,2}(Ω) → L^2(I), which follows from the boundedness of the trace map, cf. <cit.>, and the compactness theorem, cf. <cit.>. This implies, in view of the boundedness of R(z) : L^2(Ω) → W^{1,2}_0(Ω) ∩ W^{2,2}(Ω), that SR(z) : L^2(Ω) → L^2(I) is compact. Employing the resolvent formula, cf.
<cit.>, and the fact that the remaining operators contributing to R(z) - R_α(z) are bounded, we conclude that R(z) - R_α(z) is compact.

The spectral analysis developed in this work is mainly based on the resolvent properties. In the following we will use the results of <cit.>, where strongly as well as weakly singular potentials were considered. In the following theorem we state the existence of eigenvalues of H_α. Let 𝒜_α := {n ∈ ℕ : ξ_α + n^2 < 1}. Each ϵ_n := ξ_α + n^2, where n ∈ 𝒜_α, defines a discrete eigenvalue of H_α with the corresponding eigenfunction ω_n := ω_n(ϵ_n). In particular, this means that for any α the operator H_α has at least one eigenvalue ϵ_1 below the threshold of the essential spectrum. The operator H_α has an infinite number of embedded eigenvalues. More precisely, for any n ∈ ℕ∖𝒜_α the number ϵ_n := ξ_α + n^2 determines an embedded eigenvalue. In particular, there exists ñ ∈ ℕ∖𝒜_α such that ϵ_n ∈ ((n-1)^2, n^2) for any n > ñ.

The proof is based on the Birman–Schwinger argument which, in view of (<ref>), reads

z ∈ σ_p(H_α) ⇔ ∃ n ∈ ℕ : Γ_n(z) = 0 ,

cf. <cit.>. Note that, given n ∈ ℕ, the function z ↦ Γ_n(z), defined for z with Im √(z - n^2) > 0, has the unique zero at z = ξ_α + n^2, i.e.

Γ_n(ξ_α + n^2) = 0 .

Finally, it follows, for example, from <cit.> that the corresponding eigenfunction takes the form 𝒢(ϵ_n) ∗ χ_n δ. This completes the proof.

§ SURFACE IMPURITY

We define a finite smooth parameterized surface Σ ⊂ Ω as the graph of a map U ∋ q = (q_1, q_2) ↦ x(q) ∈ Ω. The surface element can be calculated by means of the standard formula dΣ = |∂_{q_1}x(q) × ∂_{q_2}x(q)| dq. Additionally, we assume that Σ ∩ I = ∅. Furthermore, let n : Σ → ℝ^3 stand for the unit normal vector (with an arbitrary orientation) and let ∂_n denote the normal derivative defined by the vector n. Relying on the Sobolev theorem, we state that the trace map W^{1,2}(Ω) ∋ ψ ↦ ψ|_Σ ∈ L^2(Σ) constitutes a bounded operator; we use the notation (·,·)_Σ for the scalar product in L^2(Σ). Given β ∈ ℝ∖{0}, we define the following boundary conditions: suppose that ψ ∈ C(Ω) ∩ C^1(Ω∖Σ) satisfies

∂_n^+ ψ|_Σ - ∂_n^- ψ|_Σ = -β ψ|_Σ ,

where the partial derivatives contributing to the above expression are defined as the limits on Σ from the positive, respectively negative, side, and the signs are understood with respect to the direction of n.

D_3) We say that a function ψ belongs to the set D̆ ⊂ W^{2,2}_loc(Ω∖(I∪Σ)) if Δψ ∈ L^2(Ω), ψ|_{∂Ω} = 0 and the limiting equations (<ref>) and (<ref>) are satisfied.

D_4) We define the operator which for f ∈ D̆ acts as -Δf(x) if x ∈ Ω∖(I∪Σ), and let H_{α,β} : D(H_{α,β}) → L^2(Ω) stand for its closure.

To write out the resolvent of H_{α,β}, we define the operator acting from L^2(Ω) to L^2(Σ) as R_{α,Σ}(z)f = (R_α(z)f)|_{L^2(Σ)}. Furthermore, we introduce the operator from L^2(Σ) to L^2(Ω) defined by R_{α,Σ}(z)f = 𝒢_α ∗ fδ, where 𝒢_α stands for the kernel of (<ref>). Finally, we define R_{α,ΣΣ}(z) : L^2(Σ) → L^2(Σ) by R_{α,ΣΣ}(z)f = (R_{α,Σ}(z)f)|_Σ. In view of (<ref>), the latter takes the following form:

R_{α,ΣΣ}(z) = R_{ΣΣ}(z) + ∑_{n=1}^∞ Γ_n(z)^{-1}(w_n(z̄), ·)_Σ w_n(z) ,

where w_n(z) := ω_n(z)|_Σ and R_{ΣΣ}(z) : L^2(Σ) → L^2(Σ) stands for the bilateral embedding of R(z).

Following the strategy developed in <cit.>, we define the set Z ⊂ ρ(H_α) such that z belongs to Z if the operators

(I - βR_{α,ΣΣ}(z))^{-1}   and   (I - βR_{α,ΣΣ}(z̄))^{-1} ,

acting from L^2(Σ) to L^2(Σ), exist and are bounded. Our aim is to show that Z ≠ ∅. To this end we auxiliarily define the quadratic, below bounded form

∫_Ω |∇ψ|^2 dx - β ∫_Σ |ψ|_Σ|^2 dΣ ,   ψ ∈ W^{1,2}_0(Ω) .

Let H̆_β stand for the operator associated with the above form in the sense of the first representation theorem, cf.
<cit.>. Following the arguments from <cit.>, we conclude that I - βR_{ΣΣ}(z) : L^2(Σ) → L^2(Σ) defines the Birman–Schwinger operator for H̆_β. Using Thm. 2.2 of <cit.> one obtains

z ∈ ρ(H̆_β) ⇔ 0 ∈ ρ(I - βR_{ΣΣ}(z)) .

In the following we are interested in a negative spectral parameter and thus we set z = -λ, where λ > 0. Since the spectrum of H̆_β is bounded from below, we conclude

0 ∈ ρ(I - βR_{ΣΣ}(-λ)) ,   for λ large enough.

The next step is to find a bound for the second component contributing to (<ref>). In fact, it can be majorized by

∑_{n=1}^∞ |Γ_n(-λ)^{-1}| ‖w_n(-λ)‖_Σ^2 ≤ C ∑_{n=1}^∞ ‖w_n(-λ)‖_Σ^2 ,

where we applied the uniform bound |Γ_n(-λ)^{-1}| ≤ C, cf. (<ref>). Using the large argument expansion, cf. <cit.>, K_0(z) ∼ √(π/2z) e^{-z}, we get the estimate

|w_n(-λ, x)| ≤ C (1/λ^{1/4}) e^{-r_min(n^2+λ)^{1/2}}   for λ → ∞ ,

where r_min = min_{x∈Σ} |x|. This implies that the norm of the second component of (<ref>) behaves as o(λ^{-1}). Combining this result with (<ref>), we conclude that 0 ∈ ρ(I - βR_{α,ΣΣ}(-λ)) for λ sufficiently large, which shows that (<ref>) holds.

To realize the strategy of <cit.>, we observe that the embedding operator τ^∗ : L^2(Σ) → W^{-1,2}(Ω), acting as τ^∗f = f ∗ δ, is bounded and, moreover,

Ran τ^∗ ∩ L^2(Ω) = {0} .

Suppose that z ∈ Z. Using (<ref>) together with Thm. 2.1 of <cit.>, we conclude that the expression

R_{α,β}(z) = R_α(z) + R_{α,Σ}(z)(I - βR_{α,ΣΣ}(z))^{-1} R_{α,Σ}(z)

defines the resolvent of a self adjoint operator. We have R_{α,β}(z) = (H_{α,β} - z)^{-1}. To show the statement we repeat the strategy applied in the proof of Theorem <ref>. The operator H_{α,β} is defined as the self adjoint extension of -Δ|_{C^∞_0(Ω∖(I∪Σ))} determined by imposing the boundary conditions (<ref>) and (<ref>). The idea is to show that any function from the domain D(H_{α,β}) satisfies (<ref>) and (<ref>). Since the proof can be done by mimicking the arguments from the proof of Theorem <ref>, we omit further details. Furthermore, repeating the arguments from the proof of Theorem <ref>, we state that

σ_ess(H_{α,β}) = [1, ∞) .

Notation. In the following we will be interested in the spectral asymptotics for small |Σ|. Therefore, we introduce an appropriate scaling with respect to a point x_0 ∈ Σ. Namely, for a small positive parameter δ we define Σ_δ as the graph of U ∋ q ↦ x_δ(q) ∈ Ω, where

x_δ(q) := δx(q) - δx_0 + x_0 .

For example, a sphere of radius R centered at x_0 turns into a sphere of radius δR after the scaling. Note that the identity |∂_{q_1}x_δ(q) × ∂_{q_2}x_δ(q)| = δ^2 |∂_{q_1}x(q) × ∂_{q_2}x(q)| implies the scaling of the surface area, |Σ_δ| = δ^2 |Σ|.

§ PRELIMINARY RESULTS FOR THE ANALYSIS OF POLES

The Birman–Schwinger argument relates the eigenvalues of H_{α,β} to zeros of I - βR_{α,ΣΣ}(z), determined by the condition ker(I - βR_{α,ΣΣ}(z)) ≠ {0}. To recover resonances we show that R_{α,ΣΣ}(z) has a second sheet continuation R^{II}_{α,ΣΣ}(z) and that the condition ker(I - βR^{II}_{α,ΣΣ}(z)) ≠ {0} holds for certain z ∈ ℂ_-.

§.§ Analytic continuation of R_{α,ΣΣ}(z)

We start with the analysis of the first component of R_{α,ΣΣ}(z), determined by R_{ΣΣ}(z), cf. (<ref>). Since R_{ΣΣ}(z) is defined by means of the embedding of the kernel 𝒢(z), see (<ref>), the following lemma will be useful for the further discussion. For any k ∈ ℕ the function 𝒢(z) admits a second sheet continuation 𝒢^{II}(z) through J_k := (k^2, (k+1)^2) to an open set Π_k ⊂ ℂ_- with ∂Π_k ∩ ℝ = J_k. Moreover, 𝒢^{II}(z) takes the form

𝒢^{II}(z; x, x', x_3, x_3') = (1/2π) ∑_{n=1}^∞ Z_0(i√(z - n^2)|x - x'|) χ_n(x_3) χ_n(x_3') ,

where

Z_0(i√(z - n^2)ρ) = K_0(-i√(z - n^2)ρ)   for n > k ,
Z_0(i√(z - n^2)ρ) = K_0(-i√(z - n^2)ρ) + iπ I_0(i√(z - n^2)ρ)   for n ≤ k ,

and I_0(·) denotes the modified Bessel function.
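The continuation built into Z_0 rests on the standard formula K_0(z e^{iπ}) = K_0(z) - iπ I_0(z), used in the proof below. A quick numerical confirmation of that identity (the rotation is stopped just short of the cut of the principal branch; this is merely a check, not part of the argument):

```python
import numpy as np
from scipy.special import kv, iv   # K_0 and I_0, which accept complex arguments

x = 0.7
lhs = kv(0, x * np.exp(1j * np.pi * (1.0 - 1e-9)))   # K_0 continued to arg ~ pi
rhs = kv(0, x) - 1j * np.pi * iv(0, x)               # K_0(x) - i pi I_0(x)
print(lhs, rhs)                                      # the two values agree
```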
The proof is based on the edge-of-the-wedge theorem, i.e. our aim is to establish the equality

𝒢(λ + i0) = 𝒢^{II}(λ - i0) ,   for λ ∈ J_k .

In fact, it suffices to show that the analogous formula holds for Z_0 and z = λ ± i0. Assume first that n > k. Then √(λ - n^2 ± i0) = √(λ - n^2), since Im √(λ - n^2) > 0. Furthermore, the function K_0(·) is analytic in the upper half-plane; consequently, we have K_0(-i√(λ - n^2 ± i0)ρ) = K_0(√(n^2 - λ)ρ). Assume now that n ≤ k. Then √(λ - n^2 ± i0) = ±√(λ - n^2) ∈ ℝ, which implies

K_0(-i√(λ - n^2 + i0)ρ) = K_0(-i√(λ - n^2)ρ) .

On the other hand, using the analytic continuation formulae

K_0(z e^{mπi}) = K_0(z) - imπ I_0(z)   and   I_0(z e^{mπi}) = I_0(z) ,   for m ∈ ℤ ,

we get

Z_0(i√(λ - n^2 - i0)ρ) = K_0(i√(λ - n^2)ρ) + iπ I_0(-i√(λ - n^2)ρ) = K_0(-i√(λ - n^2)ρ) .

This completes the proof.

The above lemma provides the second sheet continuation of R(z) as well as of R_{ΣΣ}(z); the latter is defined as the bilateral embedding of R^{II}(z) into L^2(Σ). Note that for each k ∈ ℕ the analytic continuation of 𝒢(·) through J_k leads to a different branch. Therefore, we have to keep in mind that the analytic continuation of 𝒢(·) is k-dependent. In the next lemma we show that the operator R^{II}_{Σ_δΣ_δ}(z) is bounded and derive the asymptotics of its operator norm as δ → 0.

Assume that k ∈ ℕ and λ ∈ J_k. Let z = λ - iε, where ε is a small positive number. The operator R^{II}_{Σ_δΣ_δ}(z) is bounded and its norm admits the asymptotics

‖R^{II}_{Σ_δΣ_δ}(z)‖ = o(1) ,

where the error term is understood with respect to δ.

To estimate the kernel of R^{II}_{Σ_δΣ_δ}(z) we use (<ref>), i.e.

𝒢^{II}_{Σ_δΣ_δ}(z; ρ, x_3, x_3') = (1/2π) ( ∑_{n=1}^∞ K_0(κ_n(z)ρ) χ_n(x_3) χ_n(x_3') + iπ ∑_{n≤k} I_0(-κ_n(z)ρ) χ_n(x_3) χ_n(x_3') ) ,

where ρ = |x - x'| and x, x' ∈ Σ_δ. First, we consider (<ref>). The expression |I_0(-κ_n(z)ρ) χ_n(·) χ_n(·)| is bounded. Therefore the operator defined by the kernel (<ref>) is also bounded and the corresponding operator norm in L^2(Σ_δ) behaves as |Σ_δ|^2 = 𝒪(δ^4). The analysis of the term (<ref>) is more involved because it consists of an infinite number of components. The asymptotics

K_0(κ_n(z)ρ) - K_0(nρ) = ln√(1 - z/n^2) (1 + 𝒪(ρ))

implies

∑_{n=1}^∞ |(K_0(κ_n(z)ρ) - K_0(nρ)) χ_n(x_3) χ_n(x_3')| = C + 𝒪(ρ) ;

recall that κ_n(z) is defined by (<ref>). To estimate ∑_{n=1}^∞ K_0(nρ) χ_n(x_3) χ_n(x_3') we borrow the idea from <cit.> and use <cit.> to get

∑_{n=1}^∞ K_0(nρ) cos(na) = π/(2√(ρ^2 + a^2)) + (1/2)(ln(ρ/4π) - ψ(1)) + (π/2) ∑_{n=1}^∞ ( 1/√((2nπ + a)^2 + ρ^2) - 1/(2nπ) ) + (π/2) ∑_{n=1}^∞ ( 1/√((2nπ - a)^2 + ρ^2) - 1/(2nπ) ) .

For x, x' ∈ Σ_δ the terms (<ref>) and (<ref>) can be majorized by C(∑_{n=1}^∞ 1/n^2), i.e. by a uniform constant. Consequently, using the above estimates together with the identity sin a sin b = (1/2)(cos(a - b) - cos(a + b)), we get after straightforward calculations

| ∑_{n=1}^∞ K_0(κ_n(z)ρ) χ_n(x_3) χ_n(x_3') | ≤ C (1/|x - x'| + |ln|x - x'||) ;

the singular terms in the above estimate come from (<ref>). Let us analyze the left hand side of (<ref>). First, we consider the component 𝒫(x,x') := 1/|x - x'|, which gives

∫_{Σ_δ} 𝒫(x,x') dΣ_δ = (𝒫 ∗ δ_{Σ_δ})(x) = ∫_{Σ_δ} (1/|x - x'|) dΣ_δ .

To conclude the desired convergence we employ the concept of a generalized Kato measure. Namely, since the Dirac delta on Σ_δ defines a Kato measure, we obtain

sup_{x∈Σ_δ} ∫_{Σ_δ} 𝒫(x,x') dΣ_δ = o(1) ,

where the asymptotics on the right hand side is understood in the sense of convergence with respect to δ. Employing the Schur argument, we conclude that the norm of the integral operator with kernel 𝒫(x,x'), acting from L^2(Σ_δ) to L^2(Σ_δ), behaves as o(1).
The term ln|x - x'| contributing to (<ref>) can be estimated in an analogous way. To recover the second sheet continuation of R_{α,ΣΣ}(·), it remains to construct the analytic extensions of ω_n(z) and Γ_n(z), cf. (<ref>). Given n ∈ ℕ, the functions ω_n(z) and Γ_n(z) admit second sheet continuations ω^{II}_n(z) and Γ^{II}_n(z) to Π_k through J_k = (k^2, (k+1)^2), k ∈ ℕ, defined by

ω^{II}_n(z; x) := (1/2π) Z_0(i√(z - n^2)|x|) χ_n(x_3) ,

where Z_0 is determined by (<ref>), and

Γ^{II}_n(z) = (1/2π)(2πα - ψ(1) + ln(√(z - n^2)/(2i)))   for n > k ,
Γ^{II}_n(z) = (1/2π)(2πα - ψ(1) + ln(√(z - n^2)/(2i)) - πi)   for n ≤ k .

The construction of ω^{II}_n(z) can be obtained by mimicking the arguments from the proof of Lemma <ref>. To get Γ^{II}_n(z) we first assume k < n and z = λ ± iε, λ ∈ (k^2, (k+1)^2). Then ln(√(λ - n^2 ± i0)/i) = ln√(n^2 - λ) and, consequently, Γ_n(λ + i0) = Γ^{II}_n(λ - i0). Assume now that n ≤ k. Then we have λ - n^2 > 0 and ln(√(λ - n^2 ± i0)/i) = ln√(λ - n^2 ± i0) ∓ (π/2)i, which implies

Γ_n(λ + i0) = Γ^{II}_n(λ - i0) = (1/2π)(2πα - ψ(1) + ln√(λ - n^2) - (π/2)i) .

This, in view of the edge-of-the-wedge theorem, completes the proof.

Henceforth, we assume that ϵ_n ≠ k^2 for any k, n ∈ ℕ. Suppose z = λ - iε, where ε is a small non-negative number and λ ∈ J_k. At most one eigenvalue ϵ_l can exist in the interval J_k. Assuming that z ∈ (Π_k ∪ J_k)∖{ϵ_l}, we define the analytic functions z ↦ Γ^{II}_n(z)^{-1} for n ∈ ℕ. Then the second sheet continuation of the resolvent takes the form

R^{II}_{α,ΣΣ}(z) = R^{II}_{ΣΣ}(z) + ∑_{n=1}^∞ Γ^{II}_n(z)^{-1}(w^{II}_n(z̄), ·)_Σ w^{II}_n(z)

for z ∈ (Π_k ∪ J_k)∖{ϵ_l}.

Notation. In the following we will omit the superscript II, keeping in mind that all quantities depending on z are defined by the second sheet continuation if Im z < 0, which admits infinitely many branches Π_k, k ∈ ℕ. Assume that ϵ_l ∈ J_k. Having later purposes in mind, we define

A_l(z) := ∑_{n≠l} Γ_n(z)^{-1}(w_n(z̄), ·)_{Σ_δ} w_n(z) ,

for z ∈ (Π_k ∪ J_k)∖{ϵ_l}. The following lemma states the operator norm asymptotics. The operator A_l(z) : L^2(Σ_δ) → L^2(Σ_δ) is bounded and its operator norm satisfies

‖A_l(z)‖ ≤ C |Σ_δ| .

Suppose that z = λ - iε. We derive the estimates

|(A_l(z)f, f)_{Σ_δ}| ≤ ( ∑_{n≠l} |Γ_n(z)^{-1}| ‖w_n(z)‖_{Σ_δ}^2 ) ‖f‖_{Σ_δ}^2 ≤ C ( ∑_{n≠l} ‖w_n(z)‖_{Σ_δ}^2 ) ‖f‖_{Σ_δ}^2 ;

to obtain (<ref>) we use (<ref>). Now our aim is to show

∑_{n≠l} ‖w_n(z)‖_{Σ_δ}^2 ≤ C |Σ_δ| .

To find a bound for the left hand side of (<ref>), we first analyse the behaviour of w_n(z) for large n and z ∈ (Π_k ∪ J_k)∖{ϵ_l}. For this aim we employ (<ref>) and (<ref>). Note that for n > k the function w_n(z) admits the representation

w_n(z, x) = (1/2π) K_0(κ_n(z)|x|) χ_n(x_3) ,

where x ∈ Σ_δ. Using again the large argument expansion (<ref>) and the fact that Re(-i√(z - n^2)) ∼ n, we get the estimate

|w_n(z, x)| ≤ C e^{-r_min n} ,

where r_min = min_{x∈Σ_δ} |x|. This implies

∑_{n>k, n≠l} ‖w_n(z)‖_{Σ_δ}^2 ≤ C |Σ_δ| = 𝒪(δ^2) .

On the other hand, for n ≤ k the function w_n(z) consists of K_0 and I_0, see (<ref>). Both functions are continuous on Σ_δ and therefore ‖w_n‖_{Σ_δ}^2 ≤ C |Σ_δ|. Since the number of such components is finite, in view of (<ref>) we come to (<ref>), which completes the proof.

§ COMPLEX POLES OF RESOLVENT

Assume that ϵ_l ∈ J_k and suppose that δ is sufficiently small. It follows from Lemmas <ref> and <ref> that the operators I - βR_{Σ_δΣ_δ}(z) and I - βA_l(z), acting in L^2(Σ_δ), are invertible for z ∈ (Π_k ∪ J_k)∖{ϵ_l}, and it makes sense to introduce the auxiliary notation

G_{Σ_δ}(z) := (I - βR_{Σ_δΣ_δ}(z))^{-1} .

Since the norm of G_{Σ_δ}(z)A_l(z) : L^2(Σ_δ) → L^2(Σ_δ) tends to 0 as δ → 0, the operator I - βG_{Σ_δ}(z)A_l(z) is invertible as well.
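Anticipating the first order pole formula derived in the next section, z_l ≈ ϵ_l + 4πξ_αβ ϑ_l(ϵ_l, δ), the following fragment packages that relation together with the resonance width -2 Im z_l. The sample numbers fed to it are purely illustrative, not computed from a concrete surface Σ_δ.

```python
import numpy as np

PSI1 = -0.5772156649015329

def xi(alpha):
    return -4.0 * np.exp(2.0 * (-2.0 * np.pi * alpha + PSI1))

def pole_first_order(eps_l, alpha, beta, theta):
    """First order resonance pole z_l = eps_l + 4 pi xi_alpha beta theta,
    where theta stands for (w_l, T_l w_l)_Sigma_delta; the width of the
    resonance is -2 Im z_l."""
    z = eps_l + 4.0 * np.pi * xi(alpha) * beta * theta
    return z, -2.0 * z.imag

# Hypothetical theta with a small positive imaginary part of order delta^4:
z, width = pole_first_order(eps_l=2.74, alpha=0.0, beta=0.3, theta=1e-3 + 1e-4j)
print(z, width)   # Im z < 0 since xi_alpha < 0: a second-sheet pole
```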
The following theorem transfers the analysis of resonances from an operator equation to an equation for a complex valued function. Suppose ϵ_l ∈ J_k and assume that z ∈ (Π_k ∪ J_k)∖{ϵ_l}. Then the condition

ker(I - βR_{α,Σ_δΣ_δ}(z)) ≠ {0}

is equivalent to

Γ_l(z) - β(w_l(z̄), T_l(z)w_l(z))_{Σ_δ} = 0 ,   where   T_l(z) := (I - βG_{Σ_δ}(z)A_l(z))^{-1} G_{Σ_δ}(z) .

The strategy of the proof is partially based on an idea borrowed from <cit.>. The equivalences

I - βR_{α,Σ_δΣ_δ}(z) = (I - βR_{Σ_δΣ_δ}(z)) (I - βG_{Σ_δ}(z)A_l(z) - βΓ_l(z)^{-1}(w_l(z̄), ·)_{Σ_δ} G_{Σ_δ}(z)w_l(z)) = (I - βR_{Σ_δΣ_δ}(z))(I - βG_{Σ_δ}(z)A_l(z)) [I - βΓ_l(z)^{-1}(w_l(z̄), ·)_{Σ_δ} T_l(z)w_l(z)]

show that (<ref>) is equivalent to

ker[I - βΓ_l(z)^{-1}(w_l(z̄), ·)_{Σ_δ} T_l(z)w_l(z)] ≠ {0} .

The above condition is formulated for a rank one operator and, consequently, it is equivalent to (<ref>).

Theorem <ref> shows that the problem of complex poles of the resolvent R_{α,β}(z) can be shifted to the problem of analyzing the roots of

η_l(z,δ) = 0 ,   where   η_l(z,δ) := Γ_l(z) - βϑ_l(z,δ)   and   ϑ_l(z,δ) := (w_l(z̄), T_l(z)w_l(z))_{Σ_δ} .

The further discussion is devoted to finding the roots of (<ref>). In the following we apply the expansion (I + A)^{-1} = I - A + A^2 - A^3 + ..., valid if ‖A‖ < 1. Taking -βR_{Σ_δΣ_δ}(z) as A, we get

G_{Σ_δ}(z) = (I - βR_{Σ_δΣ_δ}(z))^{-1} = I + R̆(z) ,   R̆(z) := ∑_{n=1}^∞ (βR_{Σ_δΣ_δ}(z))^n .

Expanding the analogous sum for -βG_{Σ_δ}(z)A_l(z), one obtains

(I - βG_{Σ_δ}(z)A_l(z))^{-1} = I + βA_l(z) + βR̆(z)A_l(z) + ...

In view of Lemmas <ref> and <ref>, the norm of R_{Σ_δΣ_δ}(z)A_l(z) behaves as o(1)‖A_l(z)‖_{Σ_δ} for δ → 0, and the same asymptotics holds for the operator norm of R̆(z)A_l(z). The further terms in (<ref>) are of smaller order with respect to δ. Consequently, applying again (<ref>), we conclude that T_l(z) admits the following expansion:

T_l(z) = I + βA_l(z) + R̆(z) + ...

Using the above statements we can formulate the main result. Suppose that ϵ_l ∈ J_k and consider the function η_l(z,δ) : (Π_k ∪ J_k) × [0, δ_0) → ℂ, where δ_0 > 0, defined by (<ref>). Then the equation

η_l(z, δ) = 0

possesses a solution determined by a function δ ↦ z_l(δ) ∈ ℂ with the following asymptotics:

z_l(δ) = ϵ_l + μ_l(δ) ,   |μ_l(δ)| = o(1) .

Moreover, the lowest order term of μ_l(·) takes the form

μ_l(δ) = 4πξ_αβ { ‖w_l(ϵ_l)‖_{Σ_δ}^2 + β ∑_{n≠l} Γ_n(ϵ_l)^{-1} |(w_l(ϵ_l), w_n(ϵ_l))_{Σ_δ}|^2 + (w_l(ϵ_l), R̆(ϵ_l)w_l(ϵ_l))_{Σ_δ} } .

Note that z ↦ η_l(z,δ), cf. (<ref>), is analytic and η_l(ϵ_l, 0) = 0. Using (<ref>) one obtains

dΓ_n(z)/dz |_{z=ϵ_n} = 1/(4πξ_α) < 0 ,   n ∈ ℕ .

Combining this with

∂ϑ_l(z,δ)/∂z |_{z=ϵ_l, δ=0} = 0 ,

we get

∂η_l/∂z |_{δ=0} = 1/(4πξ_α) ≠ 0 .

In view of the Implicit Function Theorem, we conclude that the equation (<ref>) admits a unique solution, which is a continuous function δ ↦ z_l(δ) with z_l(δ) = ϵ_l + o(1). To reconstruct the asymptotics of z_l(·), we first expand Γ_l(z) in a Taylor series,

Γ_l(z) = (1/(4πξ_α))(z - ϵ_l) + 𝒪((z - ϵ_l)^2) .

Then the spectral equation (<ref>) reads

z = ϵ_l + 4πξ_αβ ϑ_l(z,δ) + 𝒪((z - ϵ_l)^2) .

Now we expand ϑ_l(z,δ). Using (<ref>) and (<ref>), we reconstruct its first order term, which reads

{ ‖w_l(ϵ_l)‖_{Σ_δ}^2 + β ∑_{n≠l} Γ_n(ϵ_l)^{-1} |(w_l(ϵ_l), w_n(ϵ_l))_{Σ_δ}|^2 + (w_l(ϵ_l), R̆(ϵ_l)w_l(ϵ_l))_{Σ_δ} } .

Applying the asymptotics z_l(δ) = ϵ_l + o(δ) and the fact that ϑ_l(·,·) is analytic with respect to the complex variable, we obtain the formula for μ_l(·).

§.§ Analysis of imaginary part of the pole

Since the imaginary component of the resonance pole has a physical meaning, we dedicate a special discussion to this problem.
The information on the lowest order term of the pole's imaginary component is contained in (<ref>) and (<ref>). On the other hand, note that only the components with subscripts n ≤ k admit non-zero imaginary parts. Therefore

Im ( 4πξ_αβ ( β ∑_{n≤k} Γ_n(ϵ_l)^{-1} |(w_l(ϵ_l), w_n(ϵ_l))_{Σ_δ}|^2 + (w_l(ϵ_l), R̆(ϵ_l)w_l(ϵ_l))_{Σ_δ} ) )

determines the lowest order term of Im μ_l(δ).

Sign and asymptotics of Im μ_l(δ) with respect to Σ_δ. Recall that ϵ_l ∈ J_k. First we analyse (<ref>), and for this aim we define

ι_{l,n} := (1/2π)(2πα + ln(√(ϵ_l - n^2)/2) - ψ(1)) ,   for n ≤ k .

Relying on (<ref>) we get

Γ_n(ϵ_l)^{-1} = [1/(ι_{l,n}^2 + (1/2)^2)] (ι_{l,n} + (1/2)i) ,   if n ≤ k .

Consequently, formula (<ref>) is equivalent to

4πξ_αβ^2 ∑_{n≤k} (1/2) [1/(ι_{l,n}^2 + (1/2)^2)] |(w_l(ϵ_l), w_n(ϵ_l))_{Σ_δ}|^2 .

The above expression is negative because ξ_α < 0. Moreover, since both w_l(ϵ_l) and w_n(ϵ_l) are continuous in Ω∖I, we have |(w_l(ϵ_l), w_n(ϵ_l))_{Σ_δ}|^2 ∼ |Σ_δ|^2. This means that (<ref>) behaves as 𝒪(|Σ_δ|^2). To recover the asymptotics of (<ref>) we restrict ourselves to the lowest order term of R̆(z), cf. (<ref>), namely

υ_l := 4πξ_αβ^2 Im (w_l(ϵ_l), R_{Σ_δΣ_δ}(ϵ_l)w_l(ϵ_l))_{Σ_δ} .

Using the analytic continuation formulae (<ref>) and employing the small argument expansion, cf. <cit.>, K_0(z) ∼ -ln z, where -π < arg z < π states the plane cut for the logarithmic function, one gets

υ_l ∼ πξ_αβ^2 ∑_{n≤k} ( ∫_{Σ_δ} w_l(ϵ_l)χ_n )^2 = 𝒪(|Σ_δ|^2) .

One can easily see that υ_l < 0. Summing up the above discussion, we can formulate the following conclusion.

The resonance pole takes the form z_l(δ) = ϵ_l + μ_l(δ), with the lowest order of Im μ_l(δ) given by

πξ_αβ^2 ∑_{n≤k} ( [2/(ι_{l,n}^2 + (1/2)^2)] |(w_l(ϵ_l), w_n(ϵ_l))_{Σ_δ}|^2 + ( ∫_{Σ_δ} w_l(ϵ_l)χ_n )^2 ) .

It follows from the above formula that Im μ_l(δ) < 0 and the asymptotics

Im μ_l(δ) = 𝒪(|Σ_δ|^2)

holds. Moreover, the lowest order of Im μ_l(δ) is independent of the sign of β.

Note that for special geometrical cases the embedded eigenvalues can survive after introducing Σ_δ, since the "perturbed" eigenfunctions are not affected by the presence of Σ_δ. Let us consider Π_l := {x ∈ Ω : x = (x, π/l)}, l ∈ ℕ, and assume that Σ_δ ⊂ Π_l. Then w_{ml}(z) = 0 for each m ∈ ℕ and, consequently, ϑ_{ml}(z,δ) = 0, cf. (<ref>). This implies the following statement. Suppose that Σ ⊂ Π_l. Then for all m ∈ ℕ the numbers ϵ_{ml} remain embedded eigenvalues of H_{α,β}.

§.§ Acknowledgements

The author thanks the referees for reading the paper carefully, removing errors and recommending various improvements in the exposition. The work was supported by the project DEC-2013/11/B/ST1/03067 of the Polish National Science Centre.

References

[AGHH] S. Albeverio, F. Gesztesy, R. Høegh-Krohn, H. Holden: Solvable Models in Quantum Mechanics, 2nd printing (with Appendix by P. Exner), AMS, Providence, R.I., 2004.
[AS] M. Abramowitz, I. Stegun: Handbook of Mathematical Functions, 1972.
[BPS] L. M. Baskin, B. A. Plamenevskii, O. V. Sarafanov: Effect of magnetic field on resonant tunneling in 3D waveguides of variable cross-section, J. Math. Sci. 196 (4) (2013), 469–489.
[BKNPS] L. M. Baskin, M. Kabardov, P. Neittaanmäki, B. A. Plamenevskii, O. V. Sarafanov: Asymptotic and numerical study of resonant tunneling in two-dimensional quantum waveguides of variable cross section, Computational Mathematics and Mathematical Physics 53 (11) (2013), 1664–1683.
[BEHL] J. Behrndt, P. Exner, M. Holzmann, V. Lotoreichik: Approximation of Schrödinger operators with δ-interactions supported on hypersurfaces, Math. Nachr. (2016), 1–34.
[BEG] D. Borisov, P. Exner, A. Golovina: Tunneling resonances in systems without a classical trapping, J. Math. Phys.
54, 012102 (2013).BEKS J.F. Brasche, P. Exner, Yu.A. Kuperin, P. Šeba: Schrödinger operators with singular interactions, J. Math. Anal. Appl. 184 (1994), 112–139. BG P. Briet, M. Gharsalli: Stark resonances in 2-dimensional curved quantum waveguidesRep. Math. Phys.76 (3), (2015) 317–338.Chu Yu.P. Chuburin: Perturbation Theory of Resonances and Embedded Eigenvalues of the Schrodinger Operator For a Crystal Film Teoret. Mat. Fiz. 143 3 (2005), 417–430.DNG A. L. Delitsyn, B-T. Nguyen, D. S. Grebenkov: Trapped modes in finite quantum waveguides The European Physical Journal B-Condensed Matter and Complex Systems85 6 (2012), 1-12.EK1 P. Exner, S. Kondej: Curvature-induced bound states for a δ interaction supported by a curve in ℝ^3, Ann. H. Poincaré 3 (2002), 967-981.EK3 P. Exner, S. Kondej: Schrödinger operators with singular interactions: a model of tunneling resonances Journal of Physics A : Mathematical and General37 (2004), 8255–8277. EK-book P. Exner, H. Kovařík, Quantum wavequides, Springer 2015.EKrecirik99 P. Exner, D. Krejčiřík: Quantum waveguides with a lateral semitransparent barrier: spectral and scattering properties Journal of Physics A: Mathematical and General 32 (1999), 4475-4494.EN P. Exner, K. Nemcová:Quantum mechanics of layers with a finite number of point perturbation, Journal of Physics A: Mathematical and General 43(3) (2002), 1152-1184.Popov2000 S. V. Frolov, I. Yu. Popov:Resonances for laterally coupled quantum waveguides, Journal of Mathematical Physics41 (2000), 4391-4405.Popov2003 S. V. Frolov, I. Yu. Popov:Three laterally coupled quantum waveguides: breaking of symmetry and resonance asymptotics, Journal of Physics A: Mathematical and General36(6) (2003). Kato T. Kato: Pertubation theory for linear operators, Springer-Verlag Berlin Heidelberg New York 1980.Kondej2012 S. Kondej: Resonances induced by broken symmetry in a system with a singular potential, Ann. Henri Poincaré 13 (2012) KK013 S. Kondej, D. Krejčiřík: Spectral analysis of a quantum system with a double line singular interaction Publ. RIMS, Kyoto University 49 (2013), 831-859.KondejLeonski2014 S. Kondej, W. Leoński:Mathematical and Theoretical Journal of Physics A 47(22)(2014), 1416–1438. KS H. Kovařík, A. Sacchetti: Resonances in twisted quantum waveguides, J. Phys. A 40(2007) 8371–8384.LM J.L. Lions, E. Magenes: Non-Homogeneous Boundary Value Problems and Applications, vol. I, Springer, Heidelberg 1972.Po A. Posilicano: A Krein-like Formula for Singular Perturbations of Self-Adjoint Operators and Applications, J. Funct. Anal. 183 (2001), 109-147.Po2 A. Posilicano: Boundary triples and Weyls function for singular perturbations of self-adjoint operators, Meth. Fun. Anal. Top. 10 (2) (2004), 57-63.Prudnikov A. P. Prudnikov Y. O. Brychkov, O. I. Marichev, Integraly i rady, I.Elementarnye funkcii, II.Specialnye funkcii, III. Nauka Moskva. 1981-1983.
http://arxiv.org/abs/1708.07684v1
{ "authors": [ "Sylwia Kondej" ], "categories": [ "math-ph", "math.MP" ], "primary_category": "math-ph", "published": "20170825104304", "title": "Straight quantum layer with impurities inducing resonances" }
http://arxiv.org/abs/1708.08148v1
{ "authors": [ "Soumik Pal" ], "categories": [ "math.PR", "91G10, 46N10" ], "primary_category": "math.PR", "published": "20170827221815", "title": "Embedding optimal transports in statistical manifolds" }
Symplectic rational G-surfaces and equivariant symplectic cones Weimin Chen, Tian-Jun Li, and Weiwei Wu December 30, 2023 =============================================================== Short Video Abstract: https://youtu.be/Cq1n0dqBisc Prediction of disease onset from patient survey and lifestyle data is quickly becoming an important tool for diagnosing a disease before it progresses. In this study, data from the National Health and Nutrition Examination Survey (NHANES) questionnaire is used to predict the onset of type II diabetes. An ensemble model using the output of five classification algorithms was developed to predict the onset of diabetes based on 16 features. The ensemble model had an AUC of 0.834, indicating high performance. § INTRODUCTION Machine learning (ML) has proven itself a very useful tool in the ever-expanding field of bioinformatics. It has been effectively used in the early prediction of diseases such as cancer <cit.>, and work is being done in predicting the onset of Alzheimer's and Parkinson's disease <cit.>. These predictions are based on data from gene sequencing and biomarkers, among other types of biological measurements. Efforts have also been made to predict the onset of diseases such as type II diabetes based on survey data. Thanks to technology that makes it much easier to collect survey data, in the future more data may become available on a larger sample of the population. This larger volume of survey data presents a new opportunity to improve overall prediction of disease, especially for diseases where lifestyle is highly correlated to disease onset <cit.>. Previous work has focused on the use of survey data to predict the onset of diabetes in a large sample of the population using Support Vector Machines (SVM) <cit.>, obtaining an area under the receiver operating characteristic (ROC) curve of 0.83. The dataset used in <cit.> is called the National Health and Nutrition Examination Survey (NHANES) <cit.>. Our goal is to use this same dataset and attempt to improve prediction accuracy. A secondary objective is to more clearly interpret the results, and to indicate how such a model could be used in the real world to improve preventative care. This paper is organized as follows. First, we describe the NHANES dataset to be analyzed in this study. Next, the implemented classification methods are described, along with the proposed ensemble model. Results that indicate the performance of each proposed model are then presented. Finally, model performance is discussed, as well as some real-world applications of the proposed classification models. § NHANES SURVEY DATA, FEATURE SELECTION, AND LABEL DETERMINATION The National Health and Nutrition Examination Survey (NHANES) data is an on-going cross-sectional sample survey of the US population where information about participants is gathered during in-home interviews. In addition to the in-home surveys, participants also have the option to obtain a physical examination in mobile examination centers <cit.>. §.§ Patient Exclusion and Label Assignment As in <cit.>, we limited our study to non-pregnant participants over 20 years of age. We focused our efforts on three waves of the survey conducted between 1999-2004, given the consistency across survey questions between waves from this time period. Our problem is a binary classification, and as such we must assign labels to our samples. To do this, we used two measurements from each sample.
The first was the patient's answer to the question, "Has a doctor ever told you that you have diabetes?" If the patient answered `yes,' then a `1' was assigned, indicating type II diabetes, and a `0' otherwise. Using patients that responded to this question produced about 900 samples. The second, and more common, method for determining the label of a patient was the value of their plasma glucose level, tested during examination. If the patient's glucose level was greater than 126 mg/dL, they were labeled as diabetic and given a `1'. If their glucose level was less than this threshold, they were assigned to the non-diabetic group with a `0'. This method of labeling produced the remaining 4600 samples. With the labels determined, we now move on to feature selection. §.§ Feature Selection The features were chosen to be similar to the features used in <cit.>, since much effort was already put in during this previous study to select good features. Additional features were examined, such as diet; however, upon further analysis most of these were excluded due to their negligible effect on performance, or because a significant portion of the feature data was missing (> 60% missing). Two features that were added to the data were cholesterol and leg length, which were found to be important features in <cit.>. Of the multitude of potential features, we chose to focus our attention on only 16 features. These 16 features are described in Table <ref>. This made interpretation simpler and training time shorter. Training time was a major concern because of the many parameters to be optimized during cross-validation. §.§ Cross-Validation and Test Set After exclusion and labeling, 5515 total samples were available from the NHANES 1999 to 2004 dataset. Before model training, a 20% test set was removed from the entire dataset. This left 4412 samples for training and 1103 for testing. For hyperparameter tuning, 10-fold cross-validation was used with grid search. To obtain the error on the testing accuracy, bootstrapping was utilized. § METHODS All modeling was done using scikit-learn, a Python-based toolkit for simple and efficient machine learning, data mining and data analysis <cit.>. §.§ Individual Models Five simple models were chosen that presented fairly high discriminative power on their own: Logistic Regression, K-Nearest Neighbors (KNN), Random Forests, Gradient Boosting, and Support Vector Machine (SVM). Other models such as Naive Bayes were tried, but did not produce accurate results compared to the five previous models. Each model had a few hyperparameters associated with it; the five models contained a total of 10 hyperparameters. Using grid search to tune all the hyperparameters simultaneously would be intractable, since testing even just 10 values for each would produce 10^10 evaluations. Instead, each model was tuned separately, with the result being shorter tuning times that were feasible on a laptop computer. §.§ Ensemble Model After training, the five previously described models were set to output probabilities p. These probabilities were then fed into the ensemble model (see Figure <ref>), which took an unweighted average of the inputs to obtain p̄. Initially, as is typical, the decision boundary T was set to 0.5. This, however, led to a very poor recall rate on the diabetic patients. Because of this observation of very high type II error (i.e.
low recall), and considering that it is of greater importance to identify a diabetic than to misidentify a healthy patient, the decision boundary T was adjusted to obtain a more desirable recall rate. § RESULTS §.§ Overall Discriminative Power of All Models The overall discriminative power of the models is shown by the Receiver Operating Characteristic (ROC) curves in Figure <ref>. From the figure it can be seen that the Gradient Boosting Classifier performs best, with an AUC of 0.84. Random Forests also performs exceptionally well. The other three classifiers were noticeably worse at classification, with the worst performance coming from the KNN. The AUCs of each individual model, as well as other metrics, can be found in Table <ref>. Unfortunately, the ensemble model failed to significantly improve performance. In fact, it was less accurate than the Gradient Boosting Classifier as measured by AUC, although the difference between the two was very small, as seen in the performance metrics in Table <ref>. §.§ Recall and the Decision Boundary T The recall rate for diabetics is defined as the number of diabetics identified over the total number of diabetics in the sample. The recall rate is a function of the decision boundary T, and is plotted for the best performing classifier, Gradient Boosting, in Figure <ref>. Recall was also plotted for all the models for each class in Figure <ref>. At the default decision boundary of T = 0.5, the recall rate for diabetics was only 0.35, meaning that only 35% of the patients who in fact have diabetes are classified as diabetic. Since our objective is to maximize our ability to identify diabetics, we decided to shift the decision boundary to a point which would allow us to identify diabetics at least 75% of the time. To achieve this, we shifted the decision boundary rightward to T = 0.78, where the recall rate for diabetics was 0.75 and, coincidentally, the recall rate for non-diabetics was also 0.75. This means that by shifting the decision boundary we were able to identify many more diabetics, only having to sacrifice a small decrease in recall rate of 18% for the non-diabetics. §.§ Testing Error In order to obtain the error on the test accuracy results, bootstrapping was performed during training of the best performing model, the Gradient Boosting Classifier. This produced N_boot classifiers, all having slightly different discriminative abilities. To show the error in classification, ROC curves with continuous upper and lower error bars are shown in Figure <ref>. This figure plots three ROC curves: the middle one is the average ROC curve for all the N_boot trained models, while the lower and upper curves are the 2.5% and 97.5% empirical quantiles of the bootstrap sample (i.e. 95% confidence intervals). To obtain this distribution, N_boot = 1000 models were trained. The mean AUC from bootstrapping was 0.83, with lower and upper confidence intervals of 0.82 and 0.84. This indicates that the training is not highly sensitive to the training data. § DISCUSSION AND APPLICATIONS §.§ Model Performance Five models were used to classify diabetics and non-diabetics based on survey data collected from NHANES. These five models were given equal weighting in an ensemble model that used the probabilistic output of each model. From the ROC curves of all the models (seen in Figure <ref>) it was found that the best performing model (the highest AUC, seen in Table <ref>) was the Gradient Boosting Classifier, which actually beat out the ensemble method.
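The unweighted probability averaging and the recall-driven choice of the decision boundary described above can be sketched in scikit-learn as follows. This is a minimal illustration rather than the authors' exact pipeline: the synthetic data, the untuned model settings and the threshold search are stand-ins for the NHANES features and the tuned models.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# Stand-in data with a ~19% positive (diabetic) class, mimicking the 5515-sample set.
X, y = make_classification(n_samples=5515, n_features=16, n_informative=8,
                           weights=[0.81, 0.19], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = [LogisticRegression(max_iter=1000),
          RandomForestClassifier(n_estimators=200, random_state=0),
          GradientBoostingClassifier(random_state=0)]
probs = [m.fit(X_tr, y_tr).predict_proba(X_te)[:, 1] for m in models]
p_bar = np.mean(probs, axis=0)  # unweighted ensemble probability of the diabetic class

# Sweep the decision boundary from high to low and keep the largest threshold
# whose diabetic recall reaches 0.75 (with p_bar giving the diabetic class
# probability, lowering the threshold raises diabetic recall).
for T in np.linspace(0.95, 0.05, 181):
    y_pred = (p_bar >= T).astype(int)
    if recall_score(y_te, y_pred) >= 0.75:
        print(f"T = {T:.2f}, diabetic recall = {recall_score(y_te, y_pred):.2f}, "
              f"non-diabetic recall = {recall_score(y_te, y_pred, pos_label=0):.2f}")
        break

Note that the numerical value T = 0.78 quoted above depends on which class the averaged probability refers to; if p̄ is defined for the healthy class, shifting the boundary rightward plays the same role as lowering it here.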
At first glance, it is surprising that the lone Gradient Boosting Classifier performed best, as it was thought that the wisdom of each model would combine into the best classifier through an ensemble method. This would have been consistent with the theory of the `wisdom of crowds.' The poorer performance of the ensemble method could be due to a few factors. The first is that there was inadequate hyperparameter tuning. To avoid the need to tune 10 hyperparameters simultaneously, which would have taken an exorbitant amount of time, we instead chose to tune each model separately. In doing this we prevented the ensemble model from really using the best features of each model, as we did not allow for the tuning of hyperparameters based on ensemble performance. An additional hyperparameter that should have been optimized was the weighting of each model. Our ensemble model weighted each model equally, which may have given too much weight to poorer performing models such as the KNN Classifier (the AUC for this model was much lower than the AUC of the other four models). In the future, if computational time were not an issue, simultaneous tuning of hyperparameters would be carried out. Additional performance enhancement could also be achieved by implementing a `stacked model,' which is essentially a two-layered model that would use the output of each individual model as a feature to train a lower dimensional model. One of our preprocessing steps was to impute values for missing data. This is particularly important because >25% of cells are missing values. For numerical features (e.g. BMI and height) we calculated the mean from the training data and assigned it to missing fields in both training and test sets. For categorical variables (e.g. income bracket and alcohol use) we assigned missing values the most common label from the training set. Further performance enhancement could be obtained by imputing missing data through matrix factorization, or by building a model from the available data to predict the missing values of a given feature. The top AUCs from the models presented in this paper were consistent with the work of <cit.>, who achieved an AUC of 0.83 for a well-tuned support vector machine. Our consistency with this previous study indicates that 0.83 AUC may be a hard upper limit on the discriminative power of models trained on the NHANES data for the purpose of predicting diabetes using simple classification techniques such as those discussed in the present work and in <cit.>. However, considering that only marginal effort was made to impute data in either work, clever data imputation may present the best path for increasing performance. §.§ Applications When creating classification models for disease detection it is always important to keep the end use in mind. The classification model presented here is used for the purpose of early detection of type II diabetes based on easily obtainable survey data. The features are responses to questions and simple examination measurements shown in Table <ref>. A decision boundary of T = 0.78 was chosen that produced a recall rate of 75% for diabetics, meaning that 75 out of 100 diabetics were correctly identified. This was considered an acceptable outcome from a diagnostic standpoint. However, the caveat is that to ensure a 75% recall rate in diabetics, many non-diabetics will be identified as diabetics.
To give specific numbers: with a recall rate of 75% for non-diabetics, we can eliminate 5515 × 0.81 × 0.75 = 3350 patients from the total population of 5515 (0.81 is the proportion of non-diabetics in the entire training set). This would leave 2165 patients, which is the number of patients that would need to be notified of diabetes risk in order to ensure that 75% of the actual diabetics are notified. This may provide a significant decrease in overall healthcare cost by increasing preventative care for the notified patients. With this possibility in mind, a public health expert can decide whether this classification scheme is cost-effective. § CONCLUSIONS Five separate models were used to classify diabetics vs. non-diabetics. The output of these models was used to create an ensemble model, which had an AUC of 0.834. The best performing model was not the ensemble model, but rather the Gradient Boosting Classifier, which obtained an AUC of 0.84. This AUC is consistent with previous findings and indicates that reasonable classification performance can be expected when using survey data. §.§.§ Acknowledgments We would like to thank Professor Alexei Efros (UC Berkeley) and Professor Isabelle Guyon (visiting professor at UC Berkeley) for their input to this work.
http://arxiv.org/abs/1708.07480v1
{ "authors": [ "John Semerdjian", "Spencer Frank" ], "categories": [ "stat.ML" ], "primary_category": "stat.ML", "published": "20170824162048", "title": "An Ensemble Classifier for Predicting the Onset of Type II Diabetes" }
A hydrodynamic bifurcation in electroosmotically-driven periodic flows Ronald G. Larson December 30, 2023 ====================================================================== Previous studies on the rotation of Sun-like stars revealed that the rotational rates of young stars converge towards a well-defined evolution that follows a power-law decay. It seems, however, that some binary stars do not obey this relation, often by displaying enhanced rotational rates and activity. In the Solar Twin Planet Search program, we observed several solar twin binaries, and found a multiplicity fraction of 42%± 6% in the whole sample; moreover, at least three of these binaries (HIP 19911, HIP 67620 and HIP 103983) clearly exhibit the aforementioned anomalies. We investigated the configuration of the binaries in the program, and discovered new companions for HIP 6407, HIP 54582, HIP 62039 and HIP 30037, of which the latter is orbited by a 0.06 M_⊙ brown dwarf in a 1-month long orbit. We report the orbital parameters of the systems with well-sampled orbits and, in addition, the lower limits of parameters for the companions that only display a curvature in their radial velocities. For the linear trend binaries, we report an estimate of the masses of their companions when their observed separation is available, and a minimum mass otherwise. We conclude that solar twin binaries with low-mass stellar companions at moderate orbital periods do not display signs of a distinct rotational evolution when compared to single stars. We confirm that the three peculiar stars are double-lined binaries, and that their companions are polluting their spectra, which explains the observed anomalies. stars: fundamental parameters – stars: solar-type – stars: rotation – binaries: spectroscopic – binaries: visual § INTRODUCTION It is known that at least half of the stars in the Galaxy are multiple systems containing two or more stars orbiting each other <cit.>, thus in many surveys and large samples of stars, binaries are ubiquitous. This is in contrast with the Sun, which is a single star, and attempts to find a faint stellar companion orbiting it have rendered no results thus far <cit.>. Many studies avoid contamination by binaries in their samples, the main reason being that we do not understand well how binaries evolve and how the presence of a companion affects the primary star. However, with the development of instruments with higher spatial and spectral resolution and coronagraphs, it is now possible to better probe the secondary component of such systems. We have been carrying out a radial velocity planet search focused on solar twins using HARPS <cit.>. The definition of solar twin we use is a star with stellar parameters inside the ranges 5777 ± 100 K, 4.44 ± 0.10 dex(cgs) and 0.0 ± 0.1 dex, respectively, for T_eff, logg and [Fe/H]. In total, 81 solar twins[Some of the stars in our sample do not fit the strict definition of solar twins because one or more parameters are off the definition intervals, but they are still very close solar analogues.] were observed on HARPS. As part of our survey we previously identified 16 clear spectroscopic binaries (SB) <cit.>. We report here the identification of four additional SBs (HIP 14501, HIP 18844, HIP 65708 and HIP 83276) and the withdrawal of HIP 43297 and HIP 64673, which are unlikely to host stellar-mass companions, bringing the number of solar twin SBs to 18.
Most of these SBs are single-lined – they do not contain a second component in their spectral lines – meaning that their companions are either faint stars or located outside the ∼1 aperture of the HARPS spectrograph. We confirm that there are three solar twins with spectra contaminated by a relatively bright companion (see discussion in Section <ref>). In our sample there are an additional 18 visual binaries[We define as visual companions those with separations larger than 1] or multiple systems, of which HIP 6407 and HIP 18844 have wide companions <cit.> as well as the spectroscopic companions reported here. In <cit.> we saw that the single or visual binary solar twins display a rotational evolution that can be described with a relation between stellar age t and rotational velocity v_rot in the form of a power law plus a constant: v_rot = v_f + m t^-b, where v_f, m and b are free parameters to be fit with observations. This relation is explained by loss of angular momentum due to magnetized winds <cit.>, and the index b reflects the geometry of the stellar magnetic field <cit.>. There are at least two solar twin binaries that display enhanced rotational velocities – above 2σ from the expected – and activity for their ages: HIP 19911 and HIP 67620; if we consider the revised age for HIP 103983 (Spina et al., in preparation), it can also be considered a fast rotator for its age. Besides the enhanced rotational velocities and higher chromospheric activity <cit.>, some of these binaries also display peculiar chemical abundances <cit.>. As pointed out by <cit.>, the ultra-depletion of beryllium, which is observed on HIP 64150, could be explained by the interaction of the main star with the progenitor of the white dwarf companion. In addition to HIP 64150, the confirmed binaries HIP 19911 and HIP 67620 also display clearly enhanced [Y/Mg] abundances <cit.>. One interesting aspect about stars with enhanced activity and rotation is that these characteristics were hypothesized to be the result of dynamo action from close-in giant planets <cit.>. In fact, some of our early results pointed out that the star HIP 68468, for which we inferred two exoplanet candidates <cit.>, had an enhanced rotational velocity when compared to other solar twins of the same age. However, a more careful analysis showed that the enhancement was instead a contribution of macroturbulence. Another explanation for these enhancements is magnetic interactions with either a close-in or an eccentric giant planet <cit.>, but recent results obtained by, e.g., <cit.> and <cit.> show that they cannot explain such anomalies. In light of these intriguing results, we sought to better understand the nature of these solar twin multiple systems by studying their orbital parameters, and to use them to search for explanations of the observed anomalies, especially stellar rotation. The orbital parameters can be estimated from the radial velocity data of the stars <cit.>, with the quality of the results depending strongly on the time coverage of the data. § RADIAL VELOCITIES Our solar twins HARPS data[Based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere under ESO programs 188.C-0265, 183.D-0729, 292.C-5004, 077.C-0364, 072.C-0488, 092.C-0721, 093.C-0409, 183.C-0972, 192.C-0852, 091.C-0936, 089.C-0732, 091.C-0034, 076.C-0155, 185.D-0056, 074.C-0364, 075.C-0332, 089.C-0415, 60.A-9036, 075.C-0202, 192.C-0224, 090.C-0421 and 088.C-0323.]
Their radial velocities (RV) are automatically measured from the HARPS Data Reduction Software (see Table <ref>), and the noise limit of the instrument generally remains around 1 m s^-1. In order to broaden the coverage of our RV data, we also obtained more datasets that were available in the literature and public databases, including the HARPS archival data for other programs.The mass and other stellar parameters of the solar twins were estimated with high precision using the combined HARPS spectra and differential analysis owing to their similarity with the Sun <cit.>. The ages for the solar twins were obtained using Yonsei-Yale isochrones <cit.> and probability distribution functions as described in <cit.> and in . The full description and discussion of the stellar parameters of the HARPS sample are going to be presented in a forthcoming publication (Spina et al., in preparation).The additional radial velocities data obtained from online databases and the literature are summarized in Table <ref>. These are necessary to increase the time span of the observations to include as many orbital phases as possible at the cost of additional parameters to optimize for (see Section <ref>).§ METHODS The variation of radial velocities of a star in binary or multiple system stems from the gravitational interaction between the observed star and its companions. For systems with stellar or substellar masses, the variation of radial velocities can be completely explained by the Keplerian laws of planetary motion. For the sake of consistency, we will use here the definitions of orbital parameters as presented in .To completely characterize the orbital motion of a binary system from the measured radial velocities of the main star, we need to obtain the following parameters: the semi-amplitude of the radial velocities K, the orbital period T, the time of periastron passage t_0, the argument of periapse ω and the eccentricity e of the orbit. In order to estimate the minimum mass m sini of the companion and the semi-major axis a of the orbit, we need to know the mass M of the main star.Due to their non-negative nature, the parameters K and T are usually estimated in logarithmic scale in order to eliminate the use of search bounds. Additionally, for orbits that are approximately circular, the value of ω may become poorly defined. In these cases, a change of parametrization may be necessary to better constrain them. <cit.>, for instance, suggest using √(e)cosω and √(e)sinω (which we refer to as the EXOFAST parametrization) instead of ω and e to circumvent this problem, which also can help improve convergence time.One issue that affects the radial velocities method is the contamination by stellar activity <cit.>. This activity distorts the spectral lines <cit.>, which in turn produces artificial RV variations that can mimic the presence of a massive companion orbiting the star. More active stars are expected to have RV variations with larger amplitudes and a shorter activity cycle period <cit.>. For most binaries in our sample, the contamination by activity in the estimation of orbital parameters is negligible; the cases where this is not applicable are discussed in detail. §.§ Binaries with well-sampled orbits For binaries with orbital periods T ≲ 15 yr, usually there are enough RV data measured to observe a complete phase. 
In these cases, the natural logarithm of the likelihood of observing radial velocities 𝐲 on a specific instrument, given the Julian dates 𝐱 of the observations, their uncertainties σ and the orbital parameters 𝐩_orb, is defined as:

ln p(𝐲|𝐱, σ, 𝐩_orb) = -1/2 ∑_n [ (y_n - y_model)²/σ_n² + ln(2πσ_n²) ] ,

where y_n are the RV datapoints, y_model are the model RV points for a given set of orbital parameters, and σ_n are the RV point-by-point uncertainties. The RV models are computed from Eq. 65 in <cit.>:

v_r = γ + K ( cos(ω + f) + e cos ω ) ,

where f is the true anomaly and γ is the systemic velocity (usually including the instrumental offset). The true anomaly depends on e and the eccentric anomaly E:

cos f = (1/e) [ (1 - e²)/(1 - e cos E) - 1 ] ;

the eccentric anomaly, in turn, depends on T, t_0 and the time t through the so-called Kepler's equation:

E - e sin E = (2π/T)(t - t_0) .

The best-fit orbital parameters are obtained by minimizing the negative of Eq. <ref> with the Nelder-Mead algorithm implementation from <cit.>. Because different instruments have different instrumental offsets, the use of additional RV data from other programs requires the estimation of an extra value of γ for each instrument. The uncertainties of the orbital parameters are estimated using <cit.>, an implementation of the Affine Invariant Markov chain Monte Carlo Ensemble sampler <cit.>, using flat priors for all parameters in both the standard and the EXOFAST parametrizations. These routines were implemented in the Python package radial[Available at <https://github.com/RogueAstro/radial>.], which is openly available online. The uncertainties in m sini and a quoted in our results already take into account the uncertainties in the stellar masses of the solar twins. §.§ Binaries with partial orbits For the binary systems with long periods (typically 20 years or more), it is possible that the time span of the observations does not allow for a full coverage of at least one phase of the orbital motion. In these cases, the estimation of the orbital parameters renders a number of possible solutions, which precludes us from firmly constraining the configuration of the system. Nevertheless, RV data containing a curvature or one inflection allow us to place lower limits on K, T and m sini, whilst leaving e and ω completely unconstrained. When the RV data are limited but comprise two inflections, it may be possible to use the methods from Section <ref> to constrain the orbital parameters, albeit with large uncertainties. For stars with very large orbital periods (T ≳ 100 yr), the variation of radial velocities may be present in the form of a simple linear trend. In these cases, it is still possible to obtain an estimate of the mass of the companion – a valuable piece of information about it: <cit.> describes a statistical approach to extract the sub-stellar companion mass when the only information available from radial velocities is the inclination of the linear trend, provided information about the angular separation of the system is also available. In this approach, we need to adopt reasonable prior probability density functions (PDF) for the eccentricity e, the longitude of periastron ϖ, the phase ϕ and the inclination i of the orbital plane. As in <cit.>, we adopt the following PDFs: p(i) = sin i, p(e) = 2e, and flat distributions for ϖ and ϕ. We sample the PDFs using <cit.>, with 20 walkers and 10000 steps; the first 500 burn-in steps are discarded. From these samples, we compute the corresponding companion masses and their posterior distribution.
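A minimal sketch of this sampling step, assuming the modern (v3) emcee interface, is given below. Here companion_mass() is a hypothetical placeholder for the trend-plus-separation mass relation of the statistical approach (the published relation is not reproduced), so only the prior sampling and the log-binned histogram mirror the procedure described above.

import emcee
import numpy as np

def log_prior(theta):
    # Priors described above: p(i) = sin(i), p(e) = 2e, flat in varpi and phi.
    e, i, varpi, phi = theta
    if 0.0 < e < 1.0 and 0.0 < i < np.pi / 2 and 0.0 <= varpi < 2 * np.pi \
            and 0.0 <= phi < 2 * np.pi:
        return np.log(np.sin(i)) + np.log(2.0 * e)
    return -np.inf

ndim, nwalkers, nsteps = 4, 20, 10000
p0 = np.column_stack([np.random.uniform(lo, hi, nwalkers) for lo, hi in
                      [(0.01, 0.99), (0.1, 1.5), (0.0, 2 * np.pi), (0.0, 2 * np.pi)]])
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prior)
sampler.run_mcmc(p0, nsteps)
samples = sampler.get_chain(discard=500, flat=True)  # drop 500 burn-in steps

# Hypothetical stand-in, NOT the published formula; the real relation uses the
# measured trend dv/dt, the projected separation and all four sampled angles.
def companion_mass(e, i, varpi, phi, dvdt=5.0, rho_au=20.0):
    return 1e-3 * dvdt * rho_au**2 * (1.0 - e * np.cos(phi)) / np.sin(i)

masses = companion_mass(*samples.T)
hist, edges = np.histogram(masses, bins=np.logspace(np.log10(masses.min()),
                                                    np.log10(masses.max()), 50))
peak = np.sqrt(edges[hist.argmax()] * edges[hist.argmax() + 1])  # central bin of the peak

The central bin of the highest peak of the log-binned histogram is what is adopted as the best mass estimate, as described next.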
This posterior usually displays a very strong peak and long tails towards low and high masses which can be attributed to highly unlikely orbital parameters (see Fig. <ref> for an example). In our results, we consider that the best estimates for the companion masses are the central bin of the highest peak of the distribution in a histogram with log-space bin widths of about 0.145 dex(M_⊙).When no adaptive optics (AO) imaging data are available for the stars with a linear trend in their RVs, the most conservative approach is to provide the minimum mass for the putative companion. In the case of a linear trend, the lowest mass is produced when e = 0.5, ω = π/2 and sini = 1 <cit.>, yielding m_min≈( 0.0164 M_Jup) ( τ/yr)^4/3| dv/dt/m s^-1 yr^-1| ( M/M_⊙)^2/3, where τ is 1.25 multiplied by the time span of the radial velocities and dv/dt is the inclination of the linear trend.§ RESULTS We discovered new, short-period companions for the stars HIP 6407 and HIP 30037 (see Fig. <ref>) and new long-period companions for HIP 54582 and HIP 62039, and updated or reproduced the parameters of several other known binaries that were observed in our program (see Figs. <ref> and <ref>). We briefly discuss below each star, pointing out the most interesting results, inconsistencies and questions that are still open about each of them. The orbital parameters of the binaries with well-sampled orbits in their RV data are presented in Table <ref> and the systems with partial orbits are reported in Tables <ref> and <ref>. §.§ Withdrawn binary candidates In , we showed that HIP 43297 had a rotational velocity v sini higher than expected for its age. Moreover, its radial velocities had variations that hinted for one or more companions orbiting it. We carefully analyzed the RVs and concluded that the periodic (T = 3.8 yr) signal observed is highly correlated (Pearson R = 0.893) with the activity S-index of the star <cit.>. In addition, we tentatively fitted a linear trend to the combined RVs from HARPS, ELODIE and SOPHIE, and obtained an inclination of 4.53 ± 0.04 m s^-1 yr^-1, but further monitoring of the system is required to infer the presence of a long-period spectroscopic companion. The revised stellar age for HIP 43297 yields 1.85 ± 0.50 Gyr (Spina et al., in preparation), which explains the high rotational velocity and activity.The solar twin HIP 64673 displays significant fluctuations in its radial velocities, but they do not correlate with its activity index; the data covers approximately 5 years of RV monitoring and displays an amplitude > 20 m s^-1. If confirmed to be caused by massive companions, the RV variations of both HIP 43297 and HIP 64673 suggest substellar masses for the most likely orbital configurations. These stars are, thus, removed from the binaries sample of the Solar Twin Planet Search program. §.§ Solar twins with new companions HIP 6407: This is a known binary system located 58 pc away from the solar system <cit.>, possessing a very low-mass (0.073 M_⊙) L2-type companion separated by 44.8 (2222 AU), as reported by <cit.>. In this study, we report the detection of a new close-in low-mass companion with msini = 0.12 M_⊙ on a very eccentric orbit (e = 0.67) with a = 3 AU and an orbital period of approximately 5 years. As expected, the long-period companion does not appear in the RV data as a linear trend.HIP 30037: The most compact binary system in our sample, hosting a brown dwarf companion orbiting the main star with a period of 31 days. 
The high precision of its parameters owes to the wide time span of observations, which covered several orbits. This is one of the first detections of a close-in brown dwarf orbiting a confirmed solar twin[There are at least 4 solar twin candidates with a close-in brown dwarf companion listed in table A.1 in <cit.>.]. HIP 30037 is a very quiet star, displaying no excessive jitter noise in its radial velocities. We ran stellar evolution models with [Modules for Experiments in Stellar Astrophysics, available at <http://mesa.sourceforge.net>] <cit.> to test the hypothesis of the influence of tidal acceleration caused by the companion on a tight orbit, and found that, for the mass and period of the companion, we should expect no influence in the rotational velocity.HIP 54582: RV Curvature only. There are no reports of binarity in the literature. The slight curvature in the RVs of this star is only visible when we combine the HARPS data and the Lick Planet Search archival data. Owing to the absence of an inflection point, the orbital parameters of this system are highly unconstrained. We found that an orbit with e ≈ 0.2 produces the least massive companion and shortest orbital period (m sini = 0.03 M_⊙ and T = 102 yr).HIP 62039: Linear trend. There are no reports of visually detected close-in (ρ < 2) companions around it. This can be attributed to: i) low luminosity companion, which is possible if it is a white dwarf or a giant planet, and ii) unfavorable longitude of periapse during the observation windows. By using Eq. <ref>, we estimate that the minimum mass of the companion is 19 M_Jup. §.§ The peculiar binaries §.§.§ HIP 19911 This is one of the main outlier stars in the overall sample of solar twins in regards to its rotation and activity, which are visibly enhanced for both the previous and revised ages (; ; Spina et al., in preparation). For the estimation of orbital parameters reported below, we used only the LCES HIRES/Keck radial velocities, because there are too few HARPS data points to justify the introduction of an extra source of uncertainties (the HARPS points are, however, plotted in Fig. <ref> for reference). When using the HARPS data, although the solution changes slightly, our conclusions about the system remain the same.The orbital solution of HIP 19911 renders a 0.31 M_⊙ companion in a highly eccentric orbit (e = 0.82, the highest in our sample), with period T = 5.7 yr. Visual scrutiny reveals what seems to be another signal with large amplitudein the residuals of this fit (> 250 m s^-1, see Fig. <ref>); the periodogram of the residuals shows a very clear peak near the orbital period of the stellar companion.The cross-correlation function (CCF) plots for the HARPS spectra of HIP 19911 display a significant asymmetry – longer tail in the blue side – for the observations between October 2011 and February 2012, which suggests that the companion is contaminating the spectra. Upon visual inspection of the archival HIRES spectra[Available at <http://nexsci.caltech.edu/archives/koa/>.] taken on 17 January 2014, which is when we expect the largest RV difference between the main star and its companion, we saw a clear contamination of the spectrum by the companion (see Fig. <ref>). This contamination could explain the large residuals of the orbital solution, as it introduces noise to the measured radial velocities. The double-lines also explain the inferred high rotational velocity of HIP 19911, since they introduce extra broadening to the spectral lines used to measure rotation. 
The presence of a bright companion may also affect estimates of chemical abundances, which elucidates the yttrium abundance anomaly <cit.>. The double-lined nature of this system is not observed in the HARPS spectra due to an unfavorable observation window. Even at the largest RV separation, we did not detect the Li I line at 6707.75 Å in the HIRES spectrum of the companion. This is expected because M dwarf stars have deeper convection zones, which means they deplete lithium much faster than Sun-like stars. This leads us to conclude that estimates of Li abundance on solar twin binaries using this line do not suffer from strong contamination by their companions; consequently, age estimates with lithium abundances may be more reliable for such binaries than isochronal or gyro ages. Another observational conundrum for this system is that <cit.>, using AO imaging without a coronagraph, reports the detection of a visual companion with orbital period ∼ 12.4 yr (roughly twice the one we estimated), a lower eccentricity (e = 0.1677) and a similar semi-major axis of the orbit (a = 6.17 AU, if we consider a distance of 30.6 pc). Moreover, <cit.> reports that this visual companion has m = 0.85 M_⊙. The most likely explanation is that the observations of <cit.> did in fact detect the spectroscopic companion, but the coarse timing of the observations produced a larger period; the lower eccentricity could be explained by a strong covariance between e and the inclination i. If i is lower, that means the mass of the companion is significantly higher than m sini = 0.316 M_⊙, and that would explain the value obtained by <cit.>. A companion with a mass as large as 0.85 M_⊙ would likely pollute the spectra of HIP 19911, which agrees with our observation that this is an SB II system. If confirmed, this prominent ∼0.85 M_⊙ red dwarf companion could explain the observed activity levels for HIP 19911, since red dwarf stars are expected to be more active than Sun-like stars. §.§.§ HIP 67620 This is a well-known binary and the target with the largest amount of RV data available (see Fig. <ref>). Its orbital parameters have been previously determined by <cit.> and more recently by <cit.> and <cit.>. The orbital parameters we obtained are in good agreement with <cit.>. It has one of the most peculiar rotation rates in our sample (2.77 km s^-1 for an age of 7.18 Gyr), enhanced chromospheric activity <cit.> and an anomalous [Y/Mg] abundance <cit.>. The orbital period of the system is far too long for gravitational interaction to enhance the rotation of the main star through tidal acceleration, thus we should expect the star to evolve similarly to single stars from this point of view. High-resolution imaging of HIP 67620 revealed a companion with V_mag≈ 10 and separations which are consistent with the spectroscopic companion <cit.>. As explained by <cit.>, the presence of a companion with m > 0.55 M_⊙ can produce contaminations of the spectra that introduce noise to the measured RVs; our estimate of m sini for this system is 0.58 M_⊙. These results suggest that, similarly to HIP 19911 but to a lesser degree, the companion of HIP 67620 may be offsetting our estimates for rotational velocity, stellar activity, chemical abundances and isochronal age. We were unable to discern double lines in the HARPS spectra, likely resulting from unfavorable Doppler separations (observations range from February 2012 to March 2013).
However, an analysis of the CCF of this star shows slight asymmetries in the line profiles of the HARPS spectra, which indicates a possible contamination by the companion. <cit.> reported HIP 67620 as a double-lined binary using spectra taken at high resolution (R ≈ 60,000) in February 2014 and July 2015. As expected due to the short time coverage of the HARPS spectra, we did not see any correlation between the bisector inverse slope <cit.> and the radial velocities of HIP 67620. <cit.> found an additional signal in the periodogram of HIP 67620 at 532 d, which could be fit with a 1 M_Jup planet, bringing down the rms of the fit by a factor of 2. However, we did not find any significant peak in the periodogram of the residuals of the radial velocities for HIP 67620. §.§.§ HIP 103983 The revised age for HIP 103983 (4.9 ± 0.9 Gyr; Spina et al., in preparation) renders this system an abnormally fast rotator (3.38 km s^-1) for its age. However, upon a careful inspection of the HARPS data obtained at different dates, we identified that the spectrum from 2015 July 27 displays clearly visible double lines, albeit not as well separated as those observed in the HIRES spectra of HIP 19911 (see Fig. <ref>). No other anomalies besides enhanced rotation were inferred for this system. The CCF plots of the HARPS spectra show clear longer tails towards the blue side for most observations. In <cit.> we reported distortions in the combined spectra of HIP 103983; this likely results from the combination of the spectra at orbital phases in which the Doppler separation between the binary components is large. Since the observing windows of the HARPS spectra of HIP 19911 and HIP 67620 do not cover large RV separations (see Fig. <ref>), the same effect is not seen in the combined spectra of these stars. This effect also explains why HIP 103983 is an outlier in fig. 4 of <cit.>. Although we have limited RV data, the simulations converge towards a well-defined solution instead of allowing longer periods, as these produce larger residuals. It is important, however, to keep monitoring the radial velocities of this system in order to confirm that the most recent data points are in fact a second inflection in the radial velocities. The residuals of the fit for the HIRES spectra are on the order of 100 m s^-1, which likely results from the contamination by a bright companion. <cit.> reported a 0.91 M_⊙ visual companion at a separation of 0.093, which is consistent with the spectroscopic semi-major axis we estimated: 0.149 for a distance of 65.7 pc <cit.>. §.§ Other binaries with updated orbital parameters Among the known binaries in the solar twins sample, five of them display curvature in their RV data, which allows the estimation of limits for their orbital parameters (see Table <ref> and Fig. <ref>). Some of the linear trend binaries observed in our HARPS Solar Twin Planet Search program are targets with large potential for follow-up studies. For the companions with visual detection, we were able to estimate their most likely mass (see Table <ref>). HIP 14501: Linear trend. Its companion is reported by <cit.> as the first directly imaged T dwarf that produces a measurable Doppler acceleration in the primary star. Using a low-resolution direct spectrum of the companion, <cit.> estimated a model-dependent mass of 56.7 M_Jup. Using the HARPS and HIRES/Keck RV data and the observed separation of 1.653 <cit.>, we found that the most likely value of the companion mass is 0.043 M_⊙ (45 M_Jup), which agrees with the mass obtained by <cit.>.
The most recent HARPS data hint at an inflection point in the orbit of HIP 14501 B (see Fig. <ref>), but further RV monitoring of the system is necessary to confirm it. HIP 18844: Linear trend. It is listed as a multiple system containing a closer-in low-mass stellar companion (estimated 0.06 M_⊙, which agrees with our most likely mass) and orbital period T = 6.5 yr <cit.>. For the companion farther away, <cit.> reported a minimum orbital period of ∼ 195 yr and msini = 0.33 M_⊙, with a separation of 29 in 1941 (∼750 AU for a distance of 26 pc). HIP 54102: RV curvature only. It is listed as a proper motion binary by <cit.>, but there is no other information about the companions in the literature. Its eccentricity is completely unconstrained due to lack of RV coverage. We estimate that its companion's minimum mass is 12.6 M_Jup, with an orbital period larger than 14 years. HIP 64150: Linear trend. The most likely companion mass obtained by the method explained in Section <ref> renders an estimate of 0.26 M_⊙, as seen in Fig. <ref>. The higher mass (0.54 M_⊙) obtained by <cit.> and <cit.> can be attributed to less likely orbital configurations, but it is still inside the 1-σ confidence interval of the RV+imaging mass estimate. The main star displays clear signals of atmosphere pollution caused by mass transfer from its companion during the red giant phase <cit.>, characterizing the only confirmed blue straggler of our sample. The measured projected separation of the binary system is 18.1 AU <cit.>, which indicates that even for such a wide system the amount of mass transferred is still large enough to produce measurable differences in chemical abundances. It seems, however, that the amount of angular momentum transfer was not enough to produce significant enhancement in the rotation rate and activity of the solar twin. It is also important to note that the isochronal age measured for this system <cit.> has a better agreement with the white dwarf (WD) cooling age estimated by <cit.> than previous estimates, illustrating the importance of studying these Sirius-like systems to test the various methods of age estimation. HIP 65708: This star has previously been reported as a single-lined spectroscopic binary with an orbital solution <cit.>. Here we update this solution by leveraging the extremely precise radial velocities measured in the Lick Planet Search program and with the HARPS spectrograph. The minimum mass of the companion is 0.167 M_⊙, indicating it is a red dwarf, orbiting at less than 1 AU with a slight eccentricity of 0.31. Our results agree with the previous orbital solution, which was based solely on data with uncertainties two orders of magnitude higher than the most recent data from HARPS and the Lick Planet Search. HIP 72043: RV curvature only. Similarly to HIP 54102, it is listed as a proper motion binary and we could not constrain its eccentricity. A fairly massive (> 0.5 M_⊙) companion is inferred at a very large period; this fit suggests that the longitude of periapse of the companion of HIP 72043 is currently at an unfavorable position for visual detection. HIP 73241: RV curvature only. The companion's orbit is eccentric enough to allow an estimation of the minimum eccentricity; its companion has previously been confirmed by <cit.> and visually detected by <cit.> with a separation of 0.318. In <cit.> we listed this star as having an unusually high rotation, but here we revise this conclusion and list HIP 73241 as a candidate peculiar rotator because its v sini is less than 2σ above the expected value for its age.
Similarly to HIP 67620, this peculiarity, if real, could also be explained by contamination by a bright companion, since we determined that the minimum companion mass m sini > 0.49 M_⊙. HIP 79578: The companion is a well-defined 0.10 M_⊙ red dwarf orbiting the main star approximately every 18 years in a fairly eccentric orbit (e = 0.33). The orbital parameters we obtained differ significantly from the ones obtained by <cit.>, by more than 10%, except for the eccentricity; also in contrast, <cit.> report it as a brown dwarf companion. The fit for this binary displays residuals of up to 30 m s^-1 for the AATPS radial velocities, and the periodogram of these residuals shows a peak near the period 725 days. When we fit an extra object with msini = 0.70 M_Jup at this period (a = 1.62 AU and e = 0.87), it improves the general fit of the RVs by a factor of 7. It is important to mention, however, that there are only 17 data points in the AATPS dataset, and the HARPS dataset does not display large residuals for a single companion fit. We thus need more observations to securely infer the configuration of this binary system, and whether it truly has an extra substellar companion at a shorter period. HIP 81746: This is another high-eccentricity (e = 0.7) binary that does not display clear anomalies in its rotation and activity. Its companion is a 0.1 M_⊙ red dwarf orbiting the main star every 9 years. The orbital parameters we obtained are in good agreement with the ones reported by <cit.>. HIP 83276: RV curvature only. Although the HARPS radial velocities suggest the presence of a stellar-mass companion, we do not have enough RV data points to infer any information about the orbital parameters of the system. Using radial velocities measured with the CORAVEL spectrograph, <cit.> found the companion has msini=0.24 M_⊙, e=0.185 and an orbital period of 386.72 days. HIP 87769: RV curvature only. It is reported as a binary system by <cit.> but, similarly to HIP 54102, lacks an inflection point in its RV data from HARPS, which spans 3.3 yr. There is a wide range of possible orbital solutions that suggest m sini varying from brown dwarf masses to ∼ 1 M_⊙. Higher eccentricities (e > 0.8) can be ruled out as unlikely because they suggest a companion with m sini ≈ 1 M_⊙ at an orbital period of more than 500 yr and a > 80 AU. §.§ Considerations on multiplicity statistics Although planet search surveys are generally biased against the presence of binaries due to avoiding known compact multiple systems, the fraction of binary or higher-order systems in the whole sample of the Solar Twin Planet Search program is 42%± 6%[Counting stellar and brown dwarf companions. The uncertainty is computed using a bootstrap resampling analysis with 10,000 iterations, similarly to <cit.>. In each iteration, a new set of 81 solar twins is randomly drawn from the original sample, allowing stars to be selected more than once; see the sketch below.]. This value agrees with previous multiplicity fractions reported by, e.g., <cit.> and <cit.>; however, it is significantly lower than the 58% multiplicity factor for solar-type stars reported by <cit.>, who argues that previous results are subject to selection effects and are thus biased against the presence of multiple systems. The orbital period vs. mass ratio plot of companions in the Solar Twin Planet Search is shown in Fig. <ref>.
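The bootstrap uncertainty on the multiplicity fraction quoted above can be sketched as follows; this is a minimal illustration in which the binary/multiple flags are stand-ins for the actual classification of the 81 stars.

import numpy as np

rng = np.random.default_rng(0)
is_multiple = np.zeros(81, dtype=bool)
is_multiple[:34] = True  # 34/81 ~ 42% flagged as binaries/multiples (stand-in flags)

# Resample the 81 solar twins with replacement (repeats allowed), 10,000 times,
# and take the spread of the resampled fractions as the uncertainty.
fractions = np.array([rng.choice(is_multiple, size=81, replace=True).mean()
                      for _ in range(10_000)])
print(f"multiplicity fraction = {is_multiple.mean():.2f} +/- {fractions.std():.2f}")

For a 42% fraction out of 81 stars, the bootstrap spread comes out near 0.06, consistent with the quoted 42% ± 6%.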
A comparison with the sample of solar-type stars from <cit.> reveals two important biases in our sample: i) mass ratios are mostly below 0.3 because of the selection of targets that do not show large radial velocity variations in previous studies; ii) orbital periods are mostly lower than 30 yr because longer values cannot be constrained from the recent RV surveys targeting solar-type stars with low-mass companions. In such cases, further monitoring of linear trend and RV curvature-only binaries may prove useful to understand the origins of the brown dwarf desert <cit.>. These targets are particularly appealing because the long periods mean that the separation from the main star is large enough to allow us to observe them directly using high-resolution imaging. Previous studies on the period-eccentricity relation for binary stars found that systems with orbital periods below 10 days tend to have eccentricities near zero, while those between 10 and 1000 days follow a roughly flat distribution of eccentricities <cit.>, an effect that is due to the timescales for circularization of orbits. In relation to our sample, with the exception of HIP 30037, HIP 65708 and HIP 83276, all of the binaries we observed have periods longer than 1000 days and eccentricities higher than 0.3, which agrees with the aforementioned findings. According to <cit.>, the distribution of eccentricities of systems with T > 1000 d is a function of energy only, and does not depend on T (see fig. 5 in <cit.>). Interestingly, HIP 30037, which hosts a brown dwarf companion with T = 31.6 d, falls inside the 25–35 day interval of orbital periods found by <cit.> that corresponds to a short stage of evolution of binaries undergoing a fast change in their orbits. § CONCLUSIONS The Solar Twin Planet Search and several other programs observed 81 solar twins using the HARPS spectrograph. In total, 18 of these solar twins are spectroscopic binaries, 18 are visual binaries, and two intersect these categories. We found a multiplicity fraction of 42%± 6% in the whole sample, which is lower than the expected fraction (∼58%) because of selection effects that are generally seen in exoplanet search surveys. We updated or reproduced the solutions of several known binaries, and determined all the orbital parameters of HIP 19911, HIP 65708, HIP 67620, HIP 79578, HIP 81746 and HIP 103983. The stars HIP 43297 and HIP 64673, which we previously reported as binaries, are likely to host long-period giant planets instead of stellar companions. For binaries with partial orbits, we were able to place lower limits on some of their orbital parameters owing to the presence of curvature or an inflection point in their RV data. We estimated the most likely mass of the companions of the binaries that display only linear trends in their RV data. Future work is needed on studying the long-period binaries using photometry data and high-resolution imaging in order to constrain the nature of their companions.
These wide solar twin binaries are prime targets for detailed physical characterization of their companions owing to the favorable separation for AO imaging and the precision with which we can measure the stellar parameters of the main star – this is particularly important for fully convective red dwarf stars and very low-mass companions such as the T dwarf HIP 14501 B, whose evolution and structure are still poorly constrained. Additionally, we reported the detailed discovery of new companions to the following solar twins: HIP 6407, HIP 30037, HIP 54582, and HIP 62039, of which we are able to determine an orbital solution for the first two using radial velocities. The latter two do not have enough RV data to obtain precise orbital parameters, but we can nonetheless estimate their minimum companion masses. We found that these new companions are likely very low-mass, ranging from 0.02 to 0.12 M_⊙ (although stressing that these are lower limits), which should be useful in understanding the origins of the brown dwarf desert in future research. The anomalies and RV residuals observed on HIP 19911, HIP 67620 and HIP 103983 are likely due to contamination of the spectra of the main star by the companion. Although the peculiar stars in our sample are no longer considered blue straggler candidates, it is important to note that the detection of WD companions is particularly important for the study of field Sun-like stars because they allow the estimation of their cooling ages; these are more reliable than isochronal and chromospheric ages in some cases, thus providing robust tests for other age estimate methods. We do not expect that the presence of M dwarf companions contaminates lithium spectral lines in Sun-like stars, thus stellar ages derived from Li abundances may be more reliable for double-lined solar twins. We recommend a revision of the stellar parameters of the peculiar binary stars by analyzing high-resolution spectra at the highest Doppler separations possible, or by using Gaussian processes to disentangle the contaminated spectra <cit.>. We conclude that single-lined solar twin binaries with orbital periods larger than several months and moderate to low eccentricities do not display signals of distinct rotational evolution when compared to single solar twins. The most compact system in our sample, HIP 30037, which hosts a 0.06 M_⊙ brown dwarf companion at an orbital period of 31 days, is in fact one of the quietest stars in the sample (in regard to its activity levels), and is thus a viable target for further efforts in detecting moderate- to long-period circumbinary planets. § ACKNOWLEDGEMENTS LdS acknowledges the financial support from FAPESP grants no. 2016/01684-9 and 2014/26908-1. JM thanks FAPESP (2012/24392-2) for support. LS acknowledges support by FAPESP (2014/15706-9). This research made use of SciPy <cit.>, Astropy <cit.>, Matplotlib <cit.>, and the SIMBAD and VizieR databases <cit.>, operated at CDS, Strasbourg, France. We thank R. P. Butler, S. Vogt, G. Laughlin and J. Burt for allowing us to analyze the LCES HIRES/Keck data prior to publication. LdS also thanks B. Montet, J. Stürmer and A. Seifahrt for the fruitful discussions on the results and code implementation. We would also like to thank the anonymous referee for providing valuable suggestions to improve this manuscript.
http://arxiv.org/abs/1708.07465v1
{ "authors": [ "Leonardo A. dos Santos", "Jorge Meléndez", "Megan Bedell", "Jacob L. Bean", "Lorenzo Spina", "Alan Alves-Brito", "Stefan Dreizler", "Iván Ramírez", "Martin Asplund" ], "categories": [ "astro-ph.SR", "astro-ph.EP" ], "primary_category": "astro-ph.SR", "published": "20170824154614", "title": "Spectroscopic binaries in the Solar Twin Planet Search program: from substellar-mass to M dwarf companions" }
[email protected] AGH University of Science and Technology, Faculty of Physics and Applied Computer Science, al. Mickiewicza 30, 30-059 Krakow, Poland [email protected] School of Physical Sciences, National Institute of Science Education and Research, HBNI, Jatni-752050, India [email protected] School of Physical Sciences, National Institute of Science Education and Research, HBNI, Jatni-752050, India [email protected] School of Physical Sciences, National Institute of Science Education and Research, HBNI, Jatni-752050, India

We investigate systematics of the freezeout surface in heavy ion collisions due to the hadron spectrum. The role of suspected resonance states that are yet to be confirmed experimentally in identifying the freezeout surface has been investigated. We have studied two different freezeout schemes: a unified freezeout scheme where all hadrons are assumed to freeze out at the same thermal state, and a flavor-dependent sequential freezeout scheme with different freezeout thermal states for hadrons with or without valence strange quarks. The data on mean hadron yields as well as the scaled variance of net proton and net charge distributions have been analysed. We find the freezeout temperature T to drop by ∼5% while the dimensionless freezeout parameters μ_B/T and VT^3 (μ_B and V are the baryon chemical potential and the volume at freezeout respectively) are insensitive to the systematics of the input hadron spectrum. The observed hint of flavor hierarchy in T and VT^3 with only confirmed resonances survives the systematics of the hadron spectrum. It is most prominent between ∼10 - 100 GeV, where the maximum hierarchy is ∼10% in T and ∼40% in VT^3. However, the uncertainties in the thermal parameters due to the systematics of the hadron spectrum and their decay properties do not allow us to make a quantitative estimate of the flavor hierarchy yet.

Freezeout systematics due to the hadron spectrum Sandeep Chatterjee, Debadeepti Mishra, Bedangadas Mohanty, Subhasis Samanta December 30, 2023 ================================================

§ INTRODUCTION The determination of the last surface of inelastic scattering, the chemical freezeout surface (CFO), is an integral part of the standard model of heavy ion collisions <cit.>. An ideal gas of all the confirmed hadrons and resonances as listed by the Particle Data Group (PDG) <cit.> forms the Hadron Resonance Gas (HRG) model, which has met with considerable success across a broad range of beam energies in describing the mean hadron yields <cit.> and, more recently, moments of conserved charges of QCD like baryon number (B), strangeness (S) and charge (Q) <cit.> with a few thermal parameters. Such an analysis gives us access to the thermodynamic state of the fireball just prior to freezeout. The ongoing hunt for the QCD critical point crucially depends on our knowledge of the background dominated by the thermal hadronic physics close to freezeout.
The HRG partition function Z(T, μ_B, μ_Q, μ_S) for a thermal state at (T, μ_B, μ_Q, μ_S), where T is the temperature and μ_B, μ_Q and μ_S are the chemical potentials corresponding to the three conserved charges B, Q and S respectively, can be written as

ln Z = ∑_i ln Z_i(T, μ_B, μ_Q, μ_S)

where Z_i is the single-particle partition function corresponding to the ith hadron species, written as

ln Z_i(T, μ_B, μ_Q, μ_S) = VT^3 (a g_i/2π^2) ∫ (dp p^2/T^3) ln[ 1 + a e^{-(√(p^2+m_i^2) - μ_i)/T} ]

where a = -1 (+1) for mesons (baryons), g_i and m_i refer to the degeneracy factor and mass of the ith hadron species, and μ_i is its hadron chemical potential, which within a complete chemical equilibrium scenario is written as μ_i = B_i μ_B + Q_i μ_Q + S_i μ_S, where B_i, Q_i and S_i are the baryon number, charge and strangeness of the ith hadron species. The sum in Eq. <ref> runs over all the established resonances from the PDG. However, quark models <cit.> and studies on the lattice <cit.> predict many more resonances than have been confirmed so far in experiments. It has been pointed out in studies based on comparison between QCD thermodynamics on the lattice and HRG that these resonances could have significant contributions to several thermodynamic quantities <cit.> and influence the extraction of the freezeout parameters within the HRG framework <cit.>. There have been interesting studies on the status and influence of the systematics of the hadron spectrum on several quantities <cit.>. In this work, we have studied the systematics in the determination of the freezeout surface within the HRG framework due to the uncertainties in the hadron spectrum.

§ EXTRACTING THE CHEMICAL FREEZEOUT SURFACE The standard practice has been to extract the CFO parameters by fitting the mean hadron yields. The primary yields N^p_i are obtained from Eq. <ref> as follows:

N^p_i = T ∂ ln Z/∂μ_i

while the total yields of the stable hadrons that are fitted to data are obtained after adding the secondary contributions from resonance decays to their primary yields:

N^t_i = N^p_i + ∑_j BR_{j→i} N^p_j

where BR_{j→i} refers to the branching ratio (BR) for the decay of the jth into the ith hadron species. In this work we characterise the freezeout surface by three parameters, of which only T has dimension. The other two are suitably scaled dimensionless parameters, μ_B/T and VT^3. While μ_B/T controls the baryon fugacity factor, VT^3 can be interpreted as the effective phase space volume occupied by the HRG at freezeout. The masses of the hadrons are the relevant scales in this problem. They decide the freezeout T. Thus, it is natural that systematic variation of the hadron spectrum will result in a corresponding variation of the freezeout T. On the other hand, the influence of the systematics of the hadron spectrum on the dimensionless parameters μ_B/T and VT^3 is expected to be smaller. This motivates us to work with (T, μ_B/T, VT^3) instead of the standard choice of (T, μ_B, V). The other parameter that is often used in the literature, the strangeness undersaturation factor γ_S, has been taken to be unity here. μ_S and μ_Q are solved for consistently from the strangeness neutrality condition and by demanding that the ratio of net B to net Q be equal to that of the colliding nuclei (this ratio is ∼2.5 for the Au and Pb nuclei which we consider here):

Net S = 0,  Net B/Net Q = 2.5.

As is evident from Eq. <ref>, knowledge of the BRs is essential to compute the contribution of the secondary yield, which is the feeddown from the heavier unstable resonances to the observed hadrons.
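As a concrete illustration of the yield formulas above, the following minimal sketch (in Python, assuming NumPy and SciPy are available) numerically evaluates the primary density N^p_i/V for a single species in the ideal HRG. The species, temperature and chemical potential used here are arbitrary demonstration values; this is not the analysis code behind the results of this paper.

```python
import numpy as np
from scipy.integrate import quad

HBARC = 0.19733  # GeV*fm; converts densities from GeV^3 to fm^-3

def primary_density(m, g, mu, T, a):
    """Primary number density N_i^p/V (in fm^-3) of one hadron species.

    m, mu, T are in GeV; g is the degeneracy; a = +1 for baryons
    (fermions) and a = -1 for mesons (bosons), matching the text.
    Differentiating ln Z_i with respect to mu_i gives
        n_i = g/(2 pi^2) * Int p^2 dp / (exp((E - mu)/T) + a).
    """
    def integrand(p):
        E = np.sqrt(p * p + m * m)
        return p * p / (np.exp((E - mu) / T) + a)
    val, _ = quad(integrand, 0.0, 20.0 * T + m)  # integrand decays rapidly
    return g / (2.0 * np.pi ** 2) * val / HBARC ** 3

# Illustrative values only: protons and pi+ near typical freezeout conditions.
print("n_p   =", primary_density(0.938, 2, 0.025, 0.150, a=+1), "fm^-3")
print("n_pi+ =", primary_density(0.138, 1, 0.000, 0.150, a=-1), "fm^-3")
```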
As a result, extraction of the freezeout surface based on the hadron yield data suffers from the systematic uncertainties of the decay properties of these additional resonances. The freezeout surface can also be estimated by comparing higher moments of the conserved charges in experiment and theory <cit.>. One of the important advantages of using the fluctuations of conserved charges over hadron yields in estimating the freezeout surface is that it is enough to know only the quantum numbers of these unconfirmed states. Decays under strong interactions should conserve the charges B, Q and S. Thus the conserved charge susceptibilities are not influenced by the systematic uncertainties of the BRs of the unconfirmed resonances. On the theoretical side, it is straightforward to compute the conserved charge susceptibilities χ^ijk_BQS of order (i+j+k) from the partition function:

χ^ijk_BQS = ∂^{i+j+k}(P/T^4) / [∂(μ_B/T)^i ∂(μ_Q/T)^j ∂(μ_S/T)^k]

where the pressure P is obtained from P = (T/V) ln Z. The above susceptibilities computed in a model can then be easily converted to moments for comparison with the measured data. For example, the mean M and variance σ^2 of a conserved charge distribution have a one-to-one correspondence with the first two orders of susceptibility of the respective charge c:

M_c = ⟨N_c⟩ = VT^3 χ^1_c,  σ^2_c = ⟨(N_c - ⟨N_c⟩)^2⟩ = VT^3 χ^2_c,

with N_c being the observed net charge of type c in an event, while ⟨N_c⟩ is the ensemble average. We have evaluated the susceptibilities within HRG and estimated the influence of the missing resonances on the extraction of the freezeout parameters thereof. It has been found that the scaled variance σ^2/M of net Q and net B is well described within the HRG framework, while higher moments like skewness and kurtosis show discrepancies, particularly at lower energies <cit.>. These higher moments are also sensitive to non-ideal corrections like incorporating repulsive and attractive interactions within the HRG framework <cit.>. Hence, in this study we stick to σ^2/M of net Q and net B to ascertain the influence of the systematics of the hadron spectrum on the freezeout surface extracted from the data on conserved charge fluctuations. On the experimental front, several uncertainties can creep into the measurement of conserved charge fluctuations. The acceptance cuts in transverse momentum and rapidity are one of them <cit.>. Also, neutral particles are not detected, which means net-proton fluctuations only act as an approximate proxy for net B <cit.>. Currently, there is tension between the PHENIX <cit.> and STAR <cit.> measurements of net charge fluctuations. In this work, we have extracted the freezeout parameters for the STAR data alone. We expect the dependence of the freezeout parameters on the uncertainties of the hadron spectrum to be similar for STAR and PHENIX data. While a single unified freezeout picture (1CFO) provides a good qualitative description across a broad range of beam energies, recent studies have shown that a natural step beyond 1CFO would be to consider flavor hierarchy in freezeout (2CFO), based on various arguments like the flavor hierarchy in QCD thermodynamic quantities on the lattice <cit.>, hadron-hadron cross sections <cit.> and the melting of in-medium hadron masses <cit.>. We have analysed the yield data in both freezeout schemes: 1CFO and 2CFO. However, for the conserved charge fluctuation study, currently only data for moments of net proton (a proxy for net baryon) <cit.> and net charge <cit.> are available.
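The susceptibility definition above can likewise be illustrated numerically. The sketch below evaluates χ^1_B and χ^2_B by central finite differences of P/T^4 in μ_B/T for a toy gas containing only nucleons, antinucleons and pions; the reduced species list, the state point and the step size h are illustrative assumptions, not the full PDG-based setup of this work.

```python
import numpy as np
from scipy.integrate import quad

def pressure_over_T4(T, muB, species):
    """Ideal-gas P/T^4 (dimensionless) for a list of species.

    Each species is (mass, degeneracy, baryon number B, a), with T, muB and
    masses in GeV; a = +1 for baryons, -1 for mesons, as in the text.
    P_i/T^4 = a*g/(2 pi^2 T^3) * Int p^2 dp ln(1 + a e^{-(E - B*muB)/T}).
    """
    total = 0.0
    for m, g, B, a in species:
        def integrand(p, m=m, B=B, a=a):
            E = np.sqrt(p * p + m * m)
            return p * p * np.log(1.0 + a * np.exp(-(E - B * muB) / T))
        val, _ = quad(integrand, 0.0, 25.0 * T + m)
        total += a * g / (2.0 * np.pi ** 2) * val / T ** 3
    return total

def chi_B(n, T, muB, species, h=1e-2):
    """chi^n_B via central finite differences of P/T^4 in x = muB/T."""
    f = lambda x: pressure_over_T4(T, x * T, species)
    x0 = muB / T
    if n == 1:
        return (f(x0 + h) - f(x0 - h)) / (2.0 * h)
    if n == 2:
        return (f(x0 + h) - 2.0 * f(x0) + f(x0 - h)) / h ** 2
    raise ValueError("only n = 1, 2 implemented here")

# Toy spectrum: nucleons, antinucleons (g = 4: p, n with spin) and pions (g = 3).
toy = [(0.938, 4, +1, +1), (0.938, 4, -1, +1), (0.138, 3, 0, -1)]
T, muB = 0.150, 0.050
c1, c2 = chi_B(1, T, muB, toy), chi_B(2, T, muB, toy)
print("chi1_B =", c1, " chi2_B =", c2, " sigma^2/M =", c2 / c1)
```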
The net B and net Q fluctuations are dominated by the non-strange sector, as the lightest hadrons contributing to these quantities are non-strange. Thus, the analysis of the data on the conserved charge fluctuations is not sensitive to the thermal state of the strange sector. Hence, we have analysed the data on higher moments of the conserved charges only within 1CFO.§ HADRON SPECTRUM
http://arxiv.org/abs/1708.08152v2
{ "authors": [ "Sandeep Chatterjee", "Debadeepti Mishra", "Bedangadas Mohanty", "Subhasis Samanta" ], "categories": [ "nucl-th", "hep-lat", "hep-ph", "nucl-ex" ], "primary_category": "nucl-th", "published": "20170827230754", "title": "Freezeout systematics due to the hadron spectrum" }
Jun Li [KFUPM], Chunpei Cai [MTU], Zhi-Hui Li [CARDC]. [KFUPM] Center for Integrative Petroleum Research, College of Petroleum Engineering and Geosciences, King Fahd University of Petroleum & Minerals, Saudi Arabia; [MTU] Department of Mechanical Engineering-Engineering Mechanics, Michigan Technological University, Houghton, MI 49931, USA; [CARDC] Hypervelocity Aerodynamics Institute, China Aerodynamics Research and Development Center, Mianyang 621000, China

Gaseous thermal transpiration flows through a rectangular micro-channel are simulated by the direct simulation BGK (DSBGK) method. These flows are rarefied, within the slip and transitional flow regimes, which are beyond many traditional computational fluid dynamics simulation schemes, such as those based on the continuum flow assumption. The flows are very slow, and thus many traditional particle simulation methods suffer large statistical noise. The adopted method is a combination of particle and gas kinetic methods, and it can simulate micro-flows properly. The simulation results for mass flow rates are in excellent agreement with experimental measurements. In another case of a 2D channel, the DSBGK comparisons with the DSMC result and the solution of the Shakhov equation are also in very good agreement. Another finding from this study is that including the two reservoirs at the channel ends in the simulations leads to appreciable differences in the simulated velocity and pressure distributions within the micro-channel. This is due to the inhaling and exhaling effects of the reservoirs at the channel ends. Even though excluding those reservoirs may accelerate the simulations significantly by using a single channel, special attention is needed because this treatment may over-simplify the problem, and some procedures and results may be questionable. One example is determining the surface momentum accommodation coefficient by using the analytical solution of the mass flow rate obtained in a single-channel problem without the confinement effect of reservoirs at the two ends.

§ INTRODUCTION It is well known that rarefied gas flows through a tube with a constant pressure but variable temperature along the wall boundary may experience an appreciable bulk speed <cit.>. In fact, small scale gas flows within micro-channels, including micro thermal transpiration flows, may have a high rarefaction effect, which is a challenge for investigations. These effects can be characterized by the Knudsen number (Kn) <cit.>: Kn = λ/L, where λ is the molecular mean free path of the gas, and L is a characteristic length, which can be the micro-channel height. According to different Kn numbers, gas flows can be continuum (Kn < 0.01), slip (0.01 < Kn < 0.1), transitional (0.1 < Kn < 10), and collisionless (Kn > 10). Micro- and thermal transpiration flows can be within any of these regimes with Kn > 0.01. Micro-/nano-electro-mechanical systems (MEMS/NEMS) have decreased to sub-micron scales in recent decades, where Kn can be large enough to be transitional. As such, thermal transpiration flows have more applications and become more important. For example, the idea of using the pumping effect of thermal transpiration to create a micro-compressor without moving parts led to the work of Vargo <cit.>, Young <cit.> and Alexeenko <cit.>. Gupta and Gianchandani <cit.> developed a 48-stage Knudsen compressor for on-chip vacuum, resulting in compression ratios up to 50.
It is easy to understand that further investigations of thermal transpiration flow are necessary, and this is the major goal of this paper. The rest of this paper is organized as follows. Section <ref> reviews related past work; Section <ref> discusses the numerical method used in this study, i.e., the direct simulation BGK (DSBGK) method; Section <ref> presents the simulation schematic used to mimic the experiments; Section <ref> shows the test cases with a small micro-channel to study the effects of reservoirs connected at the channel ends; Section <ref> gives the simulation results for a real micro-channel with validation against experimental data. The last section summarizes this study with several conclusions.

§ RELATED PAST WORK In the literature, there are studies on gaseous flows at the micro scale, including experimental measurements and numerical simulations. These micro-flows can be pressure-driven flows and thermal transpiration flows. Here we only name a few. Experimental measurements are valuable for studying micro-flows. Interesting phenomena are observed and can offer valuable physical insights and benchmarks to test simulations. In many situations, experimental studies are not replaceable. Liang <cit.> analyzed the behavior of the Thermal Pressure Difference (TPD) and the Thermal Pressure Ratio (TPR) for different gases by applying various temperature differences, searched for a correction factor for pressure measurements, and obtained an equation easier to use than those of Weber and Schmidt <cit.>. Later, Rosenberg and Martel <cit.> performed their own measurements and compared them with those by Weber, Schmidt and Liang. Marcos studied unsteady thermal transpiration rarefied gas flows inside a micro-tube and a micro-rectangular channel <cit.>. They found that the unsteady pressure developments in the two reservoirs at the micro-channel ends can be well approximated with two exponential functions. They measured the slopes of the initial pressure changes inside the two reservoirs for flows with different degrees of rarefaction, the TPR, the TPD and the thermal-molecular pressure ratio γ. Los and Fergusson <cit.> noted the existence of a maximum value in their TPD results. Takaishi and Sensui <cit.> improved Liang's law. Annis <cit.> compared his measurements with the numerical results of Loyalka and Cipolla <cit.>, and found his results are quite different from Maxwell's initial approach. Sone and Sugimoto <cit.> and Sugimoto <cit.> performed original experiments, using a micro-windmill set at the end of a bent capillary, allowing qualitative but not quantitative analysis of the mass flow rate induced by thermal transpiration. Subsequent work <cit.> based on the constant volume technique tracked the pressure variation with time at the inlet and outlet of the tube, and obtained the mass flow rate, which is related to the pressure variations. The most recent experimental work on thermal transpiration is probably the measurement of the mass flow rates of gas flows through a rectangular channel by measuring the initial pressure change rate inside the reservoirs at the channel ends <cit.>.

There are many Computational Fluid Dynamics (CFD) schemes, which can be categorized into three classes. The first class is on the macroscopic level and applicable to simulating flows in the continuum and slip regimes, where the governing equation is usually the Navier-Stokes equation or the Burnett equation, and the no-slip boundary condition or general velocity-slip and temperature-jump boundary conditions shall be used.
However, it is improper to use these CFD schemes in the transitional and collisionless flow regimes, which may occur in thermal transpiration flows. The second class is on the mesoscopic level and based on gas kinetic theory and velocity distribution functions. The fundamental governing equation is the Boltzmann equation or its simplified Bhatnagar-Gross-Krook (BGK) model <cit.>. Related methods include the Lattice Boltzmann Method <cit.>. Graur and Sharipov <cit.> used a gas kinetic method to simulate rarefied gas flows along a long pipe with an elliptical cross-section. There are also various kinds of hybrid methods that are based on the gas kinetic theory, such as the so-called gas kinetic scheme (GKS) or unified gas kinetic scheme (UGKS) <cit.>, which are applicable to the simulation of micro-flows. The GKS method reconstructs the velocity distribution function at the mesoscopic level, based on which the mass, momentum and energy fluxes can be computed correctly, and then the macroscopic properties are updated, such as density, velocity, pressure and temperature. The last class of CFD schemes is based on molecular dynamics, which targets each molecule or atom. One example is the direct simulation Monte Carlo (DSMC) method <cit.>. This paper aims to report investigations of thermal transpiration flows with a specific numerical simulation method, to be discussed in the next section. Gaseous micro-flows usually have different degrees of rarefaction and thus cannot be properly modeled by the first class of CFD schemes. The third class of methods is demanding because it traces particles' movements and computes particles' collisions in a statistical approach. However, the common issue associated with traditional particle methods is the large statistical noise in low-speed gas flows, including microchannel flows, and much effort has been spent to reduce this noise, such as the Information Preservation method <cit.>. The second class of methods solves for the velocity distribution function and is also quite demanding, especially when intermolecular collisions are considered.

§ THE DSBGK METHOD The direct simulation BGK (DSBGK) method was proposed recently <cit.>, and it is based on the BGK model of the Boltzmann equation. The BGK model approximates the standard Boltzmann equation quite well in rarefied gas problems with small perturbations, where the solution for the distribution function is close to the local Maxwell velocity distribution. The thermal transpiration phenomenon is simulated here at different pressure conditions and for different gas species by the Fortran MPI software package NanoGasSim, developed using the DSBGK method. As a molecular simulation method, the DSBGK method works like the standard DSMC method <cit.> but actually is a rigorous mathematical solver of a BGK-like equation, instead of a physical model of the molecular movements. At the initial state, the computational domain is divided uniformly in each direction into many cells, which are either void or solid. About twenty simulated molecules are randomly distributed inside each void cell and assigned initial positions, velocities and other molecular variables according to a specified initial probability distribution function. The cell size and time step are selected the same as in the DSMC simulations.
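A minimal sketch of the initialization stage just described is given below, assuming an equilibrium (drifting Maxwellian) initial distribution for the molecular velocities; position sampling over the cell volume and the other molecular variables of the DSBGK scheme are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
K_B = 1.380649e-23  # J/K

def init_cell_velocities(T0, m, u0=(0.0, 0.0, 0.0), n_sim=20):
    """Velocities for ~20 simulated molecules in one void cell, sampled
    from a drifting Maxwellian at temperature T0 (K) for mass m (kg)."""
    sigma = np.sqrt(K_B * T0 / m)  # per-component thermal speed
    return np.asarray(u0) + sigma * rng.standard_normal((n_sim, 3))

v = init_cell_velocities(310.0, 66.3e-27)  # an argon cell at T_0
print("mean drift:", v.mean(axis=0), " per-component std:", v.std(axis=0))
```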
During each time step, each simulated molecule moves uniformly and in a straight line before randomly reflecting at the wall surface, and its molecular variables are updated along each segment of the trajectory located inside a particular void cell according to the BGK equation. At the end of each time step, the number density, flow velocity and temperature of each void cell are updated using the increments of the molecular variables along the segments located inside the concerned cell, according to the conservation laws of mass, momentum and energy of the intermolecular collision process. The major differences of the DSBGK method from the traditional DSMC method are: 1) the DSMC method uses the transient values of molecular variables to compute the cell's variables, which are subject to large stochastic noise due to random and frequent molecular movements into and out of each cell, while the DSBGK method employs the increments of molecular variables due to intermolecular collisions to update the cell's variables based on the conservation laws mentioned above; 2) the DSBGK method computes the effect of intermolecular collisions by solving the BGK model, while the DSMC method randomly handles the intermolecular collision effect using an importance sampling scheme, which costs a noticeable percentage of the computational time to generate a huge number of random fractions. These two differences significantly improve the efficiency of the DSBGK method, particularly at low speed conditions such as micro thermal transpiration flows. More algorithm discussions as well as the convergence proof of the DSBGK method are detailed in <cit.>. The DSBGK method has been comprehensively verified against the DSMC method over a wide range of Kn in several benchmark problems and is much more efficient than the DSMC method. Recently, the DSBGK method was successfully applied to study shale gas flows inside a real three-dimensional digital rock sample with 100-cubed voxels over a wide range of Kn <cit.> and the Klinkenberg slippage effect in the computation of apparent permeability <cit.>. It is also important to emphasize the differences between the LBM and the DSBGK method. The former can have superior parallel computing performance; however, it suffers from a severe issue in discretizing the molecular velocity space due to its simplicity in algorithm. The velocity space discretization in LBM is rather crude: LBM sacrifices physical accuracy to achieve mathematical simplicity. For example, in two-dimensional flows, the ordinary LBM adopts the D2Q9 model, where only 9 points are used to discretize the whole velocity distribution function <cit.>. By contrast, the DSBGK method uses dynamic molecular velocities to discretize the velocity space, which is physically more accurate. It allows as fine a discretization as desired, since the molecular velocity set used in the discretization is dynamically updated during the simulation, as in a DSMC simulation.

§ SIMULATION SCHEMATIC Fig. <ref> illustrates the simulation domain, which is similar to the configuration used in the gas flow experiments <cit.>. Two reservoirs are connected to a rectangular micro-channel at the two channel ends. Compared with the true dimensions adopted in the experiments, the reservoir sizes are decreased to reduce the simulation cost. The sizes of these two reservoirs are chosen sufficiently large to make the influence of reservoir size on the simulation results negligible.
These results include the pressure difference between the two reservoirs and the mass flow rate through the micro-channel. In these simulations, the wall temperature T_wall depends solely on the x coordinate and increases linearly from T_L to T_H, which are the constant wall temperatures of the two reservoirs. This temperature difference drives the gas flow through the channel via the thermal transpiration effect. The total length, width and height of the computational domain are denoted as L_all, W_all and H_all, respectively. L_micro, W_micro and H_micro are the three dimensions of the micro-channel. In the experiments, a micro-valve is used to close or open the passage between the two reservoirs. Correspondingly, the two boundaries at x=0 and x=L_all switch between wall boundaries and periodic boundaries in the simulations. When the micro-valve is closed/open, the pressure difference/mass flow rate at steady state through the micro-channel depends on the initial Knudsen number Kn_0 and the temperatures of the two reservoirs. The pressure difference is studied by adopting a wall boundary at x=L_all and an open boundary at x=0. At x=0, the pressure is fixed as p ≡ p_0, the temperature is set as T ≡ T_wall(x=0), and the transient flow velocity u⃗=(u,v,w) is computed at the cell adjacent to x=0, the same as in the DSMC simulations. To ease the numerical study, separate simulations are performed to compute the mass flow rates by using two open boundaries at x=0 and x=L_all, respectively. The experiments <cit.> can be conveniently modified to achieve the current simulation setups with at least one open boundary. The setups are close to those in real applications. It is not surprising to observe experimental and numerical results that are not comparable, which is usually due to different settings. These discrepancies do not appear in the current simulations for the steady state pressure difference after closing the micro-valve <cit.> and the mass flow rate before closing the micro-valve <cit.>.

§ SIMULATION TESTS WITH SMALLER CHANNEL AND TEMPERATURE DIFFERENCE Several basic parameters are listed here: the lowest temperature T_L=300 K, the highest temperature T_H=320 K, initial temperature T_0 ≡ (T_L+T_H)/2, and initial pressure p_0 = 50 Pa. The gas is argon, the dynamic viscosity is μ=2.117×10^-5×(T/273)^0.81 Pa·s and the molecular mass is m=66.3×10^-27 kg <cit.>. At T_0 and p_0, the dynamic viscosity is μ_0=2.346×10^-5 Pa·s and the mean free path is λ_0=0.1522 mm. The simulation time step is set as Δt=0.8λ_0/√(2k_BT_0/m)=0.3389×10^-6 s, where k_B is the Boltzmann constant. Correspondingly, the cell sizes are set as Δx=Δy=Δz=0.05 mm < λ_0. The sizes of the micro-channel located at the center of the computational domain are L_micro × W_micro × H_micro ≡ 7.3 mm × 1 mm × 0.5 mm and there are 146 × 20 × 10 cells located inside the micro-channel, with Kn_0=λ_0/H_micro=0.3044. In the DSBGK simulations, the relaxation parameter υ of the BGK model is computed using the transient T, μ(T) and number density n by υ=2nk_BT/(3μ) for each cell, as discussed in Section 4 of <cit.>. About 20 simulated molecules per cell are used unless stated otherwise.

§.§ Simulations of pressure difference In the simulation with a wall boundary at x=L_all and an open boundary at x=0, the steady state is almost static inside the reservoirs, and the presence of reservoirs might have negligible influence on the steady state pressure difference between the two reservoirs.
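As a quick consistency check on the quantities quoted above, the short script below recomputes μ_0, λ_0, Δt and Kn_0 from T_0, p_0 and the stated argon properties. The hard-sphere relation λ = (16/5)μ/(ρ√(2πRT)) is our assumption for the mean-free-path definition; it reproduces the quoted λ_0 = 0.1522 mm, though NanoGasSim may use a different internal convention.

```python
import math

K_B = 1.380649e-23   # J/K
m = 66.3e-27         # kg, argon (as quoted)
T0 = 0.5 * (300.0 + 320.0)
p0 = 50.0            # Pa
H_micro = 0.5e-3     # m

mu0 = 2.117e-5 * (T0 / 273.0) ** 0.81            # quoted viscosity law
rho0 = p0 * m / (K_B * T0)                       # ideal-gas density
R = K_B / m                                      # specific gas constant
lam0 = (16.0 / 5.0) * mu0 / (rho0 * math.sqrt(2.0 * math.pi * R * T0))
dt = 0.8 * lam0 / math.sqrt(2.0 * K_B * T0 / m)

print(f"mu0  = {mu0:.4e} Pa.s  (quoted 2.346e-5)")
print(f"lam0 = {lam0 * 1e3:.4f} mm  (quoted 0.1522)")
print(f"dt   = {dt:.4e} s  (quoted 0.3389e-6)")
print(f"Kn0  = {lam0 / H_micro:.4f}  (quoted 0.3044)")
```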
To test this, two simulations are performed for comparison: Case 1, a full domain simulation with L_all × W_all × H_all = 10 mm × 5 mm × 5 mm, where the total cell number is 200 × 100 × 100 with porosity ϕ=0.2846; Case 2, a reduced domain simulation with L_all × W_all × H_all = L_micro × W_micro × H_micro, where the two reservoirs at the channel ends are removed. Fig. <ref> shows the full domain simulation results. The left side displays the transient distributions of T/T_0, n/n_0 and p/p_0 at t = 195,000 Δt and 190,000 Δt on the middle XOY plane with z ≡ H_all/2. The results are represented with black and green lines, respectively, for the different moments. As shown, at these two moments, the temperatures, number densities and pressures have reached a steady state, which is consistent with the observation of the pressure evolutions with time at two points of the simulation domain, as shown in Fig. <ref> (right). The first point is at the bottom, front and left corner of the left reservoir, and the second at the upper, back and right corner of the right reservoir. Fig. <ref> shows the results of the reduced domain simulation. The left side displays the transient distributions of T/T_0, n/n_0 and p/p_0 on the middle XOY plane at t = 10,000 Δt and 15,000 Δt using black and green lines, respectively. As shown, the temperatures, number densities and pressures reach a steady state after only t = 15,000 Δt, which is also verified by the pressure evolutions shown in Fig. <ref> (right). The steady state inside the micro-channel is not static, and thus the pressures collected at the ends of the micro-channel contain obvious noise. To obtain smoother profiles, a time-averaging process is needed. The comparison between Figs. <ref> and <ref> indicates that a full domain simulation needs many more time steps to converge, with a further increase of computational cost due to using a larger number of cells. Thus, the computational cost of the full domain simulation is significantly higher than that of the reduced domain simulation. To save simulation cost when studying the steady state pressure difference, a reduced domain simulation is favored and recommended.

§.§ Simulation of mass flow rate A full domain simulation with two open boundaries at x=L_all and x=0 is used to study the mass flow rate, as mentioned before. To reduce the influence of reservoir sizes, we first use L_all × W_all × H_all = 20 mm × 10 mm × 5 mm. A time-averaging process is used to smooth the pressure and velocity distributions, as shown in Fig. <ref>. Parallel computation is adopted because the total cell number and porosity are increased to 400 × 200 × 100 and 0.63865, respectively. As shown by the white lines in Fig. <ref>, the computational domain is divided only along the x direction in the parallelization when visualization is needed.

§.§ Influence of reservoir sizes on the mass flow rate Fig. <ref> shows that the reservoir sizes can be reduced to save computational cost. Accordingly, another full domain simulation with L_all × W_all × H_all = 10 mm × 5 mm × 5 mm is performed, and Fig. <ref> shows the comparisons between the previous results of Fig. <ref> (right) and the current results inside the same geometry configuration surrounding the micro-channel.
Although the comparison shows that the solutions outside the micro-channel have appreciable differences, the solutions of T/T_0, n/n_0, p/p_0 and u inside the micro-channel are almost the same when both computational domain sizes are not less than L_all × W_all × H_all = 10 mm × 5 mm × 5 mm (they could be even smaller) for this particular micro-channel with L_micro × W_micro × H_micro = 7.3 mm × 1 mm × 0.5 mm. The magnitude of v inside the micro-channel is too small to compare due to stochastic noise, but the agreement of the dominant v outside the micro-channel is very good. Note that the micro-channel can be simplified by using periodic boundary conditions in the y direction when W_micro ≫ H_micro <cit.>, in which case it is only required that the artificial reservoirs be much larger than the micro-channel in height.

§.§ Confinement effect of reservoirs on the mass flow rate Usually there are entrance and exit effects at the two ends of the micro-channel, and the rarefaction effect further complicates the results <cit.>. On the other hand, as shown in Section <ref>, it is also desirable to accurately simulate the mass flow rate through the micro-channel without reservoirs, which make the simulation time-consuming. Thus, the micro-channel is modeled alone with two open boundaries. The comparisons between the previous results computed with L_all × W_all × H_all = 10 mm × 5 mm × 5 mm (i.e., Fig. <ref> (right)) and the current results computed without reservoirs are given in Fig. <ref>. The agreement between the two simulations for both the T/T_0 and n/n_0 distributions inside the micro-channel is quite good. However, the current magnitude of the dominant u (about 0.19 m/s) is noticeably larger than the previous one (about 0.16 m/s) inside the micro-channel, which is consistent with the discrepancy in the comparison of p/p_0. According to Eq. (3.4) of <cit.>, the pressure gradient in the negative direction of the x axis enhances the mass flow rate in the current simulation; however, the pressure gradient in the positive direction of the x axis depresses the mass flow rate in the previous simulation, as shown in Fig. <ref> (right). These two facts lead to a higher mass flow rate in the current simulation, since the contribution of the temperature gradient to the mass flow rate is almost the same. The presence of reservoirs requires inhaling and blowing effects near the two ends of the micro-channel to maintain the flow inside the two reservoirs with constant wall temperatures, which implies that the pressure gradient inside the micro-channel is certainly in the positive direction of the x axis, since the pressures at x=0 and x=L_all are equal to p_0. Note that the concentrated pressure variations created by the inhaling and blowing effects depend mostly on the mass flow rate as long as the reservoirs are much larger than the cross-section of the micro-channel, which explains the good agreement of the pressure differences across the micro-channel computed using different reservoir sizes, as shown in Fig. <ref>. Thus, the confinement effect due to the presence of reservoirs, as in the experiments <cit.>, always leads to a pressure gradient in the driving direction inside the micro-channel, which depresses the mass flow rate (e.g., the reduction could be as large as Δu ≈ (0.19-0.16) m/s for this particular case).
This confinement effect should be reflected in the simulations by adding reservoirs into the configuration, even when the objective is to study flow quantities that depend mostly on the properties of the gas (e.g., molecular species and pressure) and the micro-channel (e.g., sizes and temperature distribution on the wall). This observation also implies that we need to be cautious when using experimental data measured with the confinement effect to extrapolate the accommodation coefficients <cit.> by using the analytical solution obtained from a single-channel problem without the confinement effect <cit.>, unless these coefficients are intended to be used in the same analytical solution to predict the performance of similar micro-channels. As shown in Section <ref>, the results computed using the Maxwell diffuse reflection model (i.e., complete accommodation) agree well with the experimental data. Fig. <ref> shows velocity u (left) and pressure (right) profiles along the micro-channel centerline extracted from Fig. <ref>. The two vertical dashed lines mark the channel entrance and exit. As shown, the average velocities over the micro-channel centerline have an appreciable difference. A quick estimation indicates that the peak values are 0.21 m/s and 0.18 m/s, respectively, or a difference of 17% when the full domain simulation is chosen as the reference because it is closer to reality. In the full domain simulation, the velocity outside the micro-channel is small due to the large cross-section of the reservoirs. Fig. <ref> (right) shows the pressure profiles along the micro-channel centerline. In this thermal transpiration flow, the pressure through the channel is almost constant, and the maximum relative variation of pressure is about 0.04% along the centerline. Inside the micro-channel, even though the pressure variations are small, the difference between the two profiles is striking, with completely opposite variation trends, as discussed above. Fig. <ref> shows the velocity u (left) and pressure (right) profiles at the middle station of the channel extracted from Fig. <ref>. The velocity of the full domain simulation is noticeably smaller than that of the reduced domain simulation, which is consistent with Fig. <ref> (left). The profiles are parabolic and the velocity slips along the channel surface are evident. The two pressure profiles do not have large fluctuations and are quite flat, with maximum relative fluctuations of 0.01%.

§ MASS FLOW RATES OF DIFFERENT GAS SPECIES THROUGH A REAL MICRO-CHANNEL AT DIFFERENT KN_0 Gas flows inside a real micro-channel with L_micro × W_micro × H_micro ≡ 73 mm × 6 mm × 0.22 mm <cit.> are simulated by using L_all × W_all × H_all ≡ 80 mm × 10 mm × 1 mm with ϕ=0.20795. Δx = Δy ≫ Δz are chosen at low pressure conditions to optimize the cell division. The three cell sizes are always smaller than λ_0 at the different pressure conditions, as required; e.g., a total of 3200 × 400 × 50 cells are used for the argon gas flow at δ_0(p_0=294 Pa)=7.41 with λ_0 ≈ 0.0268 mm (note: this simulation takes about one day for 2000 time steps when using 40 CPU cores), where δ_0 is a mean rarefaction parameter used to characterize the mass flow rate Ṁ <cit.>:

δ_0 = p_0 H_micro/(μ_0 √(2k_B T_0/m)) ≈ 0.9025/Kn_0.

In addition to argon, we use μ=1.865×10^-5×(T/273)^0.66 Pa·s for helium molecules (m=6.65×10^-27 kg) and μ=2.975×10^-5×(T/273)^0.66 Pa·s for neon molecules (m=33.5×10^-27 kg) <cit.>. The pure Maxwell diffuse reflection model is used at the wall surface, as in the previous tests.
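The relation δ_0 ≈ 0.9025/Kn_0 follows from the hard-sphere mean-free-path definition assumed in the earlier check (λ = (16/5)μ/(ρ√(2πRT))). The sketch below evaluates δ_0 for the argon case and verifies the product δ_0·Kn_0; the mean temperature T_0 = 310 K is an assumption carried over from the earlier tests, since T_0 is not quoted for this channel, so the computed δ_0 only approximately matches the quoted 7.41.

```python
import math

K_B = 1.380649e-23
H = 0.22e-3  # m, channel height
GASES = {    # name: (mu_ref at 273 K [Pa.s], omega, molecular mass [kg])
    "He": (1.865e-5, 0.66, 6.65e-27),
    "Ne": (2.975e-5, 0.66, 33.5e-27),
    "Ar": (2.117e-5, 0.81, 66.3e-27),
}

def delta0_and_Kn0(gas, p0, T0=310.0):  # T0 assumed, not quoted for this channel
    mu_ref, omega, m = GASES[gas]
    mu0 = mu_ref * (T0 / 273.0) ** omega
    d0 = p0 * H / (mu0 * math.sqrt(2.0 * K_B * T0 / m))
    rho0 = p0 * m / (K_B * T0)
    lam0 = 3.2 * mu0 / (rho0 * math.sqrt(2.0 * math.pi * (K_B / m) * T0))
    return d0, lam0 / H

d0, kn0 = delta0_and_Kn0("Ar", 294.0)
print(f"Ar at 294 Pa: delta0 = {d0:.2f} (quoted 7.41), delta0*Kn0 = {d0 * kn0:.4f}")
```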
Since the relative density variation inside the micro-channel is very small, the volumetric velocity component <u>_Ω in the x direction at steady state is used to compute Ṁ_DSBGK as follows:

Ṁ_DSBGK = <u>_Ω m n_0 W_micro H_micro = [∑_j n_j V_j u_j/(n_0 L_micro W_micro H_micro)] m n_0 W_micro H_micro = m ∑_j n_j V_j u_j/L_micro,

where n_j, V_j ≡ ΔxΔyΔz and u_j are the number density, volume and flow velocity component of cell j inside the micro-channel, respectively, and n_0 = p_0/(k_B T_0). Similarly, the average velocity components <u>_∂Ω,in and <u>_∂Ω,out are also computed by using the summations over cells on the sections at the inlet and outlet of the micro-channel, respectively. The purpose is to monitor local convergence. The convergence processes of the three average velocity components in a representative case are given in Fig. <ref>, which shows that <u>_Ω converges much faster than the local quantities. Thus, the computational cost of studying the mass flow rate can be reduced by using the global quantity <u>_Ω. Fig. <ref> and Table <ref> show that the DSBGK results agree very well with the experimental data over a wide range of δ_0 for different gas species. Fig. <ref> also shows that the DSBGK results have smoother and milder variations with δ_0 than the experimental results.

       He, p_0 ∈ [67.3, 799] Pa        Ne, p_0 ∈ [66.9, 532] Pa        Ar, p_0 ∈ [67.4, 294] Pa
 δ_0    Ṁ_Exp.  Ṁ_DSBGK         δ_0    Ṁ_Exp.  Ṁ_DSBGK         δ_0    Ṁ_Exp.  Ṁ_DSBGK
 0.624  0.137   0.180           0.873  0.307   0.349           1.70   0.355   0.371
 0.865  0.193   0.219           1.22   0.399   0.423           2.01   0.409   0.401
 1.11   0.233   0.252           1.41   0.435   0.459           2.36   0.430   0.432
 1.48   0.278   0.296           1.58   0.469   0.488           2.70   0.487   0.464
 1.98   0.341   0.342           2.10   0.571   0.560           3.03   0.496   0.485
 2.47   0.402   0.383           2.78   0.629   0.640           3.38   0.531   0.511
 2.98   0.429   0.414           3.48   0.718   0.711           4.01   0.554   0.542
 3.47   0.465   0.441           4.18   0.795   0.753           4.73   0.636   0.575
 4.33   0.526   0.479           4.87   0.819   0.806           5.37   0.655   0.604
 5.57   0.532   0.521           5.24   0.845   0.819           6.04   0.611   0.625
 6.80   0.607   0.559           6.09   0.886   0.861           6.72   0.670   0.645
 7.41   0.587   0.574           6.94   0.896   0.897           7.41   0.702   0.664

§ FURTHER COMPARISON IN A 2D CASE The schematic of a 2D thermal transpiration argon gas flow is given in Fig. <ref>, and Fig. <ref> shows the comparison on the centerline between the DSMC result (black), the solution of the Shakhov model equation (blue), and the DSBGK result matching the heat conductivity coefficient via υ=2nk_BT/(3μ) (green), at Kn_0=0.2 (left) and Kn_0=1 (right), respectively. We also present the DSBGK result matching the viscosity via υ=nk_BT/μ (red), as the ordinary implementation of the BGK equation, which has noticeable error in thermal transpiration flow problems, as also reported elsewhere.

§ CONCLUSIONS Numerical simulations of thermal transpiration flows through a micro-channel with different species at different Kn are performed with the DSBGK method. Simulation setup effects on the final simulation results are discussed. It is found that, for flows of several species with different degrees of rarefaction, the mass flow rates predicted by the simulations and the measurements agree quite well. It indicates that the DSBGK method is superior to traditional particle simulation methods, which are subject to large statistical noise in simulating low-speed micro gas flows. Meanwhile, the DSBGK method can simulate micro-flows with high rarefaction, which is a serious challenge to traditional computation schemes based on the continuum flow assumption. This study also indicates that the simulation cost without reservoirs can be much lower than that of the full domain simulation.
However, their final flow field patterns are different because the confinement effect (inhaling and exhaling) is neglected in the reduced domain simulation without reservoirs. The confinement effect occurs in the outside regions close to the micro-channel ends and changes the inlet and outlet conditions of the micro-channel. It is easy to understand that simulations with reservoirs attached to the micro-channel ends are closer to the real experiments, and thus the results are more accurate. Simulations without reservoirs could develop much faster, but the difference between the mass flow rates computed with and without reservoirs is appreciable. Thus, it is questionable to determine the surface momentum accommodation coefficient by using the analytical solution of the mass flow rate obtained in a single-channel problem without the confinement effect of reservoirs at the two ends.

§ ACKNOWLEDGEMENTS J. Li thanks Prof. Irina Graur for her helpful suggestions.

Reynolds O. Reynolds, “On certain dimensional properties of matter in the gaseous state,” Philos. Trans. R. Soc. London, 170, 727-845 (1879). Maxwell J. Maxwell, “On stresses in rarefied gases arising from inequalities of temperature,” Philos. Trans. R. Soc. London, 170, 231-256 (1879). Knudsen M. Knudsen, “Eine Revision der Gleichgewichtsbedingung der Gase. Thermische Molekularströmung,” Ann. Phys., 336, 205-229 (1909). Shen2005 C. Shen, Rarefied Gas Dynamics: Fundamentals, Simulations and Micro Flows, Springer (2005). Ali G. Karniadakis, A. Beskok and N. Aluru, Microflows and Nanoflows: Fundamentals and Simulation, 2005, Springer-Verlag, New York. ISBN: 978-0-387-22197-7. doi:10.1007/0-387-28676-4. Vargo S. Vargo, E. Muntz, G. Shiflett and W. Tang, “Knudsen compressor as a micro- and macro-scale vacuum pump without moving parts or fluids,” J. Vac. Sci. Technol. A, 17, 2308 (1999). Young M. Young, Y. Han, E. Muntz, G. Shiflett, A. Ketsdever and A. Green, “Thermal transpiration in micro-sphere membranes,” AIP Conf. Proc., 663, 743-751 (2003). Alexeenko A. Alexeenko, S. Gimelshein, E. Muntz, and A. Ketsdever, “Kinetic modeling of temperature driven flows in short microchannels,” Int. J. Therm. Sci., 45, 1045-1051 (2006). Gupta N. K. Gupta, S. An and Y.B. Gianchandani, “A Si-micromachined 48-stage Knudsen pump for on-chip vacuum,” J. Micromech. Microeng., 22, 105026 (2012). Liang S. Liang, “Some measurements of thermal transpiration,” J. Appl. Phys., 22, 148 (1951). Weber S. Weber and G. Schmidt, Commun. Leiden. Rapp. et Commun, 246c, 72 (1936). Rosenberg A. Rosenberg and C. Martel Jr., “Thermal transpiration of gases at low pressures,” J. Phys. Chem., 62, 457-459 (1958). marcros1 M.R. Cardenas, I. Graur, P. Perrier and J.G. Meolans, “Time-dependent experimental analysis of a thermal transpiration rarefied gas flow,” Phys. Fluids, 25, 072001 (2013). doi:10.1063/1.4813805. marcros2 M.R. Cardenas, I. Graur, P. Perrier and J.G. Meolans, “Thermal transpiration flow: a circular cross-section microtube submitted to a temperature gradient,” Phys. Fluids, 23, 031702 (2011). marcros3 M.R. Cardenas, I. Graur, P. Perrier and J. Meolans, “An experimental and numerical study of the final zero-flow thermal transpiration stage,” J. Therm. Sci. Technol., 7, 437-452 (2012). los J. Los and R. Fergusson, “Measurements of thermomolecular pressure differences on argon and nitrogen,” Trans. Faraday Soc., 48, 730-738 (1952). Takaishi T. Takaishi and Y.
Sensui, “Thermal transpiration effect of hydrogen, rare gases and methane,” Trans. Faraday Soc., 59, 2503-2514 (1963). annis B. Annis, “Thermal creep in gases,” J. Chem. Phys., 57, 2898 (1972). Loyalka S. Loyalka and J. Cipolla Jr., “Thermal creep slip with arbitrary accommodation at the surface,” Phys. Fluids, 14, 1656 (1971). sone Y. Sone and H. Sugimoto, “Vacuum pump without a moving part and its performance,” AIP Conf. Proc., 663, 1041 (2003). Sugimoto H. Sugimoto, S. Kawakami and K. Moriuchi, “Rarefied gas flows induced through a pair of parallel meshes with different temperatures,” AIP Conf. Proc., 1084, 1021 (2008). Ewart T. Ewart, P. Perrier, I. Graur and J. G. Meolans, “Mass flow rate measurements in gas micro flows,” Exp. Fluids, 41, 487-498 (2006). Yamaguchi2014 H. Yamaguchi, M.R. Cardenas, P. Perrier, I. Graur and T. Niimi, “Thermal transpiration flow through a single rectangular channel,” J. Fluid Mech., 744, 169-182 (2014). Yamaguchi2016 H. Yamaguchi, P. Perrier, M.T. Ho, J.G. Meolans, T. Niimi and I. Graur, “Mass flow rate measurement of thermal creep flow from transitional to slip flow regime,” J. Fluid Mech., 795, 690-707 (2016). BGK P.L. Bhatnagar, E.P. Gross and M. Krook, “A model for collision processes in gases I: small amplitude processes in charged and neutral one-component systems,” Phys. Rev., 94, 511-525 (1954). Qian1992 Y.H. Qian, D. d'Humieres and P. Lallemand, “Lattice BGK models for Navier-Stokes equation,” Europhysics Letters, 17, 479-484 (1992). luo X. He and L. Luo, “Theory of the lattice Boltzmann method: from the Boltzmann equation to the lattice Boltzmann equation,” Phys. Rev. E, 56, 6811 (1997). chen S. Chen and G. Doolen, “Lattice Boltzmann method for fluid flows,” Annual Rev. Fluid Mech., 30, 329-364 (1998). https://doi.org/10.1146/annurev.fluid.30.1.329. Sharipov I. Graur and F. Sharipov, “Non-isothermal flow of rarefied gas through a long pipe with elliptic cross section,” Microfluid. Nanofluid., 6, 267-275 (2009). xugks K. Xu, “A gas-kinetic BGK scheme for the Navier-Stokes equations and its connection with artificial dissipation and Godunov method,” J. Comput. Phys., 171, 289-335 (2001). doi:10.1006/jcph.2001.6790. xu K. Xu and Z.H. Li, “Microchannel flow in the slip regime: gas-kinetic BGK-Burnett solutions,” J. Fluid Mech., 513, 87-110 (2004). https://doi.org/10.1017/S0022112004009826. xuuks K. Xu and J.C. Huang, “A unified gas kinetic scheme for continuum and rarefied flows,” J. Comput. Phys., 229 (2010). Bird1963 G. A. Bird, “Approach to translational equilibrium in a rigid sphere gas,” Phys. Fluids, 6, 1518 (1963). Bird1994 G. A. Bird, Molecular Gas Dynamics and the Direct Simulation of Gas Flows, Clarendon Press, Oxford (1994). fan0 J. Fan and C. Shen, “Statistical simulation of low-speed unidirectional flows in transition regime”, in Rarefied Gas Dynamics, edited by R. Brun, et al., Cepadus-Editions, Toulouse, 245 (1999). cai C. Cai, I.D. Boyd, J. Fan and G.V. Candler, “Direct simulation methods for low-speed microchannel flows,” J. Thermophys. Heat Transfer, 14 (3), 368-378 (2000). fan1 J. Fan and C. Shen, “Statistical simulation of low speed rarefied gas flows,” J. Comput. Phys., 167 (2), 393-412 (2001). sun Q. Sun and I.D. Boyd, “A direct simulation method for subsonic, microscale gas flows,” J. Comput. Phys., 179 (2), 400-425 (2002). https://doi.org/10.1006/jcph.2002.7061. fan2 C. Shen, J. Fan and C. Xie, “Statistical simulation of rarefied gas flows in micro-channels,” J. Comput. Phys., 189, 512-526 (2003). https://doi.org/10.1016/S0021-9991(03)00231-6. Li2010 J.
Li, “Direct simulation method based on BGK equation,” in 27th International Symposium on Rarefied Gas Dynamics, AIP Conference Proceedings, 1333: 283-288 (2011). Li2012 J. Li, “Comparison between the DSMC and DSBGK methods,” arXiv: 1207.1040 [physics.comp-ph] (2012). LiSultan2015 J. Li and A.S. Sultan, “Permeability computations of shale gas by the pore-scale Monte Carlo molecular simulations,” in International Petroleum Technology Conference, IPTC-18263-MS (2015). LiSultan2016 J. Li and A.S. Sultan, “Klinkenberg slippage effect in the permeability computations of shale gas by the pore-scale simulations,” J. Natural Gas Sci. Engineering, in press (2016). https://doi.org/10.1016/j.jngse.2016.07.041. entrance1 E.M. Sparrow, S.H. Lin and T.S. Lundgren, “Flow development in the hydrodynamic entrance region of tubes and ducts,” Phys. Fluids, 7 (3), 338-347 (1964). entrance2 Z. Duan and Y. Muzychka, “Slip flow in the hydrodynamic entrance region of circular and noncircular microchannels,” J. Fluids Eng., 132 (1), 011201 (2009). coeff E. B. Arkilic, K. S. Breuer and M.A. Schmidt, “Mass flow and tangential momentum accommodation in silicon micromachined channels,” J. Fluid Mech., 437, 29-43 (2001).
http://arxiv.org/abs/1708.08105v2
{ "authors": [ "Jun Li", "Chunpei Cai", "Zhi-Hui Li" ], "categories": [ "physics.flu-dyn" ], "primary_category": "physics.flu-dyn", "published": "20170827163000", "title": "Numerical study on thermal transpiration flows through a rectangular channel" }
Single- and double-scattering production of four muons in ultraperipheral PbPb collisions at the Large Hadron Collider Antoni Szczurek[Also at Faculty of Mathematics and Natural Sciences, University of Rzeszow, ul. Pigonia 1, 35-310 Rzeszów, Poland.] December 30, 2023 =======================================================================================================================================

The unique Steiner triple system of order 7 has a point-block incidence graph known as the Heawood graph. Motivated by questions in combinatorial matrix theory, we consider the problem of constructing a faithful orthogonal representation of this graph, i.e., an assignment of a vector in ℂ^d to each vertex such that two vertices are adjacent precisely when assigned nonorthogonal vectors. We show that d=10 is the smallest number of dimensions in which such a representation exists, a value known as the minimum semidefinite rank of the graph, and give such a representation in 10 real dimensions. We then show how the same approach gives a lower bound on this parameter for the incidence graph of any Steiner triple system, and highlight some questions concerning the general upper bound.

§ INTRODUCTION Fundamental to what follows is the idea of assigning a vector to each vertex of a graph so that the inner products among the vectors in some way reflect the adjacency relation on the vertices. A geometric representation of this sort may then be useful in studying properties of the graph. This approach dates back at least to the celebrated work of Lovász <cit.> in determining the Shannon capacity of the 5-cycle; see also <cit.> for a unifying discussion. The following definition provides one realization of this idea. Let G be a graph and X be an inner product space. An orthogonal representation of G in X is a function r:V(G) → X such that two vertices of G are adjacent if and only if they are mapped by r to nonorthogonal vectors, i.e., for any distinct u,v ∈ V(G), ⟨ r(u), r(v) ⟩ ≠ 0 ⟺ {u,v} ∈ E(G). We note that the notion requiring only the forward direction of (<ref>) has received a good deal of attention; many authors would refer to the notion set out in Definition <ref> as that of a faithful orthogonal representation. Also, what some authors would consider an orthogonal representation of G would be considered by others to be an orthogonal representation of the complement of G. Variations of this sort must be kept in mind when considering the related literature; some results derived in the context of different such choices are surveyed in each of <cit.> and <cit.>. The particular notion of an orthogonal representation given by Definition <ref> has relevance to combinatorial matrix theory in the context of certain variants of the minimum rank problem, which, broadly construed, calls for finding the smallest possible rank among all matrices meeting a given combinatorial description. Instances of this problem arise naturally in applications such as computational complexity theory <cit.> and quantum information theory <cit.>. Often, a matrix is first required to be symmetric (or Hermitian) and then further conditions are imposed in terms of the graph whose edges correspond to the locations of the off-diagonal nonzero entries of the matrix. This is made precise by the following definition. Note that, in all that follows, we denote by A_ij the entry in row i and column j of matrix A.
Let A be an n × n Hermitian matrix. The graph of A is the unique simple graph on vertices v_1,…,v_n such that, for every i ≠ j, vertices v_i and v_j are adjacent if and only if A_ij ≠ 0. The associated minimum rank problem is to determine the smallest rank among the Hermitian (or real symmetric) matrices with a fixed graph, a value known as the minimum rank of the graph. This problem has received considerable attention in combinatorial matrix theory; see <cit.> for a survey. The present work bears on a variant of the problem in which only positive semidefinite Hermitian matrices are considered. In particular, we study the graph invariant defined as follows. Let G be a simple graph on n vertices. The minimum semidefinite rank of G is the smallest rank among all positive semidefinite Hermitian matrices with graph G. This value is denoted by msr(G). The following observation connects this variant of the minimum rank problem with the notion of an orthogonal representation. It is a simple consequence of the characterization of positive semidefinite matrices as Gram matrices. The smallest d such that G has an orthogonal representation in ℂ^d is d = msr(G). The smallest d allowing an orthogonal representation of G in ℝ^d may also be of interest, in which case the minimum of Definition <ref> can be taken over the real symmetric matrices; we refer to this value as the minimum semidefinite rank of G over ℝ. The ordinary Laplacian matrix of a graph G on n vertices shows msr(G) to be well-defined and at most n-1. In addition, the minimum semidefinite rank is additive on the connected components of a graph, so that it is sufficient to consider connected graphs only. The question of how combinatorial properties of a graph relate to its minimum semidefinite rank has received a good deal of interest; see, e.g., <cit.> and <cit.>. One simple result is the following. If G is a cycle on n vertices, then msr(G) = n-2. The motivation for the present work begins with a result of <cit.> that can be recast as follows. If G is a connected triangle-free graph on n ≥ 2 vertices, then msr(G) ≥ (1/2)n. The question of how Theorem <ref> might generalize has received some attention. For instance, the implications of replacing the triangle-free condition with a larger upper bound on the clique number are explored in <cit.>. In <cit.>, it was observed that (except in trivial cases) a graph meeting the lower bound of Theorem <ref> must have a girth of 4, suggesting that, in seeking a generalization, the condition that the graph be triangle-free be viewed as a lower bound on its girth. In particular, the following conjecture was put forward. Suppose G is a connected graph on n ≥ 2 vertices and k is an integer with k ≥ 4. If G has girth at least k, then msr(G) ≥ ((k-2)/k)n. In light of Theorem <ref>, this conjecture may be viewed as asserting that, among the connected graphs of girth at least k, the minimum semidefinite rank as a fraction of the number of vertices is minimized by the k-cycle. While Conjecture <ref> was found to hold for all graphs on at most 7 vertices, it was suggested that a revealing test case might be provided by the cage graphs, defined as follows. A graph that has girth g in which each vertex has degree d is called a (d,g)-cage when no graph on fewer vertices has both of those properties. The well-known Petersen graph, with 10 vertices, is the unique (3,5)-cage. Its minimum semidefinite rank is 6, meeting the lower bound of Conjecture <ref>. There is also a unique (3,6)-cage, known as the Heawood graph, with 14 vertices, shown in Figure <ref>.
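Since the minimum semidefinite rank is an integer, the bound of Conjecture <ref> can be rounded up; the trivial computation below, included only for illustration, reproduces the values relevant to the Petersen and Heawood graphs.

```python
import math

def conjecture_lower_bound(n, k):
    """Bound msr(G) >= ((k-2)/k) * n from the conjecture above; since
    msr is an integer, the bound may be rounded up."""
    return math.ceil((k - 2) / k * n)

print(conjecture_lower_bound(10, 5))  # Petersen graph (girth 5): 6, attained
print(conjecture_lower_bound(14, 6))  # Heawood graph (girth 6): 10
```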
Conjecture <ref> would require its minimum semidefinite rank to be at least 10.Unfortunately, the problem of determining for a given graph the value of its minimum rank or minimum semidefinite rank may be very difficult. One of the few general techniques (introduced in <cit.>) that is available for lower-bounding the minimum rank involves computing a zero forcing parameter for the graph. Such a parameter gives an upper bound on the dimension of the null space, and hence a lower bound on the rank, of a matrix by exploiting how the graph of the matrix constrains the zero-nonzero patterns (i.e., supports) that may occur among its null vectors.A variant of the zero forcing technique specific to the positive semidefinite case was introduced in <cit.>.Namely, the positive semidefinite zero forcing number of G, denoted G, is defined for any graph G. While a precise definition of G is beyond the scope of this paper, we note that the definition is purely combinatorial, so that G may be considered from a strictly graph-theoretic perspective.(See, e.g., <cit.>.)Nevertheless, G≥ n - G for every graph G, though the gap may be arbitrarily large <cit.>.With H denoting the Heawood graph, computer calculation using <cit.> gives Z_+(H) = 5, implying that H≥ 9. The results developed in this paper show that in fact H=10. This is accomplished via a geometric approach that may be applied to the graph describing the incidence structure of any Steiner triple system.The Heawood graph is one such graph; we treat the general case in Section <ref>. The remainder of this paper is organized as follows.Section <ref> presents the necessary background regarding the Heawood graph and its relevant connections with other mathematical objects.Section <ref> develops the main results of the paper, establishing upper and lower bounds on the minimum semidefinite rank of the Heawood graph.Section <ref> explores how the same approach may be applied to the incidence graph of any Steiner triple system, and exhibits further bounds derived by this method.Finally, Section <ref> highlights some questions suggested by this work and possible directions for future research.§ THE HEAWOOD GRAPH The Heawood graph, shown in Figure <ref>, has served as an important example in the study of minimum rank problems.For example, the complement of this graph was used in <cit.> to give separation between various minimum rank parameters and corresponding combinatorial bounds.Also, <cit.> presented the first example of a zero-nonzero pattern for which the minimum rank (over the reals) was unequal to a combinatorial lower bound known as the triangle number, and the pattern given was exactly that of the biadjacency matrix (defined in Section <ref>) of the Heawood graph.The minimum rank of the Heawood graph may be obtained as follows.First, the ordinary zero forcing number (not the positive semidefinite variant) gives a lower bound of 8.Meanwhile, the adjacency matrix A of the graph has the eigenvalue √(2) with multiplicity 6, so that (A - √(2) I)=14-6=8 gives a corresponding upper bound.Hence, the minimum rank of the Heawood graph is 8.In the context of minimum semidefinite rank, interest in the Heawood graph emerged due to properties making it an attractive test case for Conjecture <ref>, as outlined in Section <ref>.For what follows, the most important way to view the Heawood graph is through its connection with the Fano plane, the finite projective plane of order 2,illustrated in Figure <ref>.This is a finite geometry comprising seven points and seven 
lines in which each line contains exactly three points and each point lies on exactly three lines.Hence, its points and lines give a Steiner triple system (in fact, the unique one) of order 7.We return to this connection in Section <ref>; for now, we need to note only the following.The points of the Fano plane may be identified with the integers 1,2,…,7 so that the set of its lines becomes{{1,2,6}, {2,3,7}, {1,3,4},{2,4,5},{3,5,6},{4,6,7}, {1,5,7}}.Figure <ref> shows the points of the Fano plane labeled to reflect such an identification. The Heawood graph is the point-edge incidence graph of the Fano plane.That is, its vertices can be partitioned into two independent sets, one in correspondence with the points of the Fano plane, and the other in correspondence with its lines, such that a point and a line are incident precisely when the corresponding vertices are adjacent.Through this connection, many of the properties of the Heawood graph that we will need follow from properties of the Fano plane.One such property concerns the smallest size of a set from which one may color the points of the Fano plane without inducing a monochromatic line, a line with all of its points colored the same.In particular, the following simple fact (a special case of a result of <cit.>; see Section <ref>) is straightforward to verify.Every 2-coloring of the points of the Fano plane induces a monochromatic line. § THE MINIMUM SEMIDEFINITE RANK OF THE HEAWOOD GRAPH The goal of this section is to establish that the minimum semidefinite rank of the Heawood graph is 10, and that this in fact holds over ℝ as well.We begin by noting that if a graph is bipartite, then a special attack is possible on the problem of determining its minimum semidefinite rank. A brief argument in the case of the Heawood graph follows; for a general discussion, see <cit.> or <cit.>. Let F be ℝ or ℂ.The Heawood graph has an orthogonal representation in F^7+n if and only if some matrix A ∈ M_7+n,7(F) with mutually orthogonal columns has the form [ [ * * 0 0 0 * 0; 0 * * 0 0 0 *; 0 * * 0 0 0; 0 * 0 * * 0 0; 0 0 * 0 * * 0; 0 0 0 * 0 * *; 0 0 0 * 0 *; u_1 u_2 u_3 u_4 u_5 u_6 u_7 ]],where each ∗ denotes a nonzero entry, and each u_i is a (column) vector in F^n.Let B be a biadjacency matrix for the Heawood graph; that is, B is a (0,1)-matrix with rows in correspondence with one of its partite sets and columns in correspondence with the other such that B_ij=1precisely when the vertices corresponding to row i and column j are adjacent.Then, subject to an appropriate ordering of its rows and columns, the zero-nonzero pattern of B is given by the upper 7× 7 submatrix of (<ref>).That is, A ∈ M_7+n,7(F) is of the form (<ref>) if and only if the upper 7× 7 submatrix of A has the zero-nonzero pattern of B.Thus, if such a matrix A exists with mutually orthogonal columns, then the columns of A together with the initial 7 unit coordinate vectors in F^7+n form an orthogonal representation of the Heawood graph.Conversely, given an orthogonal representation of the Heawood graph in F^7+n, it may be assumed (subject to an appropriate unitary transformation) that one of the partite sets is assigned the first 7 standard coordinate vectors. Taking the vectors assigned to the other partite set as the columns of a matrix A then gives A ∈ M_7+n,7(F) of the form (<ref>) with mutually orthogonal columns. 
Hence, the minimum semidefinite rank of the Heawood graph is seen to be the smallest value of 7+n such that some A ∈ M_7+n,7(ℂ) of the form (<ref>) has mutually orthogonal columns.Lemma <ref> gives a useful reformulation of this condition; its proof relies on the followingobservation. Let A be a matrix of the form (<ref>).In particular, each of the seven lines of the Fano plane, as given in (<ref>), gives the locations of the nonzero entries within one of the first seven rows of A. Since each pair of points of the Fano plane lies on exactly one line, it follows that, for every pair of distinct columns i and j of A, there exists a unique k ∈{1,2,…,7} such that both columns have a nonzero entry in row k.Hence, when A has entries from ℂ, the two columns are orthogonal if and only if A_kiA_kj = -⟨ u_i, u_j ⟩.Note that, combinatorially, Observation <ref> derives from the fact that the Heawood graph is the incidence graph of the Fano plane, precisely because the lines of the Fano plane form a Steiner triple system.Suppose u_1,u_2,…, u_7 ∈ℂ^n.Then the following are equivalent. * The vectors u_1,u_2,…,u_7 occur as the u_i of (<ref>) in some matrix A ∈ M_7+n,7(ℂ) of that form having mutually orthogonal columns. * The product ⟨ u_i, u_j ⟩⟨ u_j, u_k ⟩⟨ u_k, u_i⟩ is real and negative whenever {i,j,k} is a line in the Fano plane, i.e., whenever {i,j,k} is contained in the set (<ref>). Moreover, if condition (<ref>) is satisfied with each u_i ∈ℝ^n, then a matrix A witnessing condition (<ref>) exists with A ∈ M_7+n,7(ℝ). Suppose first that condition (<ref>) is satisfied.Then there exists some A ∈ M_7+n,7(ℂ) with mutually orthogonal columns such thatA = [ [ a b 0 0 0 c 0; 0 * * 0 0 0 *; 0 * * 0 0 0; 0 * 0 * * 0 0; 0 0 * 0 * * 0; 0 0 0 * 0 * *; 0 0 0 * 0 *; u_1 u_2 u_3 u_4 u_5 u_6 u_7 ]],where a, b, c and each ∗ entry are nonzero. Since the columns indexed by the set {1,2,6} are mutually orthogonal, it follows from Observation <ref> thata = -⟨ u_1,u_2⟩/b,b = -⟨ u_2, u_6 ⟩/c andc = -⟨ u_6, u_1 ⟩/a.Combining the first and third of these equations yieldsc = -⟨ u_6, u_1 ⟩/a = -⟨ u_6, u_1 ⟩b/-⟨ u_2,u_1⟩ = ⟨ u_6, u_1 ⟩/⟨ u_2,u_1⟩ b,and combining this with the second equation of (<ref>) givesb = -⟨ u_2, u_6 ⟩/c = -⟨ u_2, u_6 ⟩/b⟨ u_1, u_2 ⟩/⟨ u_1, u_6⟩,which implies that 0 > -|b|^2 = ⟨ u_1, u_2 ⟩⟨ u_2, u_6 ⟩/⟨ u_1, u_6⟩⟨ u_6, u_1⟩/⟨ u_6, u_1⟩ = ⟨ u_1, u_2 ⟩⟨ u_2, u_6 ⟩⟨ u_6, u_1⟩/|⟨ u_1, u_6⟩|^2.In particular, then, ⟨ u_1, u_2 ⟩⟨ u_2, u_6 ⟩⟨ u_6, u_1⟩ is real and negative.This same argument may be applied to every row of A, and hence condition (<ref>) holds.Conversely, suppose condition (<ref>) holds. Equations analogous to (<ref>), (<ref>) and (<ref>) then yield values for the nonzero entries in each of the initial 7 rows of a matrix A of the form (<ref>).Explicitly, for each m ∈{1,2,…,7}, if the three nonzero entries in row m fall in columns i, j and k with i < j < k, then values for those entries may be taken asA_mj =√(-⟨ u_i, u_j ⟩⟨ u_j, u_k ⟩⟨ u_k, u_i⟩/|⟨ u_i, u_k⟩|^2),A_mi = -⟨ u_i,u_j⟩/A_mj, and A_mk = A_mj⟨ u_k, u_i ⟩/⟨ u_j,u_i⟩.It then follows from Observation <ref> that A has mutually orthogonal columns. Hence, condition (<ref>) holds. 
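The entry formulas above are easy to sanity-check numerically. The following is a small C++ check of our own (not from the paper): it takes the real vectors u_i=(1,0), u_j=(1,2), u_k=(1,-1), for which ⟨u_i,u_j⟩⟨u_j,u_k⟩⟨u_k,u_i⟩ = (1)(-1)(1) < 0, computes the three row entries by the formulas of the proof, and verifies the relation A_ki A_kj = -⟨u_i,u_j⟩ of Observation <ref> for each pair of columns:

#include <cassert>
#include <cmath>
#include <cstdio>

static double dot(const double* a, const double* b) { return a[0]*b[0] + a[1]*b[1]; }

int main()
{
    // Example real vectors satisfying the negativity condition (chosen for illustration).
    double ui[2] = {1.0, 0.0}, uj[2] = {1.0, 2.0}, uk[2] = {1.0, -1.0};
    double P = dot(ui,uj) * dot(uj,uk) * dot(uk,ui);   // = -1 < 0
    assert(P < 0.0);

    // Row entries per the formulas in the proof above (real case).
    double Amj = std::sqrt(-P / (dot(ui,uk)*dot(ui,uk)));
    double Ami = -dot(ui,uj) / Amj;
    double Amk = Amj * dot(uk,ui) / dot(uj,ui);

    // Each printed value should vanish: A_ki A_kj = -<u_i,u_j>, etc. (Observation).
    std::printf("%g %g %g\n", Ami*Amj + dot(ui,uj),
                              Amj*Amk + dot(uj,uk),
                              Ami*Amk + dot(ui,uk));
    return 0;
}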
Lemmas <ref> and <ref> together show that the minimum semidefinite rank of the Heawood graph is the smallest value of 7+n such that vectors u_1,u_2,…,u_7 ∈ℂ^n exist satisfying condition (<ref>) of Lemma <ref>.In particular, to establish an upper bound of 10 on this value, it suffices to construct vectors in ℂ^3 satisfying this condition; we next show that in fact such vectors can be constructed in ℝ^3.There exist vectors u_1,u_2,…,u_7 ∈ℝ^3such that ⟨ u_i, u_j ⟩⟨ u_j, u_k ⟩⟨ u_k, u_i⟩ is real and negative whenever {i,j,k} is a line in the Fano plane, i.e., whenever {i,j,k} is contained in the set (<ref>).Given a positive real number α, letu_j = (cos(2π j/7),sin(2π j/7), √(α) )for each j∈{1,2,…,7}. Then, for any j and k, ⟨ u_j,u_k ⟩ = cos(2π j/7)cos(2π k/7) + sin(2π j/7)sin(2π k/7) + α= cos( 2π(j-k)/7) + α,so that ⟨ u_j,u_k⟩ is completely determined by the difference j-k modulo 7. But the set of pairwise differences modulo 7 is the same for every set contained in (<ref>); explicitly, it is {1,4,5}.Hence, it suffices to ensure that the conclusion holds for any one such set, e.g., to guarantee that ⟨ u_1, u_2 ⟩⟨ u_2, u_6 ⟩⟨ u_6, u_1⟩ is real and negative.This can be achieved by choosing α such that cos(3π/7) < α< cos(π/7), as then⟨u_1,u_2 ⟩ =cos(2π/7) + α >0,cos(2π/7) + α ⟨u_2,u_6 ⟩ =cos(8π/7) + α =-cos(π/7) + α< 0,and⟨u_6,u_1 ⟩ = cos(10π/7)+ α = -cos(3π/7)+α> 0. This yields an upper bound on the minimum semidefinite rank of the Heawood graph. The minimum semidefinite rank of the Heawood graph is at most 10.By Lemma <ref>, there exist u_1,u_2,…,u_7 ∈ℝ^3 satisfying condition (<ref>) of Lemma <ref>, and hence appearing as the u_i of (<ref>) for some matrix A ∈ M_10,7(ℝ) with orthogonal columns.Hence, by Lemma <ref>, the Heawood graph has an orthogonal representation in ℝ^10, and hence in ℂ^10. To establish our main result, we turn now to the requisite lower bound, namely that the minimum semidefinite rank of the Heawood graph is at least 10.By the discussion above, it suffices to show that no vectors from ℂ^2 exist satisfying the conditions of Lemma <ref>.Our approach can be summarized as follows.Assuming to the contrary that such vectors do exist, we identify them with points on the Riemann sphere.We then argue that the two conditions on these vectors shown to be equivalent by Lemma <ref> are themselves equivalent to a condition on the corresponding points on the sphere that, when satisfied, implies that these points must be arranged in such a way as to induce a 2-coloring of the Fano plane with no monochromatic line, in contradiction to Lemma <ref>.Our first task is to establish the appropriate correspondence between vectors in ℂ^2 and points on the appropriate sphere in ℝ^3.We begin with a crucial observation. Each of the two conditions of Lemma <ref> is unaffected by multiplying any individual vector by an arbitrary nonzero complex scalar. In light of Observation <ref>, we may regard the conditions of Lemma <ref> as applying to points on the projective line ℂP^1, which can be thought of as the extended complex plane, ℂ∪{∞}.Through the usual stereographic projection, the extended complex plane can be transformed bijectively to a sphere in ℝ^3.The image of such an identification is typically referred to as the Riemann sphere. 
(See <cit.> for details.)Any sphere in ℝ^3 can be made the image of such an identification; for the sake of making our computations explicit in what follows, we will choose the sphere of radius 1/2 centered at (0,0,1/2), namelyS = { (x,y,z) : x^2+y^2+(z-1/2)^2 = (1/2)^2 } = { (x,y,z) : x^2+y^2+z^2 = z }.Again for the sake of explicit computation, we now define a function φ that effects the identification outlined above, mapping points in ℂ^2 to points on S.Let φ:ℂ^2 →ℝ^3 be defined as follows. First, let φ_1 map ℂ^2 to ℂP^1 in the usual way, i.e.,φ_1(z_1,z_2) =z_1/z_2ifz_2≠ 0,∞ otherwise.Next, apply the familiar stereographic projection to map the image of φ_1 to the sphere S defined in (<ref>).Specifically, identify ∞ with the “pole” of the sphere at (0,0,1), and identify a+bi ∈ℂ with the unique point at which S∖{(0,0,1)} intersects the line parameterized byt(a,b,0) + (1-t)(0,0,1), t ∈ℝ.It follows from (<ref>) that this point of intersection is 1/1+a^2+b^2(a,b,a^2+b^2).Thus, we letφ_2(z) = 1/1+a^2+b^2(a,b,a^2+b^2)ifz=a+bi,and (0,0,1)ifz = ∞.Finally, let φ = φ_2 ∘φ_1. Having identified each vector in ℂ^2 with a point on the sphere S,the conditions of Lemma <ref> applied to triples of vectors in ℂ^2 can be reinterpreted as applying to triples of points on S.The next lemma provides two equivalent such interpretations. Let C denote the center of the sphere S defined in (<ref>), i.e., C=(0,0,1/2).For any u_i, u_j, u_k ∈ℂ^2, the following are equivalent. *The product ⟨ u_i,u_j ⟩⟨ u_j,u_k ⟩⟨ u_k,u_i ⟩ is real and negative. *No two of φ(u_i), φ(u_j) and φ(u_k) are antipodal on S, but the convex hull of all three contains C. *Every plane passing through C that contains none of φ(u_i), φ(u_j) and φ(u_k) separates one of those latter three points from the other two.Moreover, each of those three points is separated from the other two by some such plane. We start with some simplifying assumptions.First, subject to the appropriate scaling, we may assume that each of u_i, u_j and u_k is equal either to (1,0) or to (z,1) for some z∈ℂ.(That is, we may work projectively.)Since it is clear from Definition <ref> that the image of a point under φ isdetermined only by the line through the origin in ℂ^2 on which the point lies, this cannot affect conditions (<ref>) or (<ref>), while by Observation <ref> it does not affect condition (<ref>).Next, observe that some pair of rotations of the sphere can be applied sequentially to move φ(u_i) to the origin and φ(u_j) to a point on the xz-plane with a nonnegative x-coordinate.This clearly leaves conditions(<ref>) and (<ref>) unaffected.Moreover, such a rigid motion of the sphere corresponds to a unitary transformation of ℂ^2 <cit.> and hence preserves condition (<ref>).Hence, we may assume that φ(u_i)=(0,0,0), so that, equivalently, u_i = (0,1), and also that for some nonnegative real number s,φ(u_j)=1/1+s^2(s,0,s^2),so that, equivalently,u_j = (s,1).These assumptions are illustrated in Figure <ref>. Finally, for any z ∈ℂ^2, let φ(z) denote the point on S antipodal to φ(z). In particular,φ(u_j)= 1/1+s^2( -s,0, 1 ).We now begin the proof by showing conditions (<ref>) and (<ref>) to be equivalent.Suppose first that (<ref>) holds.This is incompatible with u_k=(1,0), since u_i=(0,1).Therefore u_k=(t,1) for some t∈ℂ.Hence, ⟨ u_i,u_j ⟩⟨ u_j,u_k ⟩⟨ u_k,u_i ⟩= 1+st is real and negative by (<ref>), and so t ∈ℝ with t < 0 and s > 0. Thus,φ(u_k)=1/1+t^2(t,0,t^2),and so φ(u_k) lies on the xz-plane with a negative x-coordinate. 
Moreover,1+st < 0|st| > 1s^2t^2 > 11+t^2 < t^2(1+s^2) 1/1+s^2 < t^2/1+t^2.By (<ref>) and (<ref>), this shows that the z-coordinate of φ(u_k) exceeds that ofφ(u_j), so that C is in the convex hull of φ(u_i), φ(u_j) and φ(u_k), and also shows that φ(u_j) ≠φ(u_k).Moreover, neither φ(u_j) nor φ(u_k) may equal φ(u_i)=(0,0,1). Hence, the three points φ(u_i), φ(u_j) and φ(u_k) do not contain an antipodal pair.Thus, condition (<ref>) holds.Now suppose that (<ref>) holds.Then s>0, as s=0 would give φ(u_j)=φ(u_i), requiring φ(u_k) = φ(u_i) in order that C lie in the convex hull of the three points.Further, since φ(u_k) ≠φ(u_i), we cannot have u_k=(1,0).Therefore, u_k=(t,1) for some t ∈ℂ. The fact that C is in the convex hull of φ(u_i), φ(u_j) and φ(u_k) implies that φ(u_k) lies on the plane containing φ(u_i), φ(u_j) and C, namely the xz-plane.Thus, we have t∈ℝ, so thatφ(u_k) = 1/1+t^2(t,0,t^2). Moreover, since φ(u_k) and φ(u_j) must lie on opposite sides of the yz-plane, we have t<0. Finally, the z-coordinate of φ(u_k) must exceed that of φ(u_j), so that t^2/1+t^2 > 1/1+s^2. This gives t^2+t^2s^2 = t^2(1+s^2) > 1+t^2.Hence, s^2t^2 > 1, so that |st| > 1.Combining this with the fact that t<0 and s > 0 so that st < 0, we have ⟨ u_i,u_j ⟩⟨ u_j,u_k ⟩⟨ u_k,u_i ⟩=1+st < 0, so that condition (<ref>) holds.Having shown that conditions (<ref>) and (<ref>) are equivalent, we now complete the proof by proving the equivalence of conditions (<ref>) and (<ref>). Assume first that (<ref>) holds.Then C is in the convex hull of φ(u_i), φ(u_j) and φ(u_k), so that all three points lie on the xz-plane.Now consider a plane P passing through C that contains none of φ(u_i), φ(u_j) and φ(u_k).As P may not coincide with the xz-plane, its intersection with the xz-plane is a line L passing through C.Since C is in the convex hull of φ(u_i), φ(u_j) and φ(u_k), these three points cannot lie all on the same side of L.This implies that L, and hence P, separates one of the three points from the other two.It remains to show that each of φ(u_i), φ(u_j) and φ(u_k) is separated from the other two by some plane containing C.By symmetry, it suffices to prove that φ(u_i) is separated from φ(u_j) and φ(u_k) by some such plane.Toward that end, consider the plane P perpendicular to the xz-plane and passing through φ(u_j) and φ(u_j).Since C is in the convex hull of the three points, φ(u_k) cannot lie on the same side of P as does φ(u_i).Moreover, φ(u_k) cannot lie on the plane P, as this would imply φ(u_k)=φ(u_j), which (<ref>) forbids.Hence, φ(u_i) and φ(u_k) must lie on opposite sides of P.Since P contains the line through C that is perpendicular to the xz-plane, it follows that rotating P about this line bya sufficiently small angle produces a plane through C separating φ(u_i) from φ(u_j) and φ(u_k), as desired.Now suppose (<ref>) holds. 
If any two points among φ(u_i), φ(u_j) and φ(u_k) were antipodal, then those two points could not be separated from the third by any plane containing C, which would contradict (<ref>).Hence, φ(u_i), φ(u_j) and φ(u_k) do not contain an antipodal pair, and it remains to show that C is in their convex hull.First, since φ(u_i) and φ(u_j) are not antipodal, they lie on the same side of some line in the xz-plane passing through C.If φ(u_k) were not in the xz-plane, then rotating the xz-plane about this line by some small angle would produce a plane relative to which all three of φ(u_i), φ(u_j) and φ(u_k) would lie on the same side, contradicting (<ref>).Hence, φ(u_k) lies on the xz-plane along with φ(u_i), φ(u_j) and C.We have by (<ref>) that φ(u_i) is separated from φ(u_j) and φ(u_k) by some plane P that contains none of those points but does contain C.As P may not coincide with the xz-plane, it intersects the xz-plane in some line L passing through C.Let A be the point at which the line through φ(u_i) and φ(u_j) intersects L and let B be the point at which the line through φ(u_i) and φ(u_k) intersects L.(See Figure <ref>.) Note that φ(u_j) and φ(u_k) must lie on opposite sides of the yz-plane, as otherwise rotating that plane by some small angle about the line through C that is perpendicular to the zx-plane would produce a plane containing C on one side of which would lie all three of the points φ(u_i), φ(u_j) and φ(u_k), contradicting (<ref>).It follows that A and B lie on opposite sides of the yz-plane as well.This implies that C is on the line segment with endpoints A and B.Since A and B were chosen within the convex hull of φ(u_i), φ(u_j) and φ(u_k), it followsthat C lies in the convex hull of those points as well.Hence, condition (<ref>) holds. We now have that any collection of vectors in ℂ^2 satisfying the algebraic condition (<ref>) of Lemma <ref> corresponds to a collection of points on the sphere S arranged such that every triple of points corresponding to a line in the Fano plane satisfies the geometric conditions (<ref>) and (<ref>) of Lemma <ref>.We next show that such an arrangement gives rise to an impossible coloring of the Fano plane, a contradiction that yields our desired lower bound.The minimum semidefinite rank of the Heawood graph is at least 10.Suppose to the contrary that the Heawood graph has an orthogonal representation in ℂ^9.Then, by Lemma <ref>, there exist vectors u_1,u_2,…,u_7 ∈ℂ^2 satisfying the equivalent conditions of Lemma <ref>.By Lemma <ref>, these vectors induce points φ(u_1),φ(u_2),…,φ(u_7) on the sphere S defined in (<ref>) such that whenever {i,j,k} is a line in the Fano plane, i.e., whenever {i,j,k} is contained in the set (<ref>), the triple of points φ(u_i), φ(u_j) and φ(u_k) satisfies condition (<ref>) of Lemma <ref>.Let P be any plane through the center of S that contains none of the points φ(u_1),φ(u_2),…,φ(u_7).Then P divides S into hemispheres.Choose one of the hemispheres, and color red every point i of the Fano plane such that φ(u_i) lies on that hemisphere.Color the other points of the Fano plane green.By Corollary <ref>, this coloring must result in some line of the Fano plane, say {r,s,t}, all of whose points are colored the same.But this means that φ(u_r), φ(u_s) and φ(u_t) lie all on the same hemisphere of S, contradicting condition (<ref>) of Lemma <ref>.Our main result now follows from the combination of Propositions <ref> and <ref>. The minimum semidefinite rank of the Heawood graph is 10. 
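Both ingredients of the preceding argument are concrete enough to confirm by machine. The following C++ sketch (ours, not part of the paper) constructs the vectors u_j of Lemma <ref> with α taken midway between cos(3π/7) and cos(π/7), checks that the product of pairwise inner products is negative on each of the seven lines of (<ref>), and then exhaustively confirms Corollary <ref> by testing all 2^7 two-colorings of the Fano plane for a monochromatic line:

#include <cmath>
#include <cstdio>

// The seven lines of the Fano plane, as in (<ref>) (points labeled 1..7).
static const int LINES[7][3] = {{1,2,6},{2,3,7},{1,3,4},{2,4,5},{3,5,6},{4,6,7},{1,5,7}};

int main()
{
    const double PI = 3.14159265358979323846;
    // u_j = (cos(2*pi*j/7), sin(2*pi*j/7), sqrt(alpha)), with cos(3*pi/7) < alpha < cos(pi/7).
    const double alpha = 0.5 * (std::cos(PI/7.0) + std::cos(3.0*PI/7.0));
    double u[8][3];
    for (int j = 1; j <= 7; ++j)
    {
        u[j][0] = std::cos(2.0*PI*j/7.0);
        u[j][1] = std::sin(2.0*PI*j/7.0);
        u[j][2] = std::sqrt(alpha);
    }
    for (int l = 0; l < 7; ++l)
    {
        int i = LINES[l][0], j = LINES[l][1], k = LINES[l][2];
        double dij = u[i][0]*u[j][0] + u[i][1]*u[j][1] + u[i][2]*u[j][2];
        double djk = u[j][0]*u[k][0] + u[j][1]*u[k][1] + u[j][2]*u[k][2];
        double dki = u[k][0]*u[i][0] + u[k][1]*u[i][1] + u[k][2]*u[i][2];
        std::printf("line %d: triple product = %g (negative, as required)\n", l, dij*djk*dki);
    }
    // Corollary: every 2-coloring of the points induces a monochromatic line.
    for (int c = 0; c < (1 << 7); ++c)       // bit j-1 of c is the color of point j
    {
        bool mono = false;
        for (int l = 0; l < 7 && !mono; ++l)
        {
            int a = (c >> (LINES[l][0]-1)) & 1;
            int b = (c >> (LINES[l][1]-1)) & 1;
            int d = (c >> (LINES[l][2]-1)) & 1;
            mono = (a == b && b == d);
        }
        if (!mono) { std::printf("counterexample coloring %d!\n", c); return 1; }
    }
    std::printf("all 128 colorings induce a monochromatic line\n");
    return 0;
}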
§ INCIDENCE GRAPHS OF STEINER TRIPLE SYSTEMSWe now identify the Heawood graph as one of a general family of graphs to which the approach of Section <ref> may be applied.Recall the following definition from combinatorial design theory; see, e.g., <cit.>. A Steiner triple system of order v consists of a set X, whose elements are called the points of the system, such that |X|=v, together with a collection of 3-subsets of X, called the triples of the system, such that every 2-subset of X is contained in exactly one triple. It follows from Observation <ref> that the lines of the Fano plane form a Steiner triple system of order 7.(Actually it is the unique Steiner triple system of that order.)A fact crucial to the proof of Proposition <ref> was previously noted as Lemma <ref>, namely that every 2-coloring of the points of the Fano plane induces a monochromatic line.More generally, the weak chromatic number of a Steiner triple system is the smallest number of colors from which the points of the system may be colored such that no triple is left with all of its points colored the same; Lemma <ref> is a special case of the following result of <cit.>. Every Steiner triple system of order 7 or greater has a weak chromatic number of at least 3. Every Steiner triple system is represented by a bipartite graph in the same sense in which the Fano plane is represented by the Heawood graph. The incidence graph of a Steiner triple system is the graph G whose vertices can be partitioned into two sets, one in correspondence with the points of the system and the other in correspondence with its triples, such that two vertices are adjacent precisely when they correspond to a point and a triple containing that point. Hence, the Heawood graph is the incidence graph of the unique Steiner triple system of order 7.The approach developed in Section <ref> to establish a lower bound on the minimum semidefinite rank of the Heawood graph can be adapted to do the same for the incidence graph of any Steiner triple system of order at least 7. Let G be the incidence graph of a Steiner triple system of order v ≥ 7, let b be the number of triples of the system, and let n be the number of vertices of G.Then b = 1/3v2, n= b + v = 1/6 (v^2+5v), andG≥b + 3 = 1/6 (v^2-v+18). That b = 1/3v2 follows immediately from Definition <ref>.The claim that n=b+v is trivial. By definition, G has a biadjacency matrix M of size b × v. With the zero-nonzero pattern of M playing the role of the upper portion of (<ref>), a result analogous to Lemma <ref> is obtained by the same argument. It follows from Definition <ref> that, for any matrix whose initial b rows have a zero-nonzero pattern matching that of M, a statement analogous to Observation <ref> holds.Hence, the statement and proof of Lemma <ref> can be adapted in a straightforward way, and Lemma <ref> can then be applied without modification.The conclusion of the argument then proceeds as in the proof of Proposition <ref>.That is, any supposed orthogonal representation of G in fewer than b+3 dimensions gives rise to v points on the Riemann sphere arranged so as to induce a 2-coloring of the points of the Steiner triple system in which at least two different colors occur within every triple, contradictingTheorem <ref>. 
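For concreteness, the quantities in Theorem <ref> can be tabulated for the small admissible orders (a Steiner triple system of order v exists precisely when v ≡ 1 or 3 mod 6). A short C++ sketch of our own; the v=7 row recovers b=7, n=14 and the lower bound 10 attained by the Heawood graph:

#include <cstdio>

int main()
{
    // Orders of Steiner triple systems considered in the paper.
    const int orders[] = {7, 9, 13, 15};
    std::printf("  v   b = v(v-1)/6   n = (v^2+5v)/6   bound = (v^2-v+18)/6 = b+3\n");
    for (int v : orders)
    {
        int b = v*(v-1)/6;             // number of triples
        int n = (v*v + 5*v)/6;         // vertices of the incidence graph
        int bound = (v*v - v + 18)/6;  // lower bound of Theorem
        std::printf("%3d %12d %15d %20d\n", v, b, n, bound);
    }
    return 0;
}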
We wish to point out the limitations of Theorem <ref>,so as to make clear why we did not attempt to develop our main results in such general terms.To this end, note that the incidence graph G of a Steiner triple system of order v has an independent set (corresponding to the triples of the system) of size b=1/3v2.This trivially implies that G≥ b = 1/3v2 = 1/6(v^2 - v), and Theorem <ref>provides only a slight improvement on this bound.In the case of the Heawood graph, what is interesting is that this improved bound is sharp.It is unclear whether this remains the case for the incidence graphs of larger Steiner triple systems, however.Nevertheless, there are many (see <cit.>) Steiner triple systems of small order, and the application of Theorem <ref> to their incidence graphs may be illuminating. § CONCLUSION AND OPEN QUESTIONS Theorem <ref> gives a lower bound on G whenever G is the incidence graph of a Steiner triple system.It is natural to compare this bound with that obtained from the positive semidefinite zero forcing number G referenced in Section <ref>.Table <ref> details the result of this comparison for each Steiner triple system of order v, with v > 3 to avoid the trivial case, up to v = 15. (The next order for which any Steiner triple systems exist is v=19, but in this case it would be computationally expensive to determine G for even just one of these, and there are altogether 11,084,874,829 of them <cit.>.)Table <ref> invites some observations.The first is that in each case the lower bound obtained from Theorem <ref> exceeds the lower bound provided by the zero forcing number by exactly one.This happens to be the case because, for each graph G of the 84 detailed in the table, G turns out to be 2 less than the order of the corresponding Steiner triple system.The question as to whether this holds in general is outside the scope of the present work, but seems interesting.Does G = v - 2 whenever G is the incidence graph of a Steiner triple system of order v? An affirmative answer to Question <ref> would imply that the positive semidefinite zero forcing number of the incidence graph of a Steiner triple system of order v is determined by v alone.It is open as well whether this may be the case for the positive semidefinite minimum rank itself. Do there exist two nonisomorphic Steiner triple systems of the same order whose incidence graphs differ in their minimum semidefinite rank? In particular, although the lower bound provided by Theorem <ref> is met by the Heawood graph, we do not expect that this is uniformly the case for the incidence graphs of Steiner triple systems of larger order.Nevertheless, the question remains open even for the unique Steiner triple system of order 9. Is the Heawood graph the only incidence graph of a Steiner triple system for which the lower bound of Theorem <ref> is met?In particular, with G the incidence graph of the unique Steiner triple system of order 9, does an orthogonal representation of G in ℂ^15 exist? Given a lower bound on the minimum semidefinite rank, the problem of establishing a corresponding upper bound is often handled via some appropriate geometric construction.Here this is done for the Heawood graph via Lemma <ref>.For the incidence graphs of larger Steiner triple systems, however, the problem remains open. 
Can properties of Steiner triple systems in general be exploited to construct low-dimensional orthogonal representations for their incidence graphs?

Of course, the questions explored here for Steiner triple systems may be considered for the incidence graphs of other families of combinatorial designs. By definition, such graphs are bipartite, and so a natural analog of Lemma <ref> is always available. No appropriate generalization of Lemma <ref>, however, seems forthcoming in any case beyond that of a Steiner triple system. For the case of Steiner triple systems, Lemma <ref> gives a useful geometric interpretation of the conditions on the vectors u_i of Lemma <ref>, and this was crucial to the approach used to obtain the lower bound of Theorem <ref>. This raises the question as to whether there can be found some analogous geometric interpretation for these conditions as they apply to vectors in ℂ^k for k ≥ 3. Such an interpretation might provide an avenue toward generalizing the lower bound established here for the Heawood graph to the minimum semidefinite ranks of the incidence graphs of other Steiner triple systems.

§ ACKNOWLEDGMENTS

The present work developed through a collaboration of the authors that began at the 2010 NSF-CBMS Regional Research Conference entitled The Mutually Beneficial Relationship of Matrices and Graphs, supported by the IMA and by the NSF through grant number DMS-0938261. The authors wish to thank those organizations as well as Iowa State University, which hosted the meeting.
{ "authors": [ "Louis Deaett", "H. Tracy Hall" ], "categories": [ "math.CO", "52C99 (Primary) 05C50 (Secondary)" ], "primary_category": "math.CO", "published": "20170825135450", "title": "Orthogonal representations of Steiner triple system incidence graphs" }
An elementary approach to sofic groupoids
Luiz Cordeiro
December 30, 2023
============================

We describe sofic groupoids in elementary terms and prove several permanence properties for soficity. We show that soficity can be determined in terms of the full group alone, answering a question by Conley, Kechris and Tucker-Drob.

Keywords: Groupoids, sofic, ultraproducts, full groups.

§ INTRODUCTION

Sofic groups were first considered by Gromov <cit.> in his work on Symbolic Dynamics (originally under the name “initially subamenable groups”), and in 2010, Elek and Lippner <cit.> introduced soficity for equivalence relations. Since then, the classes of sofic groups and equivalence relations have been shown to satisfy several important conjectures; see for example <cit.>. The original definitions of soficity are graph-theoretical, but alternative definitions by Ozawa <cit.> and Pǎunescu <cit.> describe soficity at the level of the so-called full semigroup of R, or in terms of the action of the full group on the measure algebra, which can be immediately generalized to groupoids. We will describe general elementary techniques for dealing with (abstract) sofic groupoids.

§ GROUPOIDS

A groupoid is a small category with inverses. More precisely, it consists of a set G together with a partially defined binary operation G^(2)→ G, (g,h)↦ gh, where G^(2)⊆ G× G, called the product, satisfying

(1) If (g,h),(h,k)∈ G^(2) then (gh,k),(g,hk)∈ G^(2) and g(hk)=(gh)k;
(2) For all g∈ G, there exists g'∈ G such that (g,g'),(g',g)∈ G^(2), and if (g,h),(k,g)∈ G^(2) then g'(gh)=h and (kg)g'=k.

Given g_1,g_2,…,g_n such that (g_i,g_i+1)∈ G^(2), the product g_1⋯ g_n is uniquely determined by (1), and the element g' in (2) is unique – we denote it g^-1 and call it the inverse of g. The source and range of g∈ G are s(g)=g^-1g and r(g)=gg^-1, respectively. The unit space of G is G^(0)=s(G)=r(G). We then obtain G^(2)={(g,h)∈ G× G:s(g)=r(h)}.

A discrete measurable groupoid is a groupoid G with a standard Borel structure for which the product and inverse maps are Borel and s^-1(x) is countable for all x∈ G^(0). In this case the source and range maps are also Borel, and G^(2) and G^(0)={x∈ G:x=s(x)} are Borel subsets of G.

The Borel full semigroup of a discrete measurable groupoid G is the set [[G]]_B of Borel subsets α⊆ G such that the restrictions s|_α and r|_α of the source and range maps are injections, and hence Borel isomorphisms onto their respective images <cit.>. Moreover, a simple application of the Lusin-Novikov Theorem <cit.> implies that G can be covered by countably many elements of [[G]]_B.

[[G]]_B is an inverse monoid[An inverse monoid is a set M with an associative binary operation (x,y)↦ xy, which has a neutral element 1 and such that for each element g∈ M there is a unique element h∈ M satisfying g=ghg and h=hgh, called the inverse of g and denoted h=g^-1.] with the natural product and inverse of sets, namely

αβ={ab:(a,b)∈ (α×β)∩ G^(2)}, α^-1={a^-1:a∈α}

and G^(0) is the unit of [[G]]_B, which we may instead denote by G^(0)=1 or 1_G. Moreover, [[G]]_B is closed below, i.e., if β⊆α and α∈[[G]]_B then β∈[[G]]_B.

A probability measure-preserving (pmp) groupoid is a discrete measurable groupoid G with a Borel measure μ on G^(0) satisfying μ(s(α))=μ(r(α)) for all α∈[[G]]_B. We write (G,μ) for a pmp groupoid when we need the measure μ to be explicit.

The measure μ induces a pseudometric d_μ on [[G]]_B via

d_μ(α,β)=μ(s(α△β))=μ(r(α△β)),

where α△β denotes the symmetric difference. The trace of α∈[[G]]_B is defined as tr(α)=μ(α∩ G^(0)).
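These definitions are already concrete in the simplest case, a toy model in which G is the full equivalence relation on a finite set with normalized counting measure, so that elements of [[G]]_B are partial injections (a case developed in the examples below). A minimal C++ sketch of the trace and pseudometric in this model, with all names our own:

#include <cstdio>
#include <vector>

// A partial injection on {0,...,n-1} is stored as a vector of length n, with
// p[x] the image of x, or -1 when x lies outside the domain.
using PartialInj = std::vector<int>;

// tr(alpha) = normalized measure of the fixed-point set alpha ∩ G^(0).
static double trace(const PartialInj& a)
{
    int fixed = 0;
    for (int x = 0; x < (int)a.size(); ++x) if (a[x] == x) ++fixed;
    return double(fixed) / a.size();
}

// d(alpha,beta) = mu(s(alpha △ beta)): the set of points where the partial maps differ.
static double dist(const PartialInj& a, const PartialInj& b)
{
    int diff = 0;
    for (int x = 0; x < (int)a.size(); ++x) if (a[x] != b[x]) ++diff;
    return double(diff) / a.size();
}

int main()
{
    PartialInj a = {1, 0, 2, -1};  // swaps 0,1; fixes 2; undefined at 3
    PartialInj b = {1, 0, -1, 3};  // swaps 0,1; undefined at 2; fixes 3
    std::printf("tr(a) = %g, tr(b) = %g, d(a,b) = %g\n", trace(a), trace(b), dist(a, b));
    return 0;
}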
For us, it will be easier to deal with the trace instead of the metric above, which is allowed by Proposition <ref> below.The following properties of d_μ are useful, and we leave the proof to the interested reader.Given α,β,γ,δ∈ [[G]]_B; * d_μ(α,β)=d_μ(α^-1,β^-1). * d_μ(αβ,γδ)≤ d_μ(α,γ)+d_μ(β,δ);* d_μ(α,β^-1)≤ d_μ(α,αβα)+d_μ(β,βαβ); The (measured) full semigroup of a pmp groupoid (G,μ) is the metric quotient [[G]] (or [[G]]_μ to make μ explicit) of [[G]]_B under the pseudometric d_μ. The proposition above implies that [[G]] is also an inverse monoid with the natural structure.The Borel full group [G]_B of a discrete measurable groupoid G is the set of those α∈[[G]]_B with s(α)=r(α)=G^(0). If (G,μ) is a pmp groupoid, the image of [G]_B in [[G]] is called the (measured) full group G and is denoted [G] or [G]_μ.We will not make a distinction between measured and Borel full semigroups and groups, except when necessary. Let Γ be a countable group acting on a probability space (X,μ) by measure-preserving automorphisms. The transformation groupoid G=Γ⋉ X is defined as Γ× X with product(h,gx)(g,x)=(hg,x), x∈ X, g,h∈Γ[row sep=tiny] x[r,"g"][rr,bend right,"hg",swap] gx[r,"h"] hgxIn this case, the unit space G^(0) can be identified with X.Subexample 1: If X={*} is a singleton we retrieve the group Γ, in which case [G]=Γ and [[G]] consists of Γ and a zero (absorbing) element.Subexample 2: If Γ=1 is a trivial group we retrieve X, [[G]] is the measure algebra of X and [G] is the trivial group. A measure-preserving equivalence relation R on a probability space (X,μ) can be regarded as a groupoid with operation (x,y)(y,z)=(x,z) whenever (x,y),(y,z)∈ R. We can identify the unit space R^(0) as X and [[R]] as the set of all partial Borel automorphisms f:dom(f)→ran(f), where dom(f),ran(f)⊆ X, such that graph(f)⊆ R. Namely, to each such f we associate the element {(f(x),x):x∈dom(f)} of [[R]]. The product becomes the usual composition of partial maps, and the trace becomes tr(f)=μ{x:f(x)=x}. If {(G_n,μ_n)}_n is a family of pmp groupoids and t_n are non-negative numbers such that ∑_n t_n=1, we construct the convex combination groupoid G=∑ t_n G_n as follows: G is the disjoint union of all G_n, G^(2) is the disjoint union of G_n^(2), the product on G restricts to the product on each G_n, and the measure μ on G is given by μ(A)=∑_n t_nμ_n(A∩ G_n).§.§.§ Finite groupoids Every finite groupoid is a convex combination of groupoids of the form G=Γ× Y^2, where Γ is a (finite) group, Y is a (finite) set and Y^2 is the largest equivalence relation on Y. These are the connected finite groupoids. We see both Γ and Y^2 as groupoids on their own right, and the product has the obvious groupoid structure. In this case, G^(0)=Y, and the only probability measure on Y which makes G pmp is the normalized counting measure, μ_#(A)=|A|/|Y|.We will analyse the full semigroup [[Y^2]] as in Example <ref>. (a) If G is a connected finite pmp groupoid, then there exists a finite set Y and an isometric embedding π:[[G]]→[[Y^2]].(b) If G is a finite pmp groupoid and ϵ>0, then there exists a finite set Y and a map π:[[G]]→[[Y^2]] such that d(π(αβ),π(α)π(β))<ϵ and |tr(α)-tr(π(α))|<ϵ for all α∈[[G]].(a) Suppose G=Γ× Y^2. For every (g,(y,x))∈ G, set π(g,(y,x))∈[[(H× Y)^2]] by dom(π(g,(y,x)))=H×{x} and π(g,(y,x))(h,x)=(gh,y). Then π:[[G]]→[[(H× Y)^2]], π(α)=⋃_g∈απ(g) has the desired properties.(b) Suppose G=tH+(1-t)K, where H,K are connected finite pmp groupoids, and suppose further that t is rational, say t=p/q, p,q∈ℕ. 
Take finite sets X,Y and isometric embeddings π_H:[[H]]→[[X^2]] and π_K:[[K]]→[[Y^2]]. Let [q]={0,1,…,q-1} be a finite set with q elements, and set π:[[G]]→[[([q]× X× Y)^2]] byπ(α)(j,x,y)=(j,π_H(α∩ H)x,y)if j≤ p-1(j,x,π_K(α∩ K)y)if p≤ j≤ q-1(and the domain of π(α) consists of all (j,x,y) for which we can apply the definition above.) Then π is an isometric embedding.We can iterate the argument above to finite rational convex combinations. In the non-rational case, we approximate the coefficients in the convex combination by rational numbers and obtain approximate embeddings as in the proposition. §.§.§ UltraproductsThe language of metric ultraproducts is useful for soficity, and we'll describe them briefly here. We refer to <cit.> and <cit.> for the details. Let (M_k,d_k) be a sequence of metric spaces of diameter ≤ 1, and 𝒰 a free ultrafilter on ℕ. The metric ultraproduct of (M_n,d_n) along 𝒰 is the metric quotient of ∏_k M_k under the pseudometric d_𝒰((x_k),(y_k))=lim_k→𝒰d_k(x_k,y_k), and we denote it ∏_𝒰M_k. We denote the class of a sequence (x_k)_k∈∏_k M_k by (x_k)_𝒰.If (G_k,μ_k) is a family of pmp groupoids, the trace on ∏_𝒰[[G_k]] is given bytr(g_k)_𝒰=lim_k→𝒰tr(g_k).Moreover, by <ref> ∏_𝒰[[G_k]] is an inverse monoid with respect to the canonical product, namely (g_k)_𝒰(h_k)_𝒰=(g_kh_k)_𝒰. One can avoid ultraproducts when dealing with sofic groupoids as follows: For every n∈ℕ, let Y_n={0,…,n-1} be a finite set with n elements, and denote [[n]]=[[Y_n^2]]. Consider the product space ∏[[n]] endowed with the supremum metric and define an equivalence relation ∼ on ∏[[n]] by setting(x_n)∼ (y_n)ifflim_n→∞d_#(x_n,y_n)=0.Denote by ∏^ℓ^∞/c_0[[n]]=∏[[n]]/∼ the quotient. Proposition <ref> also implies that ∏^ℓ^∞/c_0[[n]] is an inverse monoid with the obvious operations.We can naturally embed [[n]] into [[n+1]] (because Y_n⊆ Y_n+1), and this modifies the metric by at most 1/n+1. Also, [[n]] embeds isometrically into [[kn]] as follows: Given α∈[[n]], set π(α)∈[[kn]] as π(α)(qn+j)=qn+α(j), whenever 0≤ q≤ k-1 and j∈dom(α).This way, we can embed [[n]] into any p≥ n as follows: if p=qn+r, with 0≤ r<n, embed [[n]] into [[qn]] and then into [[qn+1]],[[qn+2]],…,[[qn+r]. The metric changes by at most 1/qn+r+⋯+1/qn+1≤n/qn=n/p-r≤n/p-n, and this goes to 0 as p→∞. With these embeddings and a couple of diagonal arguments, one easily proves the following:A separable metric space (semigroup) M embeds isometrically into ∏_𝒰 [[n_k]] (where n_k is a sequence of natural numbers and 𝒰 is an ultrafilter on ℕ) if and only if M embeds into ∏^ℓ^∞/c_0[[n]]. In particular, the choice of free ultrafilter 𝒰 or sequence (n_k) does not matter for the existence of an embedding into ∏_𝒰[[n_k]]. We will be dealing with ultraproducts of full semigroups, and most natural operations extend to ultraproducts. For example, if α,β∈[[G]] are such that β^-1α and βα^-1 are idempotents, then α∪β∈[[G]]. We can consider similar unions in ultraproducts under the same hypotheses. We won't make further reference to these facts during the remainder of the paper.§ SOFIC GROUPOIDS We fix, for the remainder of this paper, a free ultrafilter 𝒰 on ℕ. A pmp groupoid G is sofic if there exists a sequence {G_k}_k of finite pmp groupoids and an isometric embedding π:[[G]]→∏_𝒰[[G_k]]. Equivalently, a pmp groupoid G is sofic if and only if for every ϵ>0 and every finite subset K of [[G]] (or [[G]]_B, for that matter), there exist a finite groupoid H and a map π:[[G]]→[[H]] such that d(π(αβ),π(α)π(β))<ϵ and |tr(α)-tr(π(α))|<ϵ for all α,β∈ K. 
The map π is called a (K,ϵ)-almost morphism.This definition differs from the usual one (see <cit.> or <cit.>) on the initial choice of finite models, but Proposition <ref> and Theorem <ref> show that they are equivalent. An embedding Φ:M→∏_𝒰[[G_k]] from any sub-inverse monoid M of [[G]] is isometric if and only if it preserves the trace. We simply need to write the distance in terms of the trace and vice versa. First one verifies that if Φ is isometric then Φ(1)=1, since this is the only element of trace 1, and then thattr(α)=1-d_μ(s(α),1)-d_μ(s(α),α).For the converse, one usesd_μ(α,β) =tr(s(α))+tr(s(β))-tr(s(α)s(β))-tr(β^-1α).If {G_n}_n is an increasing sequence of sofic groupoids, then G=⋃_n=1^∞ G_n is also sofic. Indeed, {[[G_n]]}_n is an increasing sequence of semigroups of [[G]] with dense union, so almost morphisms of each [[G_n]] give us the necessary almost morphisms of [[G]]. § PERMANENCE PROPERTIES In this section we will be concerned with permanence properties of the class of sofic groupoids. We will simply say that a measure μ on a discrete measurable groupoid G is sofic if (G,μ) is a (pmp) sofic groupoid.Given a non-null subgroupoid H of G, denote by μ_H the normalized measure on H^(0), i.e., μ_H(A)=μ(A)/μ(H^(0)) for A⊆ H^(0), and by tr_H for the corresponding trace on [[H]]. Let G be a discrete measurable groupoid. * If μ is a strong limit of sofic measures[ Recall that a net {μ_i}_i∈ I of measures on a measurable space (X,ℬ) converges strongly to a measure μ if μ_i(A)→μ(A) for all A∈ℬ. ], then μ is sofic as well..* A countable convex combination of sofic measures is sofic.* If μ has a disintegration of the form μ=∫_G^(0) p_xdν(x), where ν-a.e. p_x is a probability measure such that (G,p_x) is sofic, then (G,μ) is also sofic. In particular, if a.e. ergodic component of (G,μ) is sofic, so is (G,μ).*If (G,μ) is sofic and H is a non-null subgroupoid of G then (H,μ_H) is sofic.* If ν≪μ, where (G,ν) is pmp, and μ is sofic, then ν is sofic.* If {H_n} is a countable Borel partition of G by non-null subgroupoids, then G is sofic if and only if each H_n is sofic. Item 1. is clear since soficity is an approximation property for the measure. 2. Suppose ν,ρ are sofic measures and μ=tν+(1-t)ρ, 0<t<1. Take sofic embeddings Φ_ν:[[G]]_ν→∏_𝒰[[G_k]] and Φ_ρ:[[G]]_ρ→∏_𝒰[[H_k]]. Set Φ:[[G]]_μ→∏_𝒰[[tG_k+(1-t)H_k]] asΦ(α)=(Φ_ν(α))∪ (Φ_ρ(α))which, it is easy to check, is a sofic embedding. The countable infinite case follows from 1. 3. From the previous items it suffices to check that μ is a limit of convex combinations of sofic p_x. Let K be a finite collection of Borel subsets of G^(0) and ϵ>0. The maps x↦tr_x(A), A∈ K, take values in [0,1], so by partitioning [0,1] and taking preimages, we can find a finite partition {X_j}_j=1^N of G^(0) for which |p_x(A)-p_y(A)|<ϵ for all A∈ K whenever x and y belong to the same X_j. For each non-null X_j, choose x(j)∈ X_j with p_x(j) sofic. Then for A∈ K,μ(A) =∫_G^(0)p_x(A)dν(x)=∑_j(∫_X_jp_x(j)(A)dν(x))±ϵ=(∑_jν(X_j)p_x(j))(A)±ϵ.4. Let K⊆[[H]] be a finite subset and ϵ>0. Since [[H]] is contained in [[G]] (as a semigroup, but with a different metric), there exists a (K,ϵ)-almost morphism θ:[[G]]→[[F]] for some finite pmp groupoid F. We may assume that 1_H∈ K, and that θ(1_H) is an idempotent in [[F]]. For α∈[[H]], we have H^(0)α H^(0)=α, so substituting θ(α) by θ(1_H)θ(α)θ(1_H) (and making ϵ smaller) if necessary, we can further assume that θ(α) is contained in F':=θ(1_H)Fθ(1_H), which is a subgroupoid of F. 
This defines a map θ_H:[[H]]→[[F']].To see that θ_H approximately preserves the trace, note that the trace on [[H]] and the trace on [[F']] are given respectively bytr_H(α)=tr_μ(α)/tr_μ(1_H)andtr_F'(θ_H(α))=tr_F(θ(α))/tr_F(θ(1_H)),and these numbers are as close as necessary if ϵ is small enough. Products are dealt with similarly, so θ_H is an approximate morphism as necessary. 5. Let ϵ>0. Let f=dν/dμ. Take a countable partition X_1,X_2,… of G^(0) such that |f(x)-f(y)|<ϵ whenever x and y belong to the same X_j, and fix points x(j)∈ X_j. Then for all A⊆ X,ν(A)=∑_j∫_X_j∩ Af(x(j))dμ(x)±ϵ=∑_jf(x(j))μ(X_j)μ_j(A)±ϵ,where μ_j is the normalized measure on X_j. Since 1=ν(X)=∑_j f(x_j)μ(X_j)±ϵ, it is not hard to obtainν(A)=∑_j(f(x_j)μ(X_j)/∑_i f(x_i)μ(X_i))μ_j(A)±2ϵ/1±ϵEach μ_j is sofic by item 4., so items 1. and 2. imply that ν is sofic.6. Apply items 4. and 2. with the fact that G=∑_j μ(H_j)H_j.Now we will deal with finite-index subgroupoids.A subgroupoid H⊆ G is said to have finite index in G if there exist ψ_1,…,ψ_n∈[G] such that {ψ_iH:i=1… n} is a partition of G. We will call ψ_1,…,ψ_n left transversals of H in G.This definition restricts to the usual notions of finite index subgroups and equivalence relations, as defined in <cit.>, in the ergodic case. Note that if H⊆ G is of finite index then H^(0)=G^(0).Suppose H⊆ G is of finite index. If H is sofic, so is G. Suppose that ψ_1,…,ψ_N are left transversals for H⊆ G. For each α∈ [[G]], let α_i,j=ψ_i^-1αψ_j∩ H. Note that α_i,j∈[[H]] and that α_i,j^-1=α_j,i. Moreover, α_i,jα_k,l=∅ whenever j≠ l. Let Y be a set with N elements. Given (i,j)∈ Y^2, let E_i,j={(i,j)}∈[[Y^2]] (as a partial transformation on Y, E_i,j is simply defined by E_i,j(j)=i.Given k and γ∈[[G_k]], set γ⊗ E_i,j=γ× E_i,j∈[[G_k× Y^2]], and then define Ξ:[[G]]→∏_𝒰[[G_k× Y^2]] byΞ(α)=⋃_i,jΦ(α_i,j)⊗ E_i,j First let's show that Ξ is well-defined, or more precisely that the terms in the right-hand side have disjoint sources and ranges: Let (i,j) and (k,l) be given. Then(Φ(α_i,j)⊗ E_i,j)(Φ(α_k,l)⊗ E_k,l)^-1=Φ(α_i,jα_l,k)⊗(E_i,jE_l,k)If j≠ l the right-hand side is empty, so assume j=l. Thenα_i,jα_j,k=(ψ_i^-1αψ_j∩ H)(ψ_j^-1αψ_k∩ H)1If this product is nonempty, then we have p_i∈ψ_i, p_j,q_j∈ψ_j, q_k∈ψ_k and g,h∈α such that the product (p_i^-1gp_j)(q_j^-1hq_k) is defined, and both terms belong to H. But in particular s(p_j)=r(q_j^-1)=s(q_j), so p_j=q_j. Similarly g=h, and so p_i^-1q_k∈ H, thus q_k∈ψ_i H∩ψ_k H, which implies i=k.This proves that the ranges of the terms in the definition of Ξ(α) are disjoint. The sources are dealt with similarly, and so Ξ is well-defined.Now we need to show that Ξ is a morphism. Suppose α,β∈[[G]]. We haveΞ(α)Ξ(β)=⋃_i,j,k,lΦ(α_i,jβ_k,l)⊗ (E_i,jE_k,l)=⋃_i,j,lΦ(α_i,jβ_j,l)⊗ E_i,l.On the other hand Ξ(αβ)=⋃_i,lΦ((αβ)_i,l)⊗ E_i,l, so we are done if we show that for given i,l,⋃_jα_i,jβ_j,l=(αβ)_i,l. The inclusion ⊆ is quite straightforward, using a similar argument to the one right after (1) above. For the converse, suppose p_i^-1abp_l∈(αβ)_i,l, where p_i∈ψ_i, p_l∈ψ_l, a∈α and b∈β. Choose j such that bp_l∈ψ_j H, so there is a unique p_j∈ψ_j such that the product p_j^-1bp_l is defined and in H. Therefore p_i^-1abp_l=(p_i^-1a p_j)(p_j^-1bp_l)∈α_i,jβ_j,l. Finally, we need to show that Ξ is trace-preserving. Note thattrΞ(α)=1/N∑_i=1^Ntr(α_i,i),so we are done if we prove that tr(α_i,i)=tr(α).Let's show that α_i,i∩ G^(0)=s{g∈ψ_i:r(g)∈α∩ G^(0)}. An element of α_i,i∩ G^(0) has the form x=p_i^-1ap_i for a∈α and p_i∈ψ_i. It follows that x=s(p_i), so r(p_i)=a∈α∩ G^(0). 
Conversely, if x=s(g), where g∈ψ_i and r(g)∈α∩ G^(0), then x=g^-1r(g)g∈ψ_i^-1αψ_i∩ G^(0).Finally, we obtaintr(α_i,i) =μ(ψ_i^-1αψ_i∩ G^(0))=μ(s(ψ_i∩ r^-1(α∩ G^(0))))=μ(r(ψ_i∩ r^-1(α∩ G^(0))))=μ(α∩ G^(0))=tr(α),because r|_ψ_i:ψ_i→ G^(0) is surjective.We will say that a pmp groupoid G is periodic if s^-1(x) is finite for all x∈ G^(0), and that G is hyperfinite if it is an increasing union of subgroupoids with finite fibers (this is the measured analogue of the AF groupoids introduced in <cit.>) Every hyperfinite groupoid is sofic. First note that every measure space (X,μ), seen as a trivial groupoid (i.e., X=X^(0)), is sofic. Indeed, μ=∫_Xδ_xμ(x), where δ_x is the point-mass measure on x, and (X,δ_x), as a pmp groupoid, is isomorphic to a singleton, hence finite and sofic.Suppose G has finite fibers. Let G_n={g∈ G:|s^-1(s(g))|=n}. Then the G_n are subgroupoids of G with G=⋃_nG_n, so it suffices to show that each G_n is sofic. An application of Lusin-Novikov implies that the subgroupoid G_n^(0), which is simply a measure space, has finite index in G_n, which is therefore finite. The general case follows from the remark after Proposition <ref>. Two pmp groupoids (G,μ) and (H,ν) are sofic if and only if (G× H,μ×ν) is sofic. One direction is clear, since [[G]] embeds isometrically into [[G× H]] via α↦α× H^(0), and similarly for [[H]].Let M be the submonoid of [[G× H]] of elements of the form ⋃_i=1^nα_i×β_i, where α_i∈[[G]], β_i∈[[H]], and for i≠ j, s(α_i)∩ s(α_j)=∅ or s(β_i)∩ s(β_j)=∅, and r(α_i)∩ r(α_j)=∅ or r(β_i)∩ r(β_j)=∅. Let's show that M is dense in [[G× H]].Let ϕ∈[[G× H]] and ϵ>0. We can take α_i∈[[G]] and β_i∈[[H]] such that (μ⊗ν)(ϕ(⋃α_i×β_i))<ϵ, and with the α_i×β_i disjoint. For i≠ j, let h_i,j=s|_α_i×β_i^-1(s(α_j×β_j))=α_is(α_j)×β_is(β_j)and h_i=⋃_j≠ ih_i,j. Let x∈ h_i,j∩ϕ, so s(x)=s(a_j,b_j) for some (a_j,b_j)∈α_j×β_j. If (a_j,b_j)∈ϕ, we'd obtain (a_j,b_j)=x∈α_i×β_i, which happens only if i=j. This proves thats(⋃_i(h_i∩ϕ))⊆ s(⋃_j(α_j×β_j)∖ϕ)In particular, d(ϕ,ϕ∖⋃_ih_i)<ϵ, from which follows that d(⋃_i(α_i×β_i∖ h_i),ϕ)<2ϵ.Since each h_i is a rectangle, we can rewrite ⋃_i(α_i×β_i∖ h_i) as a union of disjoint rectangles with the desired property for the source map. To deal with the range map one can apply the same argument to ϕ^-1 and take intersections. So given sofic embeddings Φ:[[G]]→∏_𝒰[[G_n]] and Ψ:[[H]]→∏_𝒰[[H_n]] set Φ⊗Ψ:M→[[G_n× H_n]] byΦ⊗Ψ(⋃α_i×β_i)=⋃Φ(α_i)×Ψ(β_i),where the α_i and β_i satisfy the condition in the definition of M.The element in the right-hand side is well-defined and doesn't depend on the choice of α_i and β_i since sofic embeddings preserve sources, ranges, and intersections. Φ⊗Ψ is then an trace-preserving embedding of M, and hence extends uniquely to a isometric embeddings of [[G× H]].§ SOFICITY AND THE FULL GROUP Let's fix some notation here as well. Given a probability space X, we denote its measure algebra (i.e., the algebra of measurable subsets of X modulo null sets) by MAlg(X). Given a pmp groupoid (G,μ), the set of idempotents of [[G]] (i.e., elements α such that α^2=α) is precisely MAlg(G^(0)).Given α∈[[G]], define fixα=α∩ G^(0) and suppα=s(α)∖fixα. These defines maps fix,supp:[[G]]→MAlg(G^(0)), and we can extend these maps to ultraproducts of these semigroups.Again, for each n∈ℕ, fix Y_n a set with n elements, consider the full equivalence relation Y_n^2, whose full group [Y_n^2] can be identified as the permutation group 𝔖_n on n elements, as in Example <ref>. 
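In this finite picture the basic invariants are immediate to compute. As an illustrative C++ sketch (our own names, not notation from the paper), for a permutation σ∈𝔖_n the set fixσ is its fixed-point set, suppσ its complement, and tr(σ)=|fixσ|/n:

#include <cstdio>
#include <vector>

// Print fix, supp and the trace of a permutation sigma of {0,...,n-1}.
static void fixSuppTrace(const std::vector<int>& sigma)
{
    int n = (int)sigma.size();
    int fixed = 0;
    std::printf("fix = {");
    for (int x = 0; x < n; ++x)
        if (sigma[x] == x) { std::printf(" %d", x); ++fixed; }
    std::printf(" }, supp = {");
    for (int x = 0; x < n; ++x)
        if (sigma[x] != x) std::printf(" %d", x);
    std::printf(" }, tr = %g\n", double(fixed)/n);
}

int main()
{
    fixSuppTrace({1, 0, 2, 3, 4});  // a transposition in S_5: tr = 3/5
    fixSuppTrace({1, 2, 3, 4, 0});  // a 5-cycle in S_5: tr = 0
    return 0;
}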
We denote the measure algebra of Y_n by MAlg(n).A well-known theorem of Dye <cit.> states that when R is an aperiodic equivalence relation, the full group [R] completely determines R. With this in mind, we prove that a pmp groupoid G is sofic if and only if [G] embeds isometrically into ∏_𝒰𝔖_n, as long as G doesn't contain “trivial parts”. This solves a question posed by Conley, Kechris and Tucker-Drob in <cit.> in this case. A metric group (Γ,d) is metrically sofic if it embeds isometrically into ∏_𝒰𝔖_n. We will need a few technical lemmas relating the full group [G], the measure algebra MAlg(G^(0)) and the full semigroup [[G]]. Let θ:[G]→∏_𝒰𝔖_n be an isometric embedding and α,β∈[G]. *suppα=fixβ if and only if supp(θ(α))=fix(θ(β)).*suppα∩suppβ=∅ if and only if suppθ(α) ∩suppθ(β)=∅* Simply note that suppα=fixβ is equivalent tod_μ(α,β)=1andtr(α)+tr(β)=1,and the same condition applies to ultraproducts.* We have suppα∩suppβ=∅ if and only if d_μ(α,β)=d_μ(1,α)+d_μ(1,β), and this condition is preserved by θ. We will say that a groupoid G is aperiodic if |s^-1(x)|=∞ for all x∈ G^(0). Suppose (G,μ) is an aperiodic groupoid. Then for all A∈MAlg(G^(0)), there exists α∈[G] such that suppα=A. We can decompose G as G=H+K, where H and K are subgroupoids, |r(s^-1(x))|=∞ for all x∈ H^(0) and for every x∈ K^(0), K^x_x=s^-1(x)∩ r^-1(x) is infinite.The equivalence relation R(H)=(r,s)(H) on H^(0) is aperiodic, in the usual sense, so if A⊆ H^(0), <cit.> allows us to take f∈[[R(H)]] with supp(f)=A, and f lifts to an element of [[H]].If A⊆ K^(0), we use the fact that K is covered by countably many elements of [[K]], and it is easy to construct α∈[K] with s(g)=r(g) for all g∈α and α∩ K^(0)=K^(0)∖ A. If γ∈[[G]], then there exists γ∈[G] with γ⊆γ. Let γ_0=γ, and for all n≥ 1, setγ_n={g∈γ^-n:s(g)∉s(γ), r(g)∉r(γ)}One shows that s(γ_n)∩ s(γ_m)=r(γ_n)∩ r(γ_m)=∅ for n≠ m, and that ⋃_n s(γ_n) and ⋃_n r(γ_n) are both contained and conull in s(γ)∪ r(γ). Therefore γ=⋃_nγ_n∪(G^(0)∖(s(γ)∪ r(γ)) has the desired properties.If Γ_1 and Γ_2 are groups acting on sets X_1 and X_2, respectively, θ:Γ_1→Γ_2 is a homomorphism and ϕ:X_1→ X_2 is a function, we say that the pair (θ,ϕ) is covariant if ϕ(γ x)=θ(γ)ϕ(x) for all γ∈Γ_1 and x∈ X_1.Given α∈[G] and A∈MAlg(G^(0)), define α· A=r(s|_α^-1(A)). This defines an (isometric, order-preserving) action of [G] on MAlg(G^(0)). This action also extends to ultraproducts of full groups and measure algebras. An aperiodic pmp groupoid G is sofic if and only if the full group [G] is metrically sofic. More precisely, every isometric embedding of [G] into an ultraproduct ∏_𝒰𝔖_n extends uniquely to an isometric embedding of [[G]]. Let's deal with uniqueness first: If θ:[[G]]→∏_𝒰𝔖_n is an isometric embedding, then for every α∈ [[G]] choose, by Lemmas <ref> and <ref>, α,β∈ [G] with suppβ=s(α) and α⊆α. Then θ(α)=θ(α)suppθ(β), so θ is uniquely determined by its restriction to [G].Now suppose θ:[G]→∏_𝒰𝔖_n is an isometric embedding, and let's use the ideas above to extend it to [[G]].Given A∈MAlg(G^(0)), choose α∈[G] with supp(α)=A and define ϕ(A)=supp(θ(α)). We need several steps to finish this proof, namely,*ϕ is well-defined, i.e., ϕ(A) does not depend on the choice of α with supp(α)=A:Suppose α,α'∈[G] satisfy suppα=suppα'=A. Consider any β∈[G] with suppβ=G^(0)∖ A. By Lemma <ref>.<ref>, supp(θ(α))=fix(θ(β))=supp(θ(α)). * ϕ preserves intersections:Let A,B∈MAlg(G^(0)), and consider α,β,γ∈[R] with supp(α)=A∩ B, supp(β)=A∖ B and supp(γ)=B∖ A. 
By Lemma <ref>.<ref>, the supports of θ(α) and θ(β) are disjoint, so supp(θ(αβ))=supp(θ(α)θ(β))=supp(θ(α))∪supp(θ(β)), and similarly for α and γ. Also, supp(αβ)=A and supp(αγ)=B, so again by Lemma <ref>.<ref>,

ϕ(A)∩ϕ(B) = supp(θ(αβ))∩supp(θ(αγ)) = (supp(θ(α))∪supp(θ(β)))∩(supp(θ(α))∪supp(θ(γ))) = supp(θ(α)) = ϕ(A∩ B).

* ϕ preserves measure: By Proposition <ref>, θ is trace-preserving, so ϕ preserves measure.

* If α∈[G], then ϕ(fixα)=fix(θ(α)): We just need to verify that ϕ preserves complements. Given A∈MAlg(G^(0)), the complement B=G^(0)∖ A is the unique element disjoint with A and such that μ(A)+μ(B)=1, and all of this is preserved by ϕ.

* (θ,ϕ) is covariant: Let A∈MAlg(G^(0)) and α∈[G]. Take β∈[G] with suppβ=A. Then supp(αβα^-1)=α· A, and

ϕ(α· A) = supp(θ(αβα^-1)) = supp(θ(α)θ(β)θ(α)^-1) = θ(α)·supp(θ(β)) = θ(α)·ϕ(A).

For every α∈[[G]], set Φ(α)=θ(α')ϕ(s(α)), where α'∈[G] is such that α⊆α' (which exists by Lemma <ref>). Using the definition of ϕ and the fact that it preserves the order, it is not hard to see that Φ is also well-defined. Note that Φ extends both θ and ϕ.

Let's show that Φ is a sofic embedding. Suppose α=α'A, β=β'B, where α,β∈[[G]], α',β'∈[G] and A,B∈MAlg(G^(0)). Then

αβ=α'β'(B∩β^-1· A).

Since the same kind of formula holds on ultraproducts and (θ,ϕ) is a covariant pair of morphisms, we obtain Φ(αβ)=Φ(α)Φ(β).

It remains only to see that Φ is trace-preserving. Let α∈[[G]]. If we show that fixΦ(α)=ϕ(fixα) we are done because ϕ is isometric. Let α'∈[G] with α⊆α'. Let A=fixα and B=fixα'∖ A. Note that A=fixα'∩ s(α), so

ϕ(A)=fixθ(α')∩ s(Φ(α)).

Since Φ(α)=θ(α')ϕ(s(α))⊆θ(α'), we have fix(Φ(α))=fix(θ(α'))∩ s(Φ(α)). Thus we are done.

Now we extend this result to when G has periodic points, but no singleton classes. Set Per_≥ 2(G)={x∈ G^(0):2≤|s^-1(x)|<∞}.

There exists α∈ [G] with suppα=Per_≥ 2(G). This follows easily from the existence of a transversal for periodic relations (<cit.>) and an argument similar to the proof of Lemma <ref>.

Suppose that for all x∈ G^(0), |s^-1(x)|≥ 2. Then G is sofic if and only if [G] is metrically sofic.

Let P=Per_≥ 2(G) and Aper=G^(0)∖ P, and consider the subgroupoid H=GAper of G. By previous results, it suffices to show that [H] is metrically sofic. Fix any ρ∈[G] with suppρ=P. Let θ:[G]→∏_𝒰𝔖_n_k be an isometric embedding. Consider the embedding [H]→[G], α↦ᾱ=α∪ P. This embedding modifies distances by a multiplicative factor of μ(Aper). By Lemma <ref>.<ref>, suppθ(ᾱ)⊆fixθ(ρ). We can then restrict θ(ᾱ) to fixθ(ρ) (similarly to how we did in Theorem <ref>.<ref>) and obtain a new embedding η:[H]→∏_𝒰𝔖_m_k (where m_k≤ n_k). This new embedding modifies distances by a multiplicative factor of

(1-d(θ(ρ),1))^-1=(1-d(ρ,1))^-1=μ(Aper)^-1,

so η is in fact isometric.
Efficient barycentric point sampling on meshes
Jamie Portsmouth
==========================================================================================
We present an easy-to-implement and efficient analytical inversion algorithm for the unbiased random sampling of a set of points on a triangle mesh whose surface density is specified by barycentric interpolation of non-negative per-vertex weights. The correctness of the inversion algorithm is verified via statistical tests, and we show that it is faster on average than rejection sampling.
Categories: Computer Graphics; G.3 Probability and Statistics — Probabilistic algorithms
§ RELATED WORK
Point sampling on meshed surfaces is useful in a variety of computer graphics contexts <cit.>. Here we focus on the relatively simple problem of sampling a spatially inhomogeneous, unbiased random distribution of points on a mesh given a prescribed point density per unit area varying over the mesh, which has received surprisingly little attention. There has been much work on the generation of point distributions with desired local statistical characteristics for applications such as point stippling and blue noise sampling <cit.>; however, these techniques are complex to implement, generally not very efficient, and deliberately introduce bias into the sampling. Statistically unbiased inhomogeneous random sampling is clearly useful in a variety of contexts, for example in Monte Carlo sampling. For the homogeneous case, there is a standard inversion algorithm for sampling a random point on a triangle mesh with uniform probability <cit.>. For the inhomogeneous case, rejection sampling <cit.> provides a general and relatively efficient solution, as was noted (for example) by Yan et al. <cit.>; however, as a randomized algorithm, it presents efficiency problems. Sik et al. <cit.> improved on rejection sampling by subdividing the mesh until each triangle can be considered to have uniform density without losing too much accuracy, then applying the homogeneous inversion algorithm. However, this requires a relatively complex preprocessing stage. Arvo et al. <cit.> extended the inversion algorithm to deal with a density which varies linearly within each face of a triangle mesh (corresponding to barycentric interpolation). However, they expressed their solution in the form of a set of cubic equations, without simplifying further or developing a practical, tested algorithm. In this paper we revisit the approach of Arvo et al. and provide a more explicit algorithm than previously, with statistical validation and performance comparison to rejection sampling.
§ METHOD
Consider the general problem of independently sampling random points on a three-dimensional triangle mesh, where the probability per unit area p_X(𝐱) of choosing a given point 𝐱 is proportional to a non-negative scalar weight on the mesh, ϕ(𝐱). This weight can be interpreted as specifying the relative surface density (i.e. number per unit area) of sampled points. Normalization then implies:
p_X(𝐱) = ϕ(𝐱)/∑_i ∫_T_iϕ(𝐲)dA_i(𝐲)
where dA_i(𝐲) is the area element on the three-dimensional surface of triangle T_i.
This may be factored into the discrete probability of choosing a given triangle, multiplied by the conditional PDF (with area measure) of choosing a point within that triangle:
p_X(𝐱)=p_T(T_i) p_X|T(𝐱| T_i) = ∫_T_iϕ(𝐲)dA_i(𝐲)/∑_k∫_T_kϕ(𝐲) dA_k(𝐲)·ϕ(𝐱)/∫_T_iϕ(𝐲)dA_i(𝐲) .
Defining the area-weighted average of ϕ over triangle T_i (with area A_i) as ⟨ϕ⟩_i ≡∫_T_iϕ(𝐱)dA_i(𝐱) / A_i, then the discrete probability of each triangle can be written as
p_T(T_i)= A_i⟨ϕ⟩_i/∑_k A_k ⟨ϕ⟩_k,
and the conditional area-measure PDF of a point 𝐱 in a given triangle T_i is
p_X|T(𝐱| T_i) = ϕ(𝐱)/A_i ⟨ϕ⟩_i .
Sampling from the discrete probability distribution p_T(T_i) to choose a triangle is done via the CDF
P_T(T_i) = ∑_j≤ i A_j⟨ϕ⟩_j/∑_k A_k ⟨ϕ⟩_k,
i.e. sample a uniform random deviate ξ (a random variable drawn from the uniform distribution on the open interval [0,1)), then find the index k such that P_T(T_k-1) ≤ξ < P_T(T_k) (with P_T(T_k)=0 by convention for k<k_min). This is usually done via bisection search, though note that more efficient algorithms exist, such as that described by Sik et al. <cit.>. It remains to sample from the conditional PDF p_X|T(𝐱| T_i).
[caption=Triangle selection code, label=lst:trianglesample, float]
void TRIANGLE_CDF( vector<double>& tri_cdf )
{
    double cdf = 0.0;
    for (size_t ti=0; ti<NUM_TRIS(); ti++)
    {
        double phi_avg = WEIGHT_AVG(ti);   // area-weighted average of phi over triangle ti
        cdf += phi_avg * AREA(ti);
        tri_cdf.push_back(cdf);
    }
    for (size_t ti=0; ti<NUM_TRIS(); ti++)
        tri_cdf[ti] /= cdf;                // normalize so the final entry is 1
}

static inline size_t CHOOSE_TRI( const vector<double>& tri_cdf )
{
    double xi = RAND();
    size_t jl = 0;
    size_t ju = tri_cdf.size()+1;
    while (ju-jl > 1)                      // bisection search to invert the discrete CDF
    {
        size_t jm = (ju+jl) >> 1;
        if (xi >= tri_cdf[jm-1]) jl=jm; else ju=jm;
    }
    return jl;
}
A completely general method for sampling from p_X|T(𝐱| T_i) given any function ϕ(𝐱) is provided by rejection sampling. We first find ϕ_max,i = max(ϕ(𝐱): 𝐱∈ T_i) (this can be precomputed). We then choose a random point in the triangle drawn from a uniform distribution. This is most easily done by parameterizing points 𝐱∈ T_i as 𝐱 = u 𝐯^i_u + v 𝐯^i_v + w 𝐯^i_w, where 𝐯^i_u, 𝐯^i_v, 𝐯^i_w are the triangle vertices and the per-triangle barycentric coordinates (u, v, w) are each in the range [0,1] with u+v+w=1. Then uniform random sampling on the triangle is done via the formulas <cit.>
u = 1 - √(ξ_1) ,    v = (1 - u)ξ_2
where (ξ_1, ξ_2) are uniform random deviates. We then decide whether to accept this trial sample by drawing another uniform random deviate ξ_3 and testing whether ξ_3ϕ_max,i < ϕ(𝐱). If this is true, we accept 𝐱 as the sample; otherwise we draw another trial sample of 𝐱 and continue, until acceptance. This procedure is outlined in Algorithm <ref>.
While the rejection sampling method is very general, as a random algorithm it does not provide any strict guarantees about the number of random samples which will be taken. However, in the common case of a weight defined by per-vertex barycentric weighting, the inversion method provides an analytical formula for the sampled points <cit.>. Per-vertex weighting means we associate with each of the three vertices of triangle T_i a non-negative real weight. Let us denote the weights of the three vertices 𝐯^i_u, 𝐯^i_v, 𝐯^i_w as ϕ_u, ϕ_v, ϕ_w respectively.
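For comparison with the inversion listings that follow, here is a minimal sketch of the rejection loop of Algorithm <ref> specialized to such per-vertex weights (RAND is the same hypothetical uniform-deviate generator assumed by the listings in this paper; the routine names are illustrative, not part of our implementation):

static double BARY_WEIGHT( double u, double v, double phi_u, double phi_v, double phi_w )
{
    // linear (barycentric) interpolation of the vertex weights
    return u*phi_u + v*phi_v + (1.0-u-v)*phi_w;
}

void SAMPLE_REJECTION( double phi_u, double phi_v, double phi_w, double& u, double& v )
{
    // for a linearly interpolated weight, the maximum over the
    // triangle is attained at a vertex
    double phi_max = max(phi_u, max(phi_v, phi_w));
    do
    {
        u = 1.0 - sqrt(RAND());   // uniform trial sample on the triangle
        v = (1.0 - u)*RAND();
    }
    while (RAND()*phi_max >= BARY_WEIGHT(u, v, phi_u, phi_v, phi_w));
}

Note that the expected number of iterations is ϕ_max,i/⟨ϕ⟩_i, which grows without bound as the weighting becomes more inhomogeneous.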
Assuming the triangle is not degenerate, we may express the barycentric coordinates in terms of position: u(𝐱), v(𝐱), w(𝐱). Barycentric interpolation then defines the weight at all points 𝐱∈ T_i via (equivalent to linear interpolation within the triangle):
ϕ(𝐱) = u(𝐱) (ϕ_u-ϕ_w) + v(𝐱)(ϕ_v-ϕ_w) + ϕ_w .
This is a common scheme for interpolating per-vertex weights to produce a C^0 continuous function on the mesh. Integrals over the triangle area elements dA_i may be computed by a change of variables to barycentric coordinates, via the standard identity dA_i = 2A_i du dv. It follows that the area-averaged barycentric weight is equal to the mean vertex weight:
⟨ϕ⟩_i = ϕ_u + ϕ_v + ϕ_w/3 .
Using Eqn. (<ref>), the normalized conditional PDF in barycentric coordinate measure is then given by
p_U,V(u, v) = p_X|T(𝐱| T_i) dA_i/dudv = 2 ϕ(u, v)/⟨ϕ⟩_i .
We now introduce the normalized relative weights
Φ_u ≡ ϕ_u-ϕ_w/⟨ϕ⟩_i ,    Φ_v ≡ ϕ_v-ϕ_w/⟨ϕ⟩_i .
Each possible set of relative weights maps into a point in the (Φ_u, Φ_v) plane (i.e. each such point represents all triangles which have the same relative weights). Expressing the problem in terms of this two-dimensional vector of weights leads to some simplification in the analytical expressions compared to Arvo's approach <cit.>. Note that the case where ⟨ϕ⟩_i=0, i.e. all zero weights, can be ignored as the probability of sampling a point in such a zero-weighted triangle is zero. It follows from the non-negativity of the weights that (Φ_u, Φ_v) satisfy various inequalities and thus lie within the triangular region in the (Φ_u, Φ_v) plane depicted in Figure <ref>. The uniform weighting corresponds to the origin (Φ_u, Φ_v)= (0,0).
[style=bottom, caption=Routine to sample the u barycoordinate., label=lst:U_NEWTON, float]
double U( double Phi_u, double Phi_v, const double tol = 5.0e-3 )
{
    double r = RAND();
    double l = (2.0*Phi_u - Phi_v)/3.0;
    const int maxIter = 20;
    double u = 0.5;
    int n = 0;
    while (n++ < maxIter)
    {
        double u1 = 1.0-u;
        double P  = u*(2.0-u) - l*u*u1*u1 - r;                  // P_U(u) - xi_u
        double Pd = max(u1*(2.0 + l*(3.0*u-1.0)), DBL_EPSILON); // P'_U(u), kept positive
        double du = max(min(P/Pd, 0.25), -0.25);                // clamped Newton step
        u -= du;
        u = max(min(u, 1.0-DBL_EPSILON), DBL_EPSILON);          // keep u in (0,1)
        if (fabs(du) < tol) break;
    }
    return u;
}
In these coordinates, the barycentric-measure PDF Eqn. (<ref>) reduces to
p_U,V(u, v) = 2 ( u Φ_u + v Φ_v + 1 - Φ_u + Φ_v/3).
We now describe how to sample points from this PDF. Integration gives the marginal PDF of u:
p_U(u)= ∫_0^1-u p_U,V(u, v)dv.
The cumulative density function (CDF) for the marginal PDF of u, ∫_0^u p_U(u')du', is given by (and similarly for the CDF of v, P_V(v))
P_U(u)= u(2-u) - (2Φ_u - Φ_v)/3 u(u-1)^2 .
Inversion of P_U(u)=ξ_u, where ξ_u is a uniform random deviate, yields the sampled value of u. It is efficient to solve this by Newton's method via the update rule:
u_n+1 = u_n - P_U(u_n)-ξ_u/P'_U(u_n) .
An example implementation of this sampling routine in C is provided in Listing <ref> (where RAND generates a uniform random deviate). This includes tolerances to keep the solution within bounds, and limits the Newton step size to 1/4 to aid convergence (as statistically verified in Section <ref>).
In order to sample v, we require the CDF for v conditional on the sampled value of u, given by
P_V(v | u) = ∫_0^v p_V|U(v' | u)dv' = 1/p_U(u)∫_0^v p_U,V(u, v')dv'.
Evaluating this gives
P_V(v | u) = 2v/p_U(u)[ 1 + (u-1/3) Φ_u + (v/2-1/3) Φ_v ].
Inversion of P_V(v | u)=ξ_v (where ξ_v is again a uniform random deviate) gives the sampled value of v.
As P_V(v | u) is quadratic in v, this inversion has two solutions
v_± = τ±√( τ^2 (1-ξ_v) + (τ + u - 1)^2ξ_v )
where the square root here denotes the principal square root, and
τ(u, Φ_u, Φ_v) ≡ 1/3 - (1+(u-1/3) Φ_u)/Φ_v .
Here τ ranges over the entire real line, i.e. τ∈(-∞, ∞). As Φ_v approaches zero (uniform weighting), τ diverges; however, in this limit the inversion can be simplified to:
v → (1-u)ξ_v + O(|τ|^-1) as |Φ_v| → 0, |τ| →∞ .
If τ≫ 0, then clearly the v_- branch must be chosen (in order that v ∈ [0,1-u]). Similarly, if τ≪ 0, then the v_+ branch must be chosen. Figure <ref> shows how the solution for v varies as a function of the random deviate ξ_v and τ. From this figure it is intuitively clear that for general τ, the correct choice of branch is given by
v = v_+ if τ ≤ (1-u)/2, and v = v_- if τ > (1-u)/2.
[style=bottom, caption=Routine to sample the v barycoordinate., label=lst:V, float]
double V( double u, double Phi_u, double Phi_v )
{
    double r = RAND();
    const double epsilon = 1.0e-6;
    if (fabs(Phi_v) < epsilon) return (1.0 - u)*r;  // uniform-weighting limit
    double tau = 1.0/3.0 - (1.0 + (u-1.0/3.0)*Phi_u)/Phi_v;
    double tmp = tau + u - 1.0;
    double q = sqrt(tau*tau*(1.0-r) + tmp*tmp*r);
    return tau <= 0.5*(1.0-u) ? tau + q : tau - q;  // branch selection
}
Let τ = γ(1 - u), which implies:
v_±/(1-u) = γ±√(γ^2 + ξ_v(1-2γ)) .
First consider the case γ>1/2. Then the term inside the square root satisfies the following inequality
γ^2 + ξ_v(1-2γ) > |γ-1|^2
since (1-2γ)<0 and ξ_v<1. Thus the following inequalities are satisfied:
v_+/(1-u) > γ + |γ-1| > 1,    v_-/(1-u) < γ - |γ-1| < 1.
Therefore, since v≤(1-u), if γ>1/2 the v_- branch must be chosen. The analogous argument for γ<1/2 shows that the v_+ branch must be chosen in that case.
Algorithm <ref> summarises the resulting method for inversion sampling (u, v). A C implementation for sampling v via this method is provided in routine V (Listing <ref>).
In Figure <ref> we show a simple example of a point distribution sampled via this inversion method, in which the per-vertex weight is taken to be the magnitude of the local discrete vertex curvature. In Figure <ref> we show another example in which the per-vertex weight function has a periodic 3D variation of the form |cos(x/L) cos(y/L) cos(z/L)| with some length-scale L.
§ VALIDATION AND PERFORMANCE
In Figure <ref>, we plot the calculated CDF P_V(v) and the empirically measured CDF obtained via the inversion sampling of Algorithm <ref>, for a representative set of triangle weightings (here those which fit in a 16x16 grid covering the valid region in the (Φ_u, Φ_v) plane in Figure <ref>). Only 0.1% of the CDF points are shown for clarity. The Kolmogorov-Smirnov statistic for each empirical CDF is at most D=0.005, which is less than the critical statistic at the 99% confidence level (D_crit=1.63/√(N)=0.007), giving confidence that the sampling algorithm is correct over the whole (Φ_u, Φ_v) region. We performed an alternative verification with the chi-squared test. We covered the valid region in the (Φ_u, Φ_v) plane with a 16x16 grid, and for each of these triangle weightings, drew samples and binned them in a grid in the uv plane (where at each bin resolution enough samples were drawn to ensure approximately 100 per uv bin). In Figure <ref>, we plot the logarithm of the chi-squared p-values for all the triangle weightings as a function of the sample binning resolution N in the uv plane (again using Newton's method). We repeated this for increasingly high bin resolution.
The exponential drop in the logarithm of the p-value to below -50 for all the samples as the uv bin resolution is increased to 64x64 gives high confidence that the sampling algorithm is correct.
We focus here only on the performance of the sampling within a given triangle, as we do not deal with optimization of the triangle selection itself. In Figure <ref> we show the relative performance of the rejection and inversion methods applied to independent samples from a single triangle (with Φ_u=-3, Φ_v=-3, i.e. the maximal per-triangle "inhomogeneity"), where each data point indicates the average sample time in nanoseconds averaged over 10^7 samples. These timings were taken running single-threaded on a 2.3 GHz Intel Core i7 processor. The inversion method runs approximately 20-80% faster than rejection, where the efficiency depends strongly on the required tolerance for the sampled u barycoordinate, so some trade-off between accuracy and speed is involved. Of course, both inversion and rejection algorithms can also be easily multi-threaded.
§ CONCLUSION
We derived a simple and efficient inversion method for sampling points on a triangle mesh with density defined by barycentrically interpolated per-vertex weights, and verified that it produces statistically correct point samples. We showed that weighted point sampling on a triangle mesh via the inversion method is faster on average than rejection sampling. We note that the method presented here can overall be regarded as complementary to that of Sik et al. <cit.>. Their method involves usage of a more sophisticated method for triangle sampling, and is more general as it can deal with arbitrarily varying weight functions (as can rejection sampling); however, it is also considerably more complex to implement. While an algorithm such as their fast triangle selection method is required for optimal performance, using our analytical inversion sampling of linearly varying weights should allow for further improvement as it would allow less subdivision to achieve the same sampling accuracy. We suggest that in future work it would be interesting to explore the combination of these methods.
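For reference, the complete per-sample procedure described in Section <ref> assembles the routines above as follows; this is a sketch only, where Vec3, WEIGHT and VERTEX are hypothetical mesh accessors (returning the per-vertex weights and positions of triangle ti), not part of the listings above:

Vec3 SAMPLE_POINT( const vector<double>& tri_cdf )
{
    size_t ti = CHOOSE_TRI(tri_cdf);               // invert the discrete triangle CDF
    // zero-weighted triangles are never selected, so phi_avg > 0 here
    double phi_u = WEIGHT(ti,0), phi_v = WEIGHT(ti,1), phi_w = WEIGHT(ti,2);
    double phi_avg = (phi_u + phi_v + phi_w)/3.0;  // mean vertex weight
    double Phi_u = (phi_u - phi_w)/phi_avg;        // normalized relative weights
    double Phi_v = (phi_v - phi_w)/phi_avg;
    double u = U(Phi_u, Phi_v);                    // Newton inversion of P_U
    double v = V(u, Phi_u, Phi_v);                 // analytic inversion of P_V(v|u)
    double w = 1.0 - u - v;
    return u*VERTEX(ti,0) + v*VERTEX(ti,1) + w*VERTEX(ti,2);
}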
Deep Learning for Video Game Playing
Niels Justesen^1, Philip Bontrager^2, Julian Togelius^2, Sebastian Risi^1
^1IT University of Copenhagen, Copenhagen
^2New York University, New York
===========================================================================================================================================
In this article, we review recent Deep Learning advances in the context of how they have been applied to play different types of video games such as first-person shooters, arcade games, and real-time strategy games. We analyze the unique requirements that different game genres pose to a deep learning system and highlight important open challenges in the context of applying these machine learning methods to video games, such as general game playing, dealing with extremely large decision spaces and sparse rewards.
§ INTRODUCTION
Applying AI techniques to games is now an established research field with multiple conferences and dedicated journals. In this article, we review recent advances in deep learning for video game playing and the game research platforms employed, while highlighting important open challenges. Our motivation for writing this article is to review the field from the perspective of different types of games, the challenges they pose for deep learning, and how deep learning can be used to play these games. A variety of review articles on deep learning exists <cit.>, as well as surveys on reinforcement learning <cit.> and deep reinforcement learning <cit.>; here we focus on these techniques applied to video game playing. In particular, we focus on game problems and environments that have been used extensively for DL-based Game AI, such as Atari/ALE, Doom, Minecraft, StarCraft, and car racing. Additionally, we review existing work and point out important challenges that remain to be solved. We are interested in approaches that aim to play a particular video game well (in contrast to board games such as Go), from pixels or feature vectors, without an existing forward model. Several game genres are analyzed to point out the many and diverse challenges they pose to human and machine players.
It is important to note that there are many uses of AI in and for games that are not covered in this article; Game AI is a large and diverse field <cit.>. This article is focused on deep learning methods for playing video games well, while there is plenty of research on playing games in a believable, entertaining or human-like manner <cit.>. AI is also used for modeling players' behavior, experience or preferences <cit.>, or generating game content such as levels, textures or rules <cit.>. Deep learning is far from the only AI method used in games. Other prominent methods include Monte Carlo Tree Search <cit.> and evolutionary computation <cit.>. In what follows, it is important to be aware of the limitations of the scope of this article.
The paper is structured as follows: The next section gives an overview of different deep learning methods applied to games, followed by the different research platforms that are currently in use. Section <ref> reviews the use of DL methods in different video game types and Section <ref> gives a historical overview of the field. We point out important open challenges in Section <ref> and conclude in Section <ref>.
§ DEEP LEARNING IN GAMES OVERVIEW
This section gives a brief overview of neural networks and machine learning in the context of games. First, we describe common neural network architectures, followed by an overview of the three main categories of machine learning tasks: supervised learning, unsupervised learning, and reinforcement learning. Approaches in these categories are typically based on gradient-descent optimization. We also highlight evolutionary approaches as well as a few examples of hybrid approaches that combine several optimization techniques.
§.§ Neural Network Models
Artificial neural networks (ANNs) are general-purpose functions that are defined by their network structure and the weight of each graph edge. Because of their generality and ability to approximate any continuous real-valued function (given enough parameters), they have been applied to a variety of tasks, including video game playing.
The architectures of these ANNs can roughly be divided into two major categories: feedforward and recurrent neural networks (RNNs). Feedforward networks take a single input, for example, a representation of the game state, and output probabilities or values for each possible action. Convolutional neural networks (CNNs) consist of trainable filters and are suitable for processing image data such as pixels from a video game screen.
RNNs are typically applied to time series data, in which the output of the network can depend on the network's activation from previous time-steps <cit.>. The training process is similar to that of feedforward networks, except that the network's previous hidden state is fed back into the network together with the next input. This allows the network to become context-aware by memorizing the previous activations, which is useful when a single observation from a game does not represent the complete game state. For video game playing, it is common to use a stack of convolutional layers followed by recurrent layers and fully-connected feed-forward layers.
The following sections will give a brief overview of different optimization methods which are commonly used for learning game-playing behaviors with deep neural networks. These methods search for the optimal set of parameters to solve some problem. Optimization can also be used to find hyper-parameters, such as network architecture and learning parameters, and is well studied within deep learning <cit.>.
§.§ Optimizing Neural Networks
§.§.§ Supervised Learning
In supervised learning a model is trained from examples. During training, the model is asked to make a decision for which the correct answer is known. The error, i.e. the difference between the provided answer and the ground truth, is used as a loss to update the model. The goal is to achieve a model that can generalize beyond the training data and thus perform well on examples it has never seen before. Large data sets usually improve the model's ability to generalize.
In games, this data can come from play traces <cit.> (i.e. humans playing through the game while being recorded), allowing the agent to learn the mapping from the input state to output actions based on what actions the human performed in a given state.
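As a toy illustration of this supervised setup, the sketch below fits a small linear softmax policy to recorded (state, action) pairs with stochastic gradient descent. The feature and action counts and the tiny two-frame "trace" are assumptions for the example, not data or code from any cited system:

#include <cmath>
#include <cstdio>
#include <vector>
using namespace std;

const int FEATURES = 4;   // size of the state feature vector (assumed)
const int ACTIONS  = 3;   // e.g. left, right, jump (assumed)

// One recorded frame of a play trace: state features plus the human's action.
struct Frame { double state[FEATURES]; int action; };

// Linear softmax policy: logits = W * state.
struct Policy {
    double W[ACTIONS][FEATURES] = {};
    void probs(const double* s, double* p) const {
        double mx = -1e30, z = 0.0, logit[ACTIONS];
        for (int a = 0; a < ACTIONS; a++) {
            logit[a] = 0.0;
            for (int f = 0; f < FEATURES; f++) logit[a] += W[a][f] * s[f];
            if (logit[a] > mx) mx = logit[a];
        }
        for (int a = 0; a < ACTIONS; a++) { p[a] = exp(logit[a] - mx); z += p[a]; }
        for (int a = 0; a < ACTIONS; a++) p[a] /= z;
    }
};

// One SGD epoch of cross-entropy training on the recorded trace.
void train(Policy& pi, const vector<Frame>& trace, double lr) {
    for (const Frame& fr : trace) {
        double p[ACTIONS];
        pi.probs(fr.state, p);
        for (int a = 0; a < ACTIONS; a++) {
            double grad = p[a] - (a == fr.action ? 1.0 : 0.0); // dLoss/dlogit
            for (int f = 0; f < FEATURES; f++) pi.W[a][f] -= lr * grad * fr.state[f];
        }
    }
}

int main() {
    vector<Frame> trace = { {{1,0,0,1}, 2}, {{0,1,1,0}, 0} }; // fake play trace
    Policy pi;
    for (int epoch = 0; epoch < 100; epoch++) train(pi, trace, 0.1);
    double p[ACTIONS]; pi.probs(trace[0].state, p);
    printf("p(action 2 | first state) = %.3f\n", p[2]);
}

Deep approaches replace the linear map with a (convolutional) network, but the training signal is the same: imitate the action the human took in each recorded state.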
If the game is already solved by another algorithm, that algorithm can be used to generate training data, which is useful if it is too slow to run in real-time. While learning to play from existing data allows agents to quickly learn best practices, it is often brittle; the data available can be expensive to produce and may be missing key scenarios the agent should be able to deal with. For gameplay, the algorithm is limited to the strategies available in the data and cannot explore new ones itself. Therefore, in games, supervised algorithms are often combined with additional training through reinforcement learning algorithms <cit.>.
Another application of supervised learning in games is to learn the state transitions of a game. Instead of providing the action for a given state, the neural network can learn to predict the next state for an action-state pair. Thus, the network is essentially learning a model of the game, which can then be used to play the game better or to perform planning <cit.>.
§.§.§ Unsupervised Learning
Instead of learning a mapping between data and its labels, the objective in unsupervised learning is to discover patterns in the data. These algorithms can learn the distribution of features for a dataset, which can be used to cluster similar data, compress data into its essential features, or create new synthetic data that is characteristic of the original data. For games with sparse rewards (such as Montezuma's Revenge), learning from data in an unsupervised fashion is a potential solution and an important open deep learning challenge.
A prominent unsupervised learning technique in deep learning is the autoencoder, which is a neural network that attempts to learn the identity function such that the output is identical to the input <cit.>. The network consists of two parts: an encoder that maps the input x to a low-dimensional hidden vector h, and a decoder that attempts to re-construct x from h. The main idea is that by keeping h small, the network has to learn to compress the data and therefore learn a good representation. Researchers are beginning to apply such unsupervised algorithms to games to help distill high-dimensional data to more meaningful lower-dimensional data, but this research direction is still in its early stages <cit.>. For a more detailed overview of supervised and unsupervised learning see <cit.>.
§.§.§ Reinforcement Learning Approaches
In reinforcement learning (RL) an agent learns a behavior by interacting with an environment that provides a reward signal back to the agent. A video game can easily be modeled as an environment in an RL setting, wherein players are modeled as agents with a finite set of actions that can be taken at each step and the reward signal can be determined by the game score.
In RL, the agent relies on the reward signal. These signals can occur frequently, such as the change in score within a game, or infrequently, such as whether an agent has won or lost a game. Video games and RL go well together since most games give rewards for successful strategies. Open-world games do not always have a clear reward model and are thus challenging for RL algorithms.
A key challenge in applying RL to games with sparse rewards is to determine how to assign credit to the many previous actions when a reward signal is obtained. The reward R(s) for state s needs to be propagated back to the actions that led to it. Historically, this problem has been approached in several different ways, which are described below.
If an environment can be described as a Markov Decision Process (MDP), then the agent can build a probability tree of future states and their rewards. The probability tree can then be used to calculate the utility of the current state. For an RL agent this means learning the model P(s' | s, a), where P is the probability of state s' given state s and action a. With a model P, utilities can be calculated by
U(s) = R(s) + γmax_a ∑_s' P(s' | s, a)U(s'),
where γ is the discount factor for the utility of future states. This algorithm, known as Adaptive Dynamic Programming, can converge rather quickly as it directly handles the credit assignment problem <cit.>. The issue is that it has to build a probability tree over the whole problem space and is therefore intractable for large problems. As the games covered in this work are considered "large problems", we will not go into further detail on this algorithm.
Another approach to this problem is temporal difference (TD) learning. In TD learning, the agent learns the utilities U directly based on the observation that the current utility is equal to the current reward plus the utility value of the next state <cit.>. Instead of learning the state transition model P, it learns to model the utility U for every state. The update equation for U is:
U(s) = U(s) + α(R(s) + γ U(s') - U(s)),
where α is the learning rate of the algorithm. The equation above does not take into account how s' was chosen. If a reward is found at s_t, it will only affect U(s_t). The next time the agent is at s_t-1, U(s_t-1) will be aware of the future reward. This will propagate backward over time. Likewise, less common transitions will have less of an impact on utility values. Therefore, U will converge to the same values as are obtained from ADP, albeit slower.
There are alternative implementations of TD that learn rewards for state-action pairs. This allows an agent to choose an action, given the state, with no model of how to transition to future states. For this reason, these approaches are referred to as model-free methods. A popular model-free RL method is Q-learning <cit.>, where the utility of a state is equal to the maximum Q-value for that state. The update equation for Q-learning is:
Q(s, a) = Q(s, a) + α(R(s) + γmax_a' Q(s',a') - Q(s, a)).
In Q-learning, the future reward is accounted for by selecting the best-known future state-action pair. In a similar algorithm called SARSA (State-Action-Reward-State-Action), Q(s,a) is updated only when the next a has been selected and the next s is known <cit.>. This state-action pair is used instead of the maximum Q-value. This makes SARSA an on-policy method, in contrast to Q-learning which is off-policy, because SARSA's Q-value accounts for the agent's own policy. Q-learning and SARSA can use a neural network as a function approximator for the Q-function. The given Q update equation can be used to provide the new "expected" Q value for a state-action pair. The network can then be updated as it is in supervised learning.
An agent's policy π(s) determines which action to take given a state s. For Q-learning, a simple policy would be to always take the action with the highest Q-value. Yet, early on in training, Q-values are not very accurate and an agent could get stuck always exploiting a small reward. A learning agent should prioritize exploration of new actions as well as the exploitation of what it has learned. This problem is known as a multi-armed bandit problem and has been well explored.
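To make these tabular updates concrete, below is a minimal Q-learning sketch on a hypothetical 16-cell chain "game" with a single terminal reward; the environment and all constants are stand-ins for illustration, and action selection uses the ϵ-greedy strategy discussed next:

#include <algorithm>
#include <cstdio>
#include <random>

const int NUM_STATES  = 16;
const int NUM_ACTIONS = 2;               // 0 = left, 1 = right (assumed)
double Q[NUM_STATES][NUM_ACTIONS] = {};  // Q-table, initialized to zero

std::mt19937 rng(0);
std::uniform_real_distribution<double> unif(0.0, 1.0);

// Toy chain environment (an assumption, not a benchmark): walk along 16
// cells; the only reward is 1.0 for reaching the rightmost cell.
int step(int s, int a, double& r) {
    int s2 = (a == 0) ? std::max(s - 1, 0) : std::min(s + 1, NUM_STATES - 1);
    r = (s2 == NUM_STATES - 1) ? 1.0 : 0.0;
    return s2;
}
bool terminal(int s) { return s == NUM_STATES - 1; }

int greedy(int s) {                      // argmax_a Q(s, a)
    return int(std::max_element(Q[s], Q[s] + NUM_ACTIONS) - Q[s]);
}

void episode(double alpha, double gamma, double eps) {
    int s = 0;
    while (!terminal(s)) {
        // epsilon-greedy exploration (see the discussion below)
        int a = (unif(rng) < eps) ? int(unif(rng) * NUM_ACTIONS) : greedy(s);
        double r;
        int s2 = step(s, a, r);
        // the Q-learning update above, bootstrapping max_a' Q(s', a')
        double target = terminal(s2) ? r : r + gamma * Q[s2][greedy(s2)];
        Q[s][a] += alpha * (target - Q[s][a]);
        s = s2;
    }
}

int main() {
    for (int e = 0; e < 500; e++) episode(0.1, 0.95, 0.1);
    std::printf("Q(start, right) = %.3f\n", Q[0][1]);
}

For SARSA, the only change is that the bootstrap term uses the action actually selected in s' rather than the maximum.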
The ϵ-greedy strategy is a simple approach that selects a random action with probability ϵ and otherwise selects the (estimated) optimal action.
One approach to RL is to perform gradient descent in the policy's parameter space. Let π_θ(s, a) be the probability that action a is taken at state s given parameters θ. The basic policy gradient algorithm from the REINFORCE family of algorithms <cit.> updates θ using the gradient ∇_θ∑_a π_θ(s, a)R(s), where R(s) is the discounted cumulative reward obtained from s and forward. In practice, a sample of possible actions from the policy is taken and the policy is updated to increase the likelihood that the more successful actions are returned in the future. This lends itself well to neural networks, as π can be a neural network and θ the network weights.
Actor-Critic methods combine the policy gradient approach with TD learning, where an actor learns a policy π_θ(s,a) using the policy gradient algorithm, and the critic learns to approximate R using TD-learning <cit.>. Together, they are an effective approach to iteratively learning a policy. In actor-critic methods, there can either be a single network to predict both π and R, or two separate networks. For an overview of reinforcement learning applied to deep neural networks we suggest the article by Arulkumaran et al. <cit.>.
§.§.§ Evolutionary Approaches
The optimization techniques discussed so far rely on gradient descent, based on differentiation of a defined error. However, derivative-free optimization methods such as evolutionary algorithms have also been widely used to train neural networks, including, but not limited to, reinforcement learning tasks. This approach, often referred to as neuroevolution (NE), can optimize a network's weights as well as its topology/architecture. Because of their generality, NE approaches have been applied extensively to different types of video games. For a complete overview of this field, we refer the interested reader to our NE survey paper <cit.>.
Compared to gradient-descent based training methods, NE approaches have the benefit of not requiring the network to be differentiable and can be applied to supervised, unsupervised and reinforcement learning problems alike. The ability to evolve the topology, as well as the weights, potentially offers a way of automating the development of neural network architecture, which currently requires considerable domain knowledge. The promise of these techniques is that evolution could find a neural network topology that is better at playing a certain game than existing human-designed architectures. While NE has traditionally been applied to problems with lower input dimensionality than typical deep learning approaches, recently Salimans et al. <cit.> showed that evolution strategies, which rely on parameter-exploration through stochastic noise instead of calculating gradients, can achieve results competitive with current deep RL approaches for Atari video game playing, given enough computational resources.
§.§.§ Hybrid Learning Approaches
More recently, researchers have started to investigate hybrid approaches for video game playing, which combine deep learning methods with other machine learning approaches. Both Alvernaz and Togelius <cit.> and Poulsen et al. <cit.> experimented with combining a deep network, trained through gradient descent, that feeds a condensed feature representation into a network trained through artificial evolution.
These hybrids aim to combine the best of both approaches, as deep learning methods are able to learn directly from high-dimensional input, while evolutionary methods do not rely on differentiable architectures and work well in games with sparse rewards. Some results suggest that gradient-free methods seem to be better in the early stages of training to avoid premature convergence, while gradient-based methods may be better in the end when less exploration is needed <cit.>.
Another hybrid method for board game playing was AlphaGo <cit.>, which relied on deep neural networks and tree search methods to defeat the world champion in Go, and the approach in <cit.>, which applies planning on top of a predictive model. In general, the hybridization of ontogenetic RL (such as Q-learning) with phylogenetic methods (such as evolutionary algorithms) has the potential to be very impactful as it could enable concurrent learning on different timescales <cit.>.
§ GAME GENRES AND RESEARCH PLATFORMS
The fast progression of deep learning methods is undoubtedly due to the convention of comparing results on publicly available datasets. A similar convention in game AI is to use game environments to compare game-playing algorithms, in which methods are ranked based on their ability to score points or win in games. Conferences like the IEEE Conference on Computational Intelligence and Games run popular competitions in a variety of game environments.
This section describes popular game genres and research platforms used in the literature that are relevant to deep learning; some examples are shown in Figure <ref>. For each genre, we briefly outline what characterizes that genre and describe the challenges faced by algorithms playing games of the genre. The video games discussed in this paper have to a large extent supplanted an earlier generation of simpler control problems that long served as the main reinforcement learning benchmarks but are generally too simple for modern RL methods. In such classic control problems, the input is a simple feature vector, describing the position, velocity, angles, etc. Popular platforms for such problems are rllab <cit.>, which includes classic problems such as pole balancing and the mountain car problem, and MuJoCo (Multi-Joint dynamics with Contact), a physics engine for complex control tasks such as the humanoid walking task <cit.>.
§.§ Arcade Games
Classic arcade games, of the type found in the late seventies' and early eighties' arcade cabinets, home video game consoles and home computers, have been commonly used as AI benchmarks within the last decade. Representative platforms for this game type are the Atari 2600, Nintendo NES, Commodore 64 and ZX Spectrum. Most classic arcade games are characterized by movement in a two-dimensional space (sometimes represented isometrically to provide the illusion of three-dimensional movement), heavy use of graphical logics (where game rules are triggered by the intersection of sprites or images), continuous-time progression, and either continuous-space or discrete-space movement. The challenges of playing such games vary by game. Most games require fast reactions and precise timing, and a few games, in particular early sports games such as Track & Field (Konami, 1983), rely almost exclusively on speed and reactions. Many games require prioritization of several co-occurring events, which requires some ability to predict the behavior or trajectory of other entities in the game. This challenge is explicit in e.g.
Tapper (Bally Midway, 1983) but also in different ways part of platform games such as Super Mario Bros (Nintendo, 1985) and shooters such as Missile Command (Atari Inc., 1980). Another common requirement is navigating mazes or other complex environments, as exemplified clearly by games such as Pac-Man (Namco, 1980) and Boulder Dash (First Star Software, 1984). Some games, such as Montezuma's Revenge (Parker Brothers, 1984), require long-term planning involving the memorization of temporarily unobservable game states. Some games feature incomplete information and stochasticity, others are completely deterministic and fully observable.
The most notable game platform used for deep learning methods is the Arcade Learning Environment (ALE) <cit.>. ALE is built on top of the Atari 2600 emulator Stella and contains more than 50 original Atari 2600 games. The framework extracts the game score, the 160×210 screen pixels and the RAM content, which can be used as input for game-playing agents. ALE was the main environment explored in the first deep RL papers that used raw pixels as input. By enabling agents to learn from visual input, ALE thus differs from classic control problems in the reinforcement learning literature, such as the Cart Pole and Mountain Car problems. An overview and discussion of the ALE environment can be found in <cit.>.
Another platform for classic arcade games is the Retro Learning Environment (RLE), which currently contains seven games released for the Super Nintendo Entertainment System (SNES) <cit.>. Many of these games have 3D graphics and the controller allows for over 720 action combinations. SNES games are thus more complex and realistic than Atari 2600 games, but RLE has not been as popular as ALE.
The General Video Game AI (GVG-AI) framework <cit.> allows for easy creation and modification of games and levels using the Video Game Description Language (VGDL) <cit.>. This is ideal for testing the generality of agents on multiple games or levels. GVG-AI includes over 100 classic arcade games, each with five different levels.
§.§ Racing Games
Racing games are games where the player is tasked with controlling some kind of vehicle or character so as to reach a goal in the shortest possible time, or to traverse as far as possible along a track in a given time. Usually, the game employs a first-person perspective or a vantage point from just behind the player-controlled vehicle. The vast majority of racing games take a continuous input signal as steering input, similar to a steering wheel. Some games, such as those in the Forza Motorsport (Microsoft Studios, 2005–2016) or Real Racing (Firemint and EA Games, 2009–2013) series, allow for complex input including gear stick, clutch and handbrake, whereas more arcade-focused games such as those in the Need for Speed (Electronic Arts, 1994–2015) series typically have a simpler set of inputs and thus a lower branching factor.
A challenge that is common to all racing games is that the agent needs to control the position of the vehicle and adjust the acceleration or braking, using fine-tuned continuous input, so as to traverse the track as fast as possible. Doing this optimally requires at least short-term planning, one or two turns forward. If there are resources to be managed in the game, such as fuel, damage or speed boosts, this requires longer-term planning.
When other vehicles are present on the track, there is an adversarial planning aspect added, in trying to manage or block overtaking; this planning is often done in the presence of hidden information (position and resources of other vehicles on different parts of the track). A popular environment for visual reinforcement learning with realistic 3D graphics is the open racing car simulator TORCS <cit.>.
§.§ First-Person Shooters (FPS)
More advanced game environments have recently emerged for visual reinforcement learning agents in first-person shooters (FPS). In contrast to classic arcade games such as those in the ALE benchmark, FPSes have 3D graphics with partially observable states and are thus a more realistic environment to study. Usually, the viewpoint is that of the player-controlled character, though some games that are broadly in the FPS category adopt an over-the-shoulder viewpoint. The design of FPS games is such that part of the challenge is simply fast perception and reaction, in particular spotting enemies and quickly aiming at them. But there are other cognitive challenges as well, including orientation and movement in a complex three-dimensional environment, predicting actions and locations of multiple adversaries, and in some game modes also team-based collaboration. If visual inputs are used, there is the challenge of extracting relevant information from pixels.
Among FPS platforms is ViZDoom, a framework that allows agents to play the classic first-person shooter Doom (id Software, 1993–2017) using the screen buffer as input <cit.>. DeepMind Lab is a platform for 3D navigation and puzzle-solving tasks based on the Quake III Arena (id Software, 1999) engine <cit.>.
§.§ Open-World Games
Open-world games such as Minecraft (Mojang, 2011) or the Grand Theft Auto (Rockstar Games, 1997–2013) series are characterized by very non-linear gameplay, with a large game world to explore, either no set goals or many goals with unclear internal ordering, and large freedom of action at any given time. Key challenges for agents are exploring the world and setting goals which are realistic and meaningful. As this is a very complex challenge, most research uses these open environments to explore reinforcement learning methods that can reuse and transfer learned knowledge to new tasks. Project Malmo is a platform built on top of the open-world game Minecraft, which can be used to define many diverse and complex problems <cit.>.
§.§ Real-time Strategy Games
Strategy games are games where the player controls multiple characters or units, and the objective of the game is to prevail in some sort of conquest or conflict. Usually, but not always, the narrative and graphics reflect a military conflict, where units may be e.g. knights, tanks or battleships. The key challenge in strategy games is to lay out and execute complex plans involving multiple units. This challenge is in general significantly harder than the planning challenge in classic board games such as Chess, mainly because multiple units must be moved at any time and the effective branching factor is typically enormous. The planning horizon can be extremely long, where actions that are taken at the beginning of a game impact the overall strategy. In addition, there is the challenge of predicting the moves of one or several adversaries, who have multiple units themselves. Real-time Strategy Games (RTS) are strategy games which do not progress in discrete turns, but where actions can be taken at any point in time.
RTS games add the challenge of time prioritization to the already substantial challenges of playing strategy games.
The StarCraft (Blizzard Entertainment, 1998–2017) series is without a doubt the most studied game in the Real-Time Strategy (RTS) genre. The Brood War API (BWAPI)[http://bwapi.github.io/] enables software to communicate with StarCraft while the game runs, e.g. to extract state features and perform actions. BWAPI has been used extensively in game AI research, but currently, only a few examples exist where deep learning has been applied. TorchCraft is a library built on top of BWAPI that connects the scientific computing framework Torch to StarCraft to enable machine learning research for this game <cit.>. Additionally, DeepMind and Blizzard (the developers of StarCraft) have developed a machine learning API to support research in StarCraft II with features such as simplified visuals designed for convolutional networks <cit.>. This API contains several mini-challenges while it also supports the full 1v1 game setting. μRTS <cit.> and ELF <cit.> are two minimalistic RTS game engines that implement some of the features that are present in RTS games.
§.§ Team Sports Games
Popular sports games are typically based on team-based sports such as soccer, basketball, and football. These games aim to be as realistic as possible with life-like animations and 3D graphics. Several soccer-like environments have been used extensively as research platforms, both with physical robots and 2D/3D simulations, in the annual Robot World Cup Soccer Games (RoboCup) <cit.>. Keepaway Soccer is a simplistic soccer-like environment where one team of agents tries to maintain control of the ball while another team tries to gain control of it <cit.>. A similar environment for multi-agent learning is RoboCup 2D Half-Field-Offense (HFO), where teams of 2-3 players either take the role of offense or defense on one half of a soccer field <cit.>.
§.§ Text Adventure Games
A classic text adventure game is a form of interactive fiction where players are given descriptions and instructions in text, rather than graphics, and interact with the storyline through text-based commands <cit.>. These commands are usually used to query the system about the state, interact with characters in the story, collect and use items, or navigate the space in the fictional world.
These games typically implement one of three text-based interfaces: parser-based, choice-based, and hyperlink-based <cit.>. Choice-based and hyperlink-based interfaces provide the possible actions to the player at a given state as a list, out of context, or as links in the state description. Parser-based interfaces are, on the other hand, open to any input and the player has to learn what words the game understands. This is interesting for computers as it is much more akin to natural language, where you have to know what actions should exist based on your understanding of language and the given state.
Unlike some other game genres, like arcade games, text adventure games have not had a standard benchmark of games that everyone can compare against. This makes many results hard to compare directly. Much research has focused on games that run on Infocom's Z-Machine game engine, an engine that can play a lot of the early, classic games. Recently, Microsoft has introduced the environment TextWorld to help create a standardized text adventure environment <cit.>.
§.§ OpenAI Gym & Universe
OpenAI Gym is a large platform for comparing reinforcement learning algorithms with a single interface to a suite of different environments including ALE, GVG-AI, MuJoCo, Malmo, ViZDoom and more <cit.>. OpenAI Universe is an extension to OpenAI Gym that currently interfaces with more than a thousand Flash games and aims to add many modern video games in the future[https://universe.openai.com/].
§ DEEP LEARNING METHODS FOR GAME PLAYING
This section gives an overview of deep learning techniques used to play video games, divided by game genre. Table <ref> lists deep learning methods for each game genre and highlights which input features, network architecture, and training methods they rely upon. A typical neural network architecture used in deep RL is shown in Figure <ref>.
§.§ Arcade Games
The Arcade Learning Environment (ALE) consists of more than 50 Atari games and has been the main testbed for deep reinforcement learning algorithms that learn control policies directly from raw pixels. This section reviews the main advancements that have been demonstrated in ALE. An overview of these advancements is shown in Table <ref>.
Deep Q-Network (DQN) was the first learning algorithm that showed human expert-level control in ALE <cit.>. DQN was tested in seven Atari 2600 games and outperformed previous approaches, such as SARSA with feature construction <cit.> and neuroevolution <cit.>, as well as a human expert on three of the games. DQN is based on Q-learning, where a neural network model learns to approximate Q^π(s,a), which estimates the expected return of taking action a in state s while following a behavior policy μ. A simple network architecture consisting of two convolutional layers followed by a single fully-connected layer was used as a function approximator.
A key mechanism in DQN is experience replay <cit.>, where experiences in the form {s_t, a_t, r_t+1, s_t+1} are stored in a replay memory and randomly sampled in batches when the network is updated. This enables the algorithm to reuse and learn from past and uncorrelated experiences, which reduces the variance of the updates. DQN was later extended with a separate target Q-network, whose parameters are held fixed between individual updates, and was shown to achieve above human expert scores in 29 out of 49 tested games <cit.>. Deep Recurrent Q-Learning (DRQN) extends the DQN architecture with a recurrent layer before the output and works well for games with partially observable states <cit.>.
A distributed version of DQN was shown to outperform a non-distributed version in 41 of the 49 games using the Gorila architecture (General Reinforcement Learning Architecture) <cit.>. Gorila parallelizes actors that collect experiences into a distributed replay memory, as well as learners that train on samples from the same replay memory. One problem with the Q-learning algorithm is that it often overestimates action values because it uses the same value function for action-selection and action-evaluation. Double DQN, based on double Q-learning <cit.>, reduces the observed overestimation by learning two value networks with parameters θ and θ^', each of which uses the other network for value-estimation, such that the target is Y_t = R_t+1 + γ Q(S_t+1, argmax_a Q(S_t+1,a;θ_t);θ^'_t) <cit.>.
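As a concrete illustration, here is a minimal sketch of the uniform replay memory described above; the transition layout mirrors {s_t, a_t, r_t+1, s_t+1}, while the class and member names are our own and not taken from any cited implementation:

#include <cstddef>
#include <random>
#include <utility>
#include <vector>

// One transition {s_t, a_t, r_t+1, s_t+1}; states stored as feature vectors.
struct Transition {
    std::vector<float> s, s2;
    int action;
    float reward;
    bool terminal;   // needed so the target omits bootstrapping at episode end
};

class ReplayMemory {
public:
    explicit ReplayMemory(size_t capacity) : capacity_(capacity), next_(0) {}

    void add(Transition t) {                 // overwrite the oldest entry when full
        if (buffer_.size() < capacity_) buffer_.push_back(std::move(t));
        else buffer_[next_] = std::move(t);
        next_ = (next_ + 1) % capacity_;
    }

    // Uniformly sample a minibatch of stored (possibly repeated) transitions,
    // breaking the temporal correlation between consecutive experiences.
    std::vector<Transition> sample(size_t batch, std::mt19937& rng) const {
        std::uniform_int_distribution<size_t> pick(0, buffer_.size() - 1);
        std::vector<Transition> out;
        for (size_t i = 0; i < batch; i++) out.push_back(buffer_[pick(rng)]);
        return out;
    }

private:
    size_t capacity_, next_;
    std::vector<Transition> buffer_;
};

Prioritized variants, discussed next, replace the uniform pick with sampling proportional to each transition's TD-error.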
Another improvement is prioritized experience replay, in which important experiences are sampled more frequently based on the TD-error, which was shown to significantly improve both DQN and Double DQN <cit.>. Dueling DQN uses a network that is split into two streams after the convolutional layers to separately estimate the state-value V^π(s) and the action-advantage A^π(s,a), such that Q^π(s,a) = V^π(s) + A^π(s,a) <cit.>. Dueling DQN improves Double DQN and can also be combined with prioritized experience replay. Double DQN and Dueling DQN were also tested in the five more complex games in the RLE and achieved a mean score of around 50% of a human expert <cit.>. The best result in these experiments was by Dueling DQN in the game Mortal Kombat (Midway, 1992) with 128%.
Bootstrapped DQN improves exploration by training multiple Q-networks. A randomly sampled network is used during each training episode and bootstrap masks modulate the gradients to train the networks differently <cit.>. Robust policies can be learned with DQN for competitive or cooperative multi-player games by training one network for each player and playing them against each other in the training process <cit.>. Agents trained in multiplayer mode perform very well against novel opponents, whereas agents trained against a stationary algorithm fail to generalize their strategies to novel adversaries.
Multi-threaded asynchronous variants of DQN, SARSA and Actor-Critic methods can utilize multiple CPU threads on a single machine, reducing training time roughly linearly with the number of parallel threads <cit.>. These variants do not rely on a replay memory because the network is updated on uncorrelated experiences from parallel actors, which also helps to stabilize on-policy methods. The Asynchronous Advantage Actor-Critic (A3C) algorithm is an actor-critic method that uses several parallel agents to collect experiences that all asynchronously update a global actor-critic network. A3C outperformed Prioritized Dueling DQN, which was trained for 8 days on a GPU, with just half the training time on a CPU <cit.>.
An actor-critic method with experience replay (ACER) implements an efficient trust region policy method that forces updates to not deviate far from a running average of past policies <cit.>. The performance of ACER in ALE matches Dueling DQN with prioritized experience replay and A3C without experience replay, while it is much more data-efficient. A3C with progressive neural networks <cit.> can effectively transfer learning from one game to another. The training is done by instantiating a network for every new task with connections to all the previously learned networks. This gives the new network access to knowledge already learned.
The Advantage Actor-Critic (A2C), a synchronous variant of A3C <cit.>, updates the parameters synchronously in batches and has comparable performance while only maintaining one neural network <cit.>. Actor-Critic using Kronecker-Factored Trust Region (ACKTR) extends A2C by approximating the natural policy gradient updates for both the actor and the critic <cit.>. In Atari, ACKTR has slower updates compared to A2C (at most 25% slower per time step) but is more sample-efficient (e.g. by a factor of 10 in Atlantis) <cit.>. Trust Region Policy Optimization (TRPO) uses a surrogate objective with theoretical guarantees for monotonic policy improvement, while it practically implements an approximation called trust region <cit.>.
This is done by constraining network updates with a bound on the KL divergence between the current and the updated policy. TRPO has robust and data-efficient performance in Atari games, while it has high memory requirements and several restrictions. Proximal Policy Optimization (PPO) is an improvement on TRPO that uses a similar surrogate objective <cit.>, but instead uses a soft constraint (originally suggested in <cit.>) by adding the KL-divergence as a penalty. Instead of having a fixed penalty coefficient, it uses a clipped surrogate objective that penalizes policy updates outside some specified interval. PPO was shown to be more sample-efficient than A2C and on par with ACER in Atari, while PPO does not rely on replay memory. PPO was also shown to have comparable or better performance than TRPO in continuous control tasks while being simpler and easier to parallelize.
IMPALA (Importance Weighted Actor-Learner Architecture) is an actor-critic method where multiple learners with GPU access share gradients between each other while being synchronously updated from a set of actors <cit.>. This method can scale to a large number of machines and outperforms A3C. Additionally, IMPALA was trained, with one set of parameters, to play all 57 Atari games in ALE with a mean human-normalized score of 176.9% (median of 59.7%) <cit.>. Experiences collected by the actors in the IMPALA setup can lag behind the learners' policy and thus result in off-policy learning. This discrepancy is mitigated through a V-trace algorithm that weights the importance of experiences based on the difference between the actor's and learner's policies <cit.>.
The UNREAL (UNsupervised REinforcement and Auxiliary Learning) algorithm is based on A3C but uses a replay memory from which it learns auxiliary tasks and pseudo-reward functions concurrently <cit.>. UNREAL only shows a small improvement over vanilla A3C in ALE, but larger improvements in other domains (see Section <ref>).
Distributional DQN takes a distributional perspective on reinforcement learning by treating Q(s,a) as an approximate distribution of returns instead of a single approximate expectation for each action <cit.>. The distribution is divided into a set of so-called atoms, which determines the granularity of the distribution. Their results show that the more fine-grained the distributions are, the better the results, and with 51 atoms (this variant was called C51) it achieved mean scores in ALE almost comparable to UNREAL.
In NoisyNets, noise is added to the network parameters and a unique noise level for each parameter is learned using gradient descent <cit.>. In contrast to ϵ-greedy exploration, where an agent either samples actions from the policy or from a uniform random distribution, NoisyNets use a noisy version of the policy to ensure exploration, and this was shown to improve DQN (NoisyNet-DQN) and A3C (NoisyNet-A3C).
Rainbow combines several DQN enhancements: Double DQN, Prioritized Replay, Dueling DQN, Distributional DQN, and NoisyNets, and achieved a mean score higher than any of the enhancements individually <cit.>.
Evolution Strategies (ES) are black-box optimization algorithms that rely on parameter-exploration through stochastic noise instead of calculating gradients and were found to be highly parallelizable, with a linear speedup in training time when more CPUs are used <cit.>.
720 CPUs were used for one hour, after which ES managed to outperform A3C (which ran for 4 days) in 23 out of 51 games, while ES used 3 to 10 times as much data due to its high parallelization. ES was only run for a single day and thus its full potential is currently unknown. Novelty search is a popular algorithm that can overcome environments with deceptive and/or sparse rewards by guiding the search towards novel behaviors <cit.>. ES has been extended to use novelty search (NS-ES), which outperforms ES on several challenging Atari games by defining novel behaviors based on the RAM states <cit.>. A quality-diversity variant called NSR-ES, which uses both novelty and the reward signal, reaches an even higher performance <cit.>. NS-ES and NSR-ES reached worse results on a few games, possibly those where the reward function is not sparse or deceptive. A simple genetic algorithm with a Gaussian noise mutation operator evolves the parameters of a deep neural network (Deep GA) and can achieve surprisingly good scores across several Atari games <cit.>. Deep GA shows comparable results to DQN, A3C, and ES on 13 Atari games using up to thousands of CPUs in parallel. Additionally, random search, given roughly the same amount of computation, was shown to outperform DQN on 4 out of 13 games and A3C on 5 games <cit.>. While there has been concern that evolutionary methods do not scale as well as gradient descent-based methods, one possibility is separating the feature construction from the policy network; evolutionary algorithms can then create extremely small networks that still play well <cit.>. A few supervised learning approaches have been applied to arcade games. In Guo et al. <cit.> a slow planning agent was applied offline, using Monte-Carlo Tree Search, to generate data for training a CNN via multinomial classification. This approach, called UCTtoClassification, was shown to outperform DQN. Policy distillation <cit.> or actor-mimic <cit.> methods can be used to train one network to mimic a set of policies (e.g. for different games). These methods can reduce the size of the network and sometimes also improve the performance. A frame prediction model can be learned from a dataset generated by a DQN agent using the encoding-transformation-decoding network architecture; the model can then be used to improve exploration in a retraining phase <cit.>. Self-supervised tasks, such as reward prediction, validation of state-successor pairs, and mapping states and successor states to actions, can define auxiliary losses used in pre-training of a policy network, which ultimately can improve learning <cit.>. The training objective provides feedback to the agent while the performance objective specifies the target behavior. Often, a single reward function takes both roles, but for some games, the performance objective does not guide the training sufficiently. The Hybrid Reward Architecture (HRA) splits the reward function into n different reward functions, each of which is assigned a separate learning agent <cit.>. HRA does this by having n output streams in the network, and thus n Q-values, which are combined when actions are selected. HRA was able to achieve the maximum possible score in less than 3,000 episodes.
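A sketch of such a multi-stream network is given below; the layer sizes are illustrative assumptions, and the heads are combined by summation here for concreteness (the exact aggregation used in the cited work may differ):

    import torch
    import torch.nn as nn

    class HRAQNetwork(nn.Module):
        def __init__(self, state_dim, n_actions, n_heads):
            super().__init__()
            # Shared body followed by one Q-value stream per reward component.
            self.body = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU())
            self.heads = nn.ModuleList(
                [nn.Linear(128, n_actions) for _ in range(n_heads)])

        def forward(self, state):
            h = self.body(state)
            # Shape (n_heads, batch, n_actions): one set of Q-values per head.
            q_per_head = torch.stack([head(h) for head in self.heads])
            # Combine the heads' Q-values before selecting an action.
            return q_per_head.sum(dim=0)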
§.§ Montezuma's Revenge
Environments with sparse feedback remain an open challenge for reinforcement learning. The game Montezuma's Revenge is a good example of such an environment in ALE and has thus been studied in more detail and used for benchmarking learning methods based on intrinsic motivation and curiosity. The main idea of applying intrinsic motivation is to improve the exploration of the environment based on some self-rewarding system, which eventually will help the agent to obtain an extrinsic reward. DQN fails to obtain any reward in this game (receiving a score of 0) and Gorila achieves an average score of just 4.2. A human expert can achieve 4,367 points and it is clear that the methods presented so far are unable to deal with environments with such sparse rewards. A few promising methods aim to overcome these challenges. Hierarchical-DQN (h-DQN) <cit.> operates on two temporal scales, where one Q-value function Q_1(s,a;g), the controller, learns a policy over actions that satisfy goals chosen by a higher-level Q-value function Q_2(s, g), the meta-controller, which learns a policy over intrinsic goals (i.e. which goals to select). This method was able to reach an average score of around 400 in Montezuma's Revenge, where goals were defined as states in which the agent reaches (collides with) a certain type of object. This method, therefore, must rely on some object detection mechanism. Pseudo-counts have been used to provide intrinsic motivation in the form of exploration bonuses when unexpected pixel configurations are observed and can be derived from CTS density models <cit.> or neural density models <cit.>. Density models assign probabilities to images, and a model's pseudo-count of an observed image is the model's change in prediction compared to being trained one additional time on the same image. Impressive results were achieved in Montezuma's Revenge and other hard Atari games by combining DQN with the CTS density model (DQN-CTS) or the PixelCNN density model (DQN-PixelCNN) <cit.>. Interestingly, the results were less impressive when the CTS density model was combined with A3C (A3C-CTS) <cit.>. Ape-X DQN is a distributed DQN architecture similar to Gorila, in that actors are separated from the learner. Ape-X DQN was able to reach state-of-the-art results across the 57 Atari games using 376 cores and 1 GPU, running at ∼50K FPS <cit.>. Deep Q-learning from Demonstrations (DQfD) draws samples from an experience replay buffer that is initialized with demonstration data from a human expert and is superior to previous methods on 11 Atari games with sparse rewards <cit.>. Ape-X DQfD combines the distributed architecture from Ape-X and the learning algorithm from DQfD using expert data and was shown to outperform all previous methods in ALE as well as beating level 1 in Montezuma's Revenge <cit.>. To improve the performance, Kaplan et al. augmented the agent training with text instructions. An instruction-based reinforcement learning approach that uses both a CNN for visual input and an RNN for text-based instruction input managed to achieve a score of 3,500 points. Instructions were linked to positions in rooms and agents were rewarded when they reached those locations <cit.>, demonstrating a fruitful collaboration between a human and a learning algorithm. Experiments in Montezuma's Revenge also showed that the network learned to generalize to unseen instructions that were similar to previous instructions.
Similar work demonstrates how an agent can execute text-based commands in a 2D maze-like environment called XWORLD, such as walking to and picking up objects, after having learned a teacher's language <cit.>. An RNN-based language module is connected to a CNN-based perception module. These two modules were then connected to an action-selection module and a recognition module that learns the teacher's language in a question answering process.
§.§ Racing Games
There are generally two paradigms for vision-based autonomous driving highlighted in Chen et al. <cit.>: (1) end-to-end systems that learn to map images to actions directly (behavior reflex), and (2) systems that parse the sensor data to make informed decisions (mediated perception). An approach that falls in between these paradigms is direct perception, where a CNN learns to map from images to meaningful affordance indicators, such as the car angle and distance to lane markings, from which a simple controller can make decisions <cit.>. Direct perception was trained on recordings of 12 hours of human driving in TORCS and the trained system was able to drive in very diverse environments. Remarkably, the network was also able to generalize to real images. End-to-end reinforcement learning algorithms such as DQN cannot be directly applied to continuous environments such as racing games because the action space must be discrete and of relatively low dimensionality. Instead, policy gradient methods, such as actor-critic <cit.> and Deterministic Policy Gradient (DPG) <cit.>, can learn policies in high-dimensional and continuous action spaces. Deep DPG (DDPG) is a policy gradient method that implements both experience replay and a separate target network and was used to train a CNN end-to-end in TORCS from images <cit.>. The aforementioned A3C methods have also been applied to the racing game TORCS using only pixels as input <cit.>. In those experiments, rewards were shaped as the agent's velocity on the track, and after 12 hours of training, A3C reached a score between roughly 75% and 90% of a human tester in tracks with and without opponent bots, respectively. While most approaches to training deep networks from high-dimensional input in video games are based on gradient descent, a notable exception is an approach by Koutník et al. <cit.>, where Fourier-type coefficients were evolved that encoded a recurrent network with over 1 million weights. Here, evolution was able to find a high-performing controller for TORCS that only relied on high-dimensional visual input.
§.§ First-Person Shooters
Kempka et al. <cit.> demonstrated that a CNN with max-pooling and fully connected layers trained with DQN can achieve human-like behaviors in basic scenarios. In the Visual Doom AI Competition 2016[http://vizdoom.cs.put.edu.pl/competition-cig-2016], a number of participants submitted pre-trained neural network-based agents that competed in a multi-player deathmatch setting. Both a limited competition, in which bots competed in known levels, and a full competition, in which bots competed in unseen levels, were held. The winner of the limited track used a CNN trained with A3C using reward shaping and curriculum learning <cit.>. Reward shaping tackled the problem of sparse and delayed rewards by giving artificial positive rewards for picking up items and negative rewards for using ammunition and losing health.
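As an illustration of this kind of shaping, the following sketch wraps an environment and adjusts the reward based on changes in item, ammunition and health counters; the environment interface, the game-state fields and the coefficients are hypothetical and not taken from the cited entry:

    class ShapedDoomReward:
        def __init__(self, env, w_item=0.5, w_ammo=0.05, w_health=0.03):
            self.env = env
            self.w_item, self.w_ammo, self.w_health = w_item, w_ammo, w_health

        def step(self, action):
            prev = self.env.game_state()   # assumed to expose item/ammo/health counters
            obs, reward, done = self.env.step(action)
            cur = self.env.game_state()
            reward += self.w_item * max(cur.items - prev.items, 0)      # item pickups
            reward -= self.w_ammo * max(prev.ammo - cur.ammo, 0)        # ammunition spent
            reward -= self.w_health * max(prev.health - cur.health, 0)  # health lost
            return obs, reward, done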
Curriculum learning attempts to speed up learning by training on a set of progressively harder environments <cit.>. The second-place entry in the limited track used a modified DRQN network architecture with an additional stream of fully connected layers to learn supervised auxiliary tasks such as enemy detection, with the purpose of speeding up the training of the convolutional layers <cit.>. Position inference and object mapping from pixels and depth-buffers using Simultaneous Localization and Mapping (SLAM) also improve DQN in Doom <cit.>. The winner of the full deathmatch competition implemented a Direct Future Prediction (DFP) approach that was shown to outperform DQN and A3C <cit.>. The architecture used in DFP has three streams: one for the screen pixels, one for lower-dimensional measurements describing the agent's current state, and one for describing the agent's goal, which is a linear combination of prioritized measurements. DFP collects experiences in a memory and is trained with supervised learning techniques to predict the future measurements based on the current state, goal and selected action. During training, actions are selected that yield the best-predicted outcome, based on the current goal. This method can be trained on various goals and generalizes to unseen goals at test time. Navigation in 3D environments is one of the important skills required for FPS games and has been studied extensively. A CNN+LSTM network was trained with A3C extended with additional outputs predicting the pixel depths and loop closure, showing significant improvements <cit.>. The UNREAL algorithm, based on A3C, implements an auxiliary task that trains the network to predict the immediate subsequent future reward from a sequence of consecutive observations. UNREAL was tested on fruit gathering and exploration tasks in OpenArena and achieved a mean human-normalized score of 87%, where A3C only achieved 53% <cit.>. The ability to transfer knowledge to new environments can reduce the learning time and can in some cases be crucial for challenging tasks. Transfer learning can be achieved by pre-training a network in similar environments with simpler tasks or by using random textures during training <cit.>. The Distill and Transfer Learning (Distral) method trains several worker policies (one for each task) concurrently and shares a distilled policy <cit.>. The worker policies are regularized to stay close to the shared policy, which will be the centroid of the worker policies. Distral was applied to DeepMind Lab. The Intrinsic Curiosity Module (ICM), consisting of several neural networks, computes an intrinsic reward each time step based on the agent's inability to predict the outcome of taking actions. It was shown to learn to navigate in complex Doom and Super Mario levels relying only on intrinsic rewards <cit.>.
§.§ Open-World Games
The Hierarchical Deep Reinforcement Learning Network (H-DRLN) architecture implements a lifelong learning framework, which is shown to be able to transfer knowledge between simple tasks in Minecraft such as navigation, item collection, and placement tasks <cit.>. H-DRLN uses a variation of policy distillation <cit.> to retain and encapsulate learned knowledge into a single network.
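A minimal sketch of a policy distillation objective of this kind is the following: the student network is trained to match the (optionally temperature-softened) action distribution of a teacher policy under a KL divergence loss; how states are sampled and which teacher policies are used are left open here:

    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, temperature=1.0):
        # Teacher action distribution, optionally softened by a temperature.
        teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
        student_log_probs = F.log_softmax(student_logits, dim=-1)
        # KL(teacher || student), averaged over the batch of states.
        return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")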
Neural Turing Machines (NTMs) are fully differentiable neural networks coupled with an external memory resource, which can learn to solve simple algorithmic problems such as copying and sorting <cit.>. Two memory-based variations inspired by the NTM, called Recurrent Memory Q-Network (RMQN) and Feedback Recurrent Memory Q-Network (FRMQN), were able to solve complex navigation tasks that require memory and active perception <cit.>. The Teacher-Student Curriculum Learning (TSCL) framework incorporates a teacher that prioritizes tasks wherein the student's performance is either increasing (learning) or decreasing (forgetting) <cit.>. TSCL enabled a policy gradient learning method to solve mazes that were otherwise not possible with a uniform sampling of subtasks.
§.§ Real-Time Strategy Games
The previous sections described methods that learn to play games end-to-end, i.e. a neural network is trained to map states directly to actions. Real-Time Strategy (RTS) games, however, offer much more complex environments, in which players have to control multiple agents simultaneously in real-time on a partially observable map. Additionally, RTS games have no in-game scoring and thus the reward is determined by who wins the game. For these reasons, learning to play RTS games end-to-end may be infeasible for the foreseeable future and instead, sub-problems have been studied so far. For the simplistic RTS platform μRTS a CNN was trained as a state evaluator using supervised learning on a generated data set and used in combination with Monte Carlo Tree Search <cit.>. This approach performed significantly better than previous evaluation methods. StarCraft has been a popular game platform for AI research, but so far only with a few deep learning approaches. Deep learning methods for StarCraft have focused on micromanagement (unit control) or build-order planning, ignoring other aspects of the game. The problem of delayed rewards in StarCraft can be circumvented in combat scenarios; here rewards can be shaped as the difference between damage inflicted and damage incurred <cit.>. States and actions are often described locally relative to units, and are extracted from the game engine. If agents are trained individually it is difficult to know which agents contributed to the global reward <cit.>, a problem known as the multi-agent credit assignment problem. One approach is to train a generic network, which controls each unit separately, and search in policy space using Zero-Order optimization based on the reward accrued in each episode <cit.>. This strategy was able to learn successful policies for armies of up to 15 units. Independent Q-learning (IQL) simplifies the multi-agent RL problem by controlling units individually while treating the other agents as if they were part of the environment <cit.>. This enables Q-learning to scale well to a large number of agents. However, when combining IQL with recent techniques such as experience replay, agents tend to optimize their policies based on experiences with obsolete policies. This problem is overcome by applying fingerprints to experiences and by applying an importance-weighted loss function that naturally decays obsolete data, which has shown improvements for some small combat scenarios <cit.>. The Multiagent Bidirectionally-Coordinated Network (BiCNet) implements a vectorized actor-critic framework based on a bi-directional RNN, with one dimension for every agent, and outputs a sequence of actions <cit.>. This network architecture differs from the other approaches in that it can handle an arbitrary number of units of different types.
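The independent Q-learning scheme described above is easy to illustrate with a minimal tabular sketch, in which each unit keeps its own Q-function and simply treats all other agents as part of the environment; discrete local states and actions are assumed for brevity:

    import numpy as np

    class IndependentQLearner:
        def __init__(self, n_states, n_actions, lr=0.1, gamma=0.99):
            self.q = np.zeros((n_states, n_actions))
            self.lr, self.gamma = lr, gamma

        def update(self, state, action, reward, next_state):
            # Standard one-step Q-learning; the other agents' influence only
            # enters through the (now non-stationary) observed transitions.
            td_target = reward + self.gamma * self.q[next_state].max()
            self.q[state, action] += self.lr * (td_target - self.q[state, action])

    # One learner per unit, each trained on its own local observations:
    # learners = [IndependentQLearner(n_states, n_actions) for _ in range(n_units)]

This simplicity is what makes IQL scale to many agents, but it is also the source of the non-stationarity problem that the fingerprint and importance-weighting techniques mentioned above try to mitigate.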
Counterfactual multi-agent (COMA) policy gradients is an actor-critic method with a centralized critic and decentralized actors that addresses the multi-agent credit assignment problem with a counterfactual baseline computed by the critic network <cit.>. COMA achieves state-of-the-art results, for decentralized methods, in small combat scenarios with up to ten units on each side. Deep learning has also been applied to build-order planning in StarCraft using a macro-based supervised learning approach to imitate human strategies <cit.>. The trained network was integrated as a module in an existing bot capable of playing the full game with otherwise hand-crafted behavior. Another macro-based approach, here using RL instead of SL, called Convolutional Neural Network Fitted Q-Learning (CNNFQ), was trained with Double DQN for build-order planning in StarCraft II and was able to win against medium-level scripted bots on small maps <cit.>. A macro action-based reinforcement learning method that uses Proximal Policy Optimization (PPO) for build-order planning and high-level attack planning was able to outperform the built-in bot in StarCraft II at level 10 <cit.>. This is particularly impressive as the level 10 bot cheats by having full vision of the map and faster resource harvesting. The results were obtained using 1920 parallel actors on 3840 CPUs across 80 machines and only for one matchup on one map. This system won a few games against Platinum-level human players but lost all games against Diamond-level players. The authors report that the learned policy "lacks strategy diversity in order to consistently beat human players" <cit.>.
§.§ Team Sports Games
Deep Deterministic Policy Gradients (DDPG) was applied to RoboCup 2D Half-Field-Offense (HFO) <cit.>. The actor network used two output streams, one for the selection of discrete action types (dash, turn, tackle, and kick) and one for each action type's 1-2 continuously-valued parameters (power and direction). The Inverting Gradients bounding approach downscales the gradients as the output approaches its boundaries and inverts the gradients if the parameter exceeds them. This approach outperformed both SARSA and the best agent in the 2012 RoboCup. DDPG was also applied to HFO by mixing on-policy updates with 1-step Q-Learning updates <cit.> and outperformed a hand-coded agent with expert knowledge with one player on each team.
§.§ Physics Games
As video games are usually a reflection or simplification of the real world, it can be fruitful to learn an intuition about the physical laws in an environment. A predictive neural network using an object-centered approach (also called fixations) learned to run simulations of a billiards game after being trained on random interactions <cit.>. This predictive model could then be used for planning actions in the game. A similar predictive approach was tested in a 3D game-like environment, using the Unreal Engine, where ResNet-34 <cit.> (a deep residual network used for image classification) was extended and trained to predict the visual outcome of blocks that were stacked such that they would usually fall <cit.>. Residual networks implement shortcut connections that skip layers, which can improve learning in very deep networks.
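A basic residual block of the kind used in such networks can be sketched as follows (batch normalization and projection shortcuts are omitted for brevity):

    import torch.nn as nn
    import torch.nn.functional as F

    class ResidualBlock(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
            self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

        def forward(self, x):
            h = F.relu(self.conv1(x))
            h = self.conv2(h)
            # Shortcut connection: the input skips past both conv layers.
            return F.relu(x + h)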
§.§ Text Adventure Games
Text adventure games, in which both states and actions are presented as text only, are a special video game genre. A network architecture called LSTM-DQN <cit.> was designed specifically to play these games and is implemented using LSTM networks that convert text from the world state into a vector representation, which estimates Q-values for all possible state-action pairs. LSTM-DQN was able to complete between 96% and 100% of the quests on average in two different text adventure games. To improve on these results, researchers have moved toward learning language models and word embeddings to augment the neural network. An approach that combines reinforcement learning with explicit language understanding is the Deep Reinforcement Relevance Net (DRRN) <cit.>. This approach has two networks that learn word embeddings: one embeds the state description, the other embeds the action description. The relevance between the two embedding vectors is calculated with an interaction function, such as the inner product of the vectors or a bilinear operation. The relevance is then used as the Q-value and the whole process is trained end-to-end with deep Q-learning. This approach allows the network to generalize to phrases not seen during training, which is an improvement for very large text games. The approach was tested on the text games Saving John and Machine of Death, both choice-based games. Taking language modeling further, Fulda et al. explicitly modeled language affordances to assist in action selection <cit.>. A word embedding is first learned from a Wikipedia corpus via unsupervised learning <cit.> and this embedding is then used to calculate analogies such as song is to sing as bike is to x, where x can then be calculated in the embedding space <cit.>. The authors build a dictionary of verb-noun pairs and another one of object-manipulation pairs. Using the learned affordances, the model can suggest a small set of actions for a state description. Policies were learned with Q-learning and tested on 50 Z-Machine games. The Golovin Agent focuses exclusively on language models <cit.> that are pre-trained from a corpus of books in the fantasy genre. Using word embeddings, the agent can replace synonyms with known words. Golovin is built of five command generators: General, Movement, Battle, Gather, and Inventory. Commands are generated by analyzing the state description and using the language models to calculate and sample from a number of features for each command. Golovin uses no reinforcement learning and scores comparably to the affordance method. Most recently, Zahavy et al. proposed another DQN method <cit.>. This method uses a type of attention mechanism called the Action Elimination Network (AEN). In parser-based games, the action space is very large. The AEN learns, while playing, to predict which actions will have no effect for a given state description. The AEN is then used to eliminate most of the available actions for a given state, after which the remaining actions are evaluated with the Q-network. The whole process is trained end-to-end and achieves performance similar to DQN with a manually constrained action space. Despite the progress made for text adventure games, current techniques are still far from matching human performance. Outside of text adventure games, natural language processing has been used for other text-based games as well. To facilitate communication, a deep distributed recurrent Q-network (DDRQN) architecture was used to train several agents to learn a communication protocol to solve the multi-agent Hats and Switch riddles <cit.>. One of the novel modifications in DDRQN is that agents use shared network weights that are conditioned on their unique ID, which enables faster learning while retaining diversity between agents.
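The relevance-as-Q-value idea of DRRN, described earlier in this section, can be sketched as follows; the two text encoders are stand-ins (linear maps over assumed fixed-size text vectors) rather than the embedding networks of the original work, and the inner-product variant of the interaction function is used:

    import torch
    import torch.nn as nn

    class DRRNSketch(nn.Module):
        def __init__(self, text_dim, embed_dim=64):
            super().__init__()
            # Separate embedding networks for state and action descriptions.
            self.state_encoder = nn.Linear(text_dim, embed_dim)
            self.action_encoder = nn.Linear(text_dim, embed_dim)

        def forward(self, state_vec, action_vecs):
            s = self.state_encoder(state_vec)     # shape: (embed_dim,)
            a = self.action_encoder(action_vecs)  # shape: (n_actions, embed_dim)
            # Relevance (inner product) of the state with each action = Q(s,a).
            return a @ s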
§ HISTORICAL OVERVIEW OF DEEP LEARNING IN GAMES
The previous section discussed deep learning methods in games according to the game type. This section instead looks at the development of these methods in terms of how they influenced each other, giving a historical overview of the deep learning methods reviewed in the previous section. Many of these methods are inspired by or directly build upon previous methods, while some are applied to different game genres and others are tailored to specific types of games. Figure <ref> shows an influence diagram with the reviewed methods and their relations to earlier methods (the current section can be read as a long caption to that figure). Each method in the diagram is colored to show the game benchmark. DQN <cit.> was very influential as an algorithm that uses gradient-based deep learning for pixel-based video game playing and was originally applied to the Atari benchmark. Note that earlier approaches exist, albeit with less success, such as <cit.>, as well as successful gradient-free methods <cit.>. Double DQN <cit.> and Dueling DQN <cit.> are early extensions that use multiple networks to improve estimations. DRQN <cit.> uses a recurrent neural network as the Q-network. Prioritized DQN <cit.> is another early extension; it adds improved experience replay sampling. Bootstrapped DQN <cit.> builds off of Double DQN with a different improved sampling strategy. Further DQN enhancements used for Atari include: the C51 algorithm <cit.>, which is based on DQN but changes the Q function; NoisyNets, which make the networks stochastic to aid with exploration <cit.>; DQfD, which also learns from examples <cit.>; and Rainbow, which combines many of these state-of-the-art techniques together <cit.>. Gorila was the first asynchronous method based on DQN <cit.> and was followed by A3C <cit.>, which uses multiple asynchronous agents for an actor-critic approach. This was further extended at the end of 2016 with UNREAL <cit.>, which incorporates work done on auxiliary learning to handle sparse feedback environments. Since then there have been many additional extensions of A3C <cit.>, <cit.>, <cit.>, <cit.>. IMPALA took this further by focusing on a single trained agent that can play all of the Atari games <cit.>. In 2018, the move toward large-scale distributed learning continued and advanced with Ape-X <cit.>. Evolutionary techniques are also seeing a renaissance for video games. First, Salimans et al. showed that Evolution Strategies could compete with deep RL <cit.>. Then two more papers came out of Uber AI: one showing that derivative-free evolutionary algorithms can compete with deep RL <cit.>, and an extension to ES <cit.>. These methods benefit from easy parallelization and possibly have some advantage in exploration. Another approach used on Atari around the time that DQN was introduced is Trust Region Policy Optimization <cit.>. It optimizes a surrogate objective that is updated from interactions with the environment. Later, in 2017, Proximal Policy Optimization was introduced as a more robust, simpler surrogate optimization scheme that also draws from innovations in A3C <cit.>. Some extensions are specific to Montezuma's Revenge, which is a game within the ALE benchmark that is particularly difficult due to sparse rewards and hidden information.
The algorithms that do best in Montezuma's Revenge do so by extending DQN with intrinsic motivation <cit.> and hierarchical learning <cit.>. Ms. Pac-Man was also singled out from Atari; here the reward function was learned in separate parts to make the agent more robust to new environments <cit.>. Doom is another benchmark that is new as of 2016. Most of the work for this game has extended methods designed for Atari to handle richer data. A3C + Curriculum Learning <cit.> proposes using curriculum learning with A3C. DRQN + Auxiliary Learning <cit.> extends DRQN by adding additional rewards during training. DQN + SLAM <cit.> combines techniques for mapping unknown environments with DQN. DFP <cit.> is the only approach that does not extend an Atari technique. Like UCTtoClassification <cit.> for Atari, Object-centric Prediction <cit.> for billiards, and Direct Perception <cit.> for racing, DFP uses supervised learning to learn about the game. All of these, except UCTtoClassification, learn to directly predict some future state of the game and select actions based on this information. None of these works, all from different years, refer to each other. Besides Direct Perception, the only unique work for racing is Deep DPG <cit.>, which extends DQN for continuous control. This technique has been extended for RoboCup Soccer <cit.> <cit.>. Work on StarCraft micromanagement (unit control) based on Q-learning started in late 2016. IQL <cit.> extends Prioritized DQN by treating all other agents as part of the environment. COMA <cit.> extends IQL by calculating counterfactual rewards, the marginal contribution each agent added. BiCNet <cit.> and Zero Order Optimization <cit.> are reinforcement learning based but are not derived from DQN. Another popular approach is hierarchical learning. In 2017 it was tried with replay data <cit.> and in 2018 state-of-the-art results were achieved by using it with two different RL methods <cit.>. Some work published in 2016 extends DQN to play Minecraft <cit.>. At around the same time, techniques were developed to make DQN context-aware and modular in order to handle the large state space <cit.>. Recently, curriculum learning has been applied to Minecraft as well <cit.>. DQN was applied to text adventure games in 2015 <cit.>. Soon after, it was modified to have a language-specific architecture and to use the state-action pair relevance as the Q-value <cit.>. Most of the work on these games has been focused on explicit language modeling. The Golovin Agent and Affordance Based Action Selection both use neural networks to learn language models which provide the actions for the agents to play <cit.>. Recently, in 2018, DQN was used again, paired with an Action Elimination Network <cit.>. Combining extensions from previous algorithms has proven to be a promising direction for deep learning applied to video games, with Atari being the most popular benchmark for RL. Another clear trend, which is apparent in Table <ref>, is the focus on parallelization: distributing the work among multiple CPUs and GPUs. Parallelization is most common with actor-critic methods, such as A2C and A3C, and evolutionary approaches, such as Deep GA <cit.> and Evolution Strategies <cit.>.
Hierarchical reinforcement learning, intrinsic motivation, and transfer learning are promising new directions to explore to master currently unsolved problems in video game playing.
§ OPEN CHALLENGES
While deep learning has shown remarkable results in video game playing, a multitude of important open challenges remain, which we review here. Indeed, looking back at the current state of research from a decade or two in the future, it is likely that we will see the current research as early steps in a broad and important research field. This section is divided into four broad categories (agent model properties, game industry, learning models of games, and computational resources) with different game-playing challenges that remain open for deep learning techniques. We mention a few potential approaches for some of the challenges, while the best way forward for others is currently not clear.
§.§ Agent Model Properties
§.§.§ General Video Game Playing
Being able to solve a single problem does not make you intelligent; nobody would say that Deep Blue or AlphaGo <cit.> possess general intelligence, as they cannot even play Checkers (without re-training), much less make coffee or tie their shoelaces. To learn generally intelligent behavior, you need to train on not just a single task, but many different tasks <cit.>. Video games have been suggested as ideal environments for learning general intelligence, partly because there are so many video games that share common interface and reward conventions <cit.>. Yet, the vast majority of work on deep learning in video games focuses on learning to play a single game or even performing a single task in a single game. While deep RL-based approaches can learn to play a variety of different Atari games, it is still a significant challenge to develop algorithms that can learn to play any kind of game (e.g. Atari games, Doom, and StarCraft). Current approaches still require significant effort to tailor the network architecture and reward function to a specific type of game. Progress on the problem of playing multiple games includes progressive neural networks <cit.>, which allow new games to be learned (without forgetting previously learned ones) and solved quicker by exploiting previously learned features through lateral connections. However, they require a separate network for each task. Elastic weight consolidation <cit.> can learn multiple Atari games sequentially and avoids catastrophic forgetting by protecting the weights that are important for previously learned games from being modified. In PathNet an evolutionary algorithm is used to select which parts of a neural network are used for learning new tasks, demonstrating some transfer learning performance on ALE games <cit.>. In the future it will be important to extend these methods to learn to play multiple games, even if those games are very different; most current approaches focus on different (known) games in the ALE framework. One suitable avenue for this kind of research is the new Learning Track of the GVGAI competition <cit.>. GVGAI has a potentially unlimited set of games, unlike ALE. Recent work in GVGAI showed that model-free deep RL overfitted not just to the individual game, but even to the individual level; this was countered by continuously generating new levels during training <cit.>. It is possible that significant advances on the multi-game problem will come from outside deep learning.
In particular, the recent Tangled Graph representation, a form of genetic programming, has shown promise in this task <cit.>. The recent IMPALA algorithm tries to tackle multi-game learning through massive scaling, with somewhat promising results <cit.>.
§.§.§ Overcoming sparse, delayed, or deceptive rewards
Games such as Montezuma's Revenge that are characterized by sparse rewards still pose a challenge for most deep RL approaches. While recent advances that combine DQN with intrinsic motivation <cit.> or expert demonstrations <cit.> can help, games with sparse rewards remain a challenge for current deep RL methods. There is a long history of research in intrinsically motivated reinforcement learning <cit.> as well as hierarchical reinforcement learning, which might be useful here <cit.>. The Project Malmo environment, based on Minecraft, provides an excellent venue for creating tasks with very sparse rewards where agents need to set their own goals. Derivative-free and gradient-free methods, such as evolution strategies and genetic algorithms, explore the parameter space by sampling locally and are promising for these games, especially when combined with novelty search as in <cit.>.
§.§.§ Learning with multiple agents
Current deep RL approaches are mostly concerned with training a single agent. A few exceptions exist where multiple agents have to cooperate <cit.>, but it remains an open challenge how these can scale to more agents in various situations. In many current video games, such as StarCraft or GTA V, many agents interact with each other and the player. Scaling multi-agent learning in video games to the same level of performance as current single-agent approaches will likely require new methods that can effectively train multiple agents at the same time.
§.§.§ Lifetime adaptation
While NPCs can be trained to play a variety of games well (see Section <ref>), current machine learning techniques still struggle when it comes to agents that should be able to adapt during their lifetime, i.e. while the game is being played. For example, a human player can quickly change their behavior when realizing that they are always ambushed at the same position in an FPS map. However, most current DL techniques would require expensive re-training to adapt to these and other unforeseen situations that they have not encountered during training. The amount of data provided by the real-time behavior of a single human is nowhere near that required by common deep learning methods. This challenge is related to the wider problems of few-shot learning, transfer learning and general video game playing. Solving it will be important for creating more believable and human-like NPCs.
§.§.§ Human-like game playing
Lifetime learning is just one of the capabilities that current NPCs lack in comparison to human players. Most approaches are concerned with creating agents that play a particular game as well as possible, often only taking into account the score reached. However, if humans are expected to play against or cooperate with AI-based bots in video games, other factors come into play. Instead of creating a bot that plays perfectly, in this context it becomes more important that the bot is believable and fun to play against, with the idiosyncrasies we expect from a human player. Human-like game playing is an active area of research, with two different competitions focused on human-like behavior, namely the 2k BotPrize <cit.> and the Turing Test track of the Mario AI Championship <cit.>.
Most entries in these competitions are based on various non-neural-network techniques, while some used evolutionary training of deep neural networks to generate human-like behavior <cit.>.
§.§.§ Adjustable performance levels
Almost all current research on DL for game playing aims at creating agents that can play the game as well as possible, maybe even “beating” it. However, for the purposes of game testing, creating tutorials, and demonstrating games (all cases where human-like game play is valuable), it could be important to be able to create agents with a particular skill level. If your agent plays better than any human player, then it is not a good model of what a human would do in the game. At its most basic, this could entail training an agent that plays the game very well and then finding a way of decreasing its performance. However, it would be more useful to be able to adjust the performance level in a more fine-grained way, so as to, for example, separately control the reaction speed or long-term planning ability of an agent. Even more useful would be the ability to ban certain capacities or playstyles of a trained agent, so as to test whether, for example, a given level could be solved without certain actions or tactics. One path to realizing this is the concept of procedural personas, where the preferences of an agent are encoded as a set of utility weights <cit.>. However, this concept has not been implemented using deep learning, and it is still unclear how to realize this kind of planning depth control in that context.
§.§.§ Dealing with extremely large decision spaces
Whereas the average branching factor hovers around 30 for Chess and 300 for Go, a game like StarCraft has a branching factor that is orders of magnitude larger. While recent advances in evolutionary planning have allowed real-time and long-term planning in games with larger branching factors <cit.>, how to scale deep RL to such levels of complexity is an important open challenge. Learning heuristics with deep learning in these games to enhance search algorithms is also a promising direction.
§.§ Game Industry
§.§.§ Adoption in the game industry
Many of the recent advances in DL have been accelerated because of the increased interest by a variety of different companies such as Facebook, Google/Alphabet, Microsoft and Amazon, which heavily invest in its development. However, the game industry has not embraced these advances to the same extent. This sometimes surprises commentators outside of the game industry, as games are seen as making heavy use of AI techniques. However, the type of AI that is most commonly used in the games industry focuses more on hand-authoring of expressive non-player character (NPC) behaviors rather than on machine learning. An often-cited reason for the lack of adoption of neural networks (and similar methods) within this industry is that such methods are inherently difficult to control, which could result in unwanted NPC behaviors (e.g. an NPC could decide to kill a key actor that is relevant to the story). Additionally, training deep network models requires a certain level of expertise and the pool of experts in this area is still limited. It is important to address these challenges to encourage wide adoption in the game industry. Additionally, while most DL approaches focus exclusively on playing games as well as possible, this goal might not be the most important for the game industry <cit.>.
Here the level of fun or engagement the player experiences while playing is a crucial component. One use of DL for game playing in the game production process is game testing, where artificial agents test that levels are solvable or that the difficulty is appropriate. DL might see its most prominent use in the games industry not for playing games, but for generating game content <cit.> based on training on existing content <cit.>, or for modeling player experience <cit.>. Within the game industry, several of the large development and technology companies, including Electronic Arts, Ubisoft and Unity, have recently started in-house research arms focusing partly on deep learning. It remains to be seen whether these techniques will also be embraced by the development arms of these companies or their customers.
§.§.§ Interactive tools for game development
Related to the previous challenge, there is currently a lack of tools for designers to easily train NPC behaviors. While many open-source tools for training deep networks exist now, most of them require a significant level of expertise. A tool that allows designers to easily specify desired NPC behaviors (and undesired ones), while assuring a certain level of control over the final trained outcomes, would greatly accelerate the uptake of these new methods in the game industry. Learning from human preferences is one promising direction in this area. This approach has been extensively studied in the context of neuroevolution <cit.>, also for video games, allowing non-expert users to breed behaviors for Super Mario <cit.>. Recently a similar preference-based approach was applied to a deep RL method <cit.>, allowing agents to learn Atari games based on a combination of human preference learning and deep RL. Recently, the game company King published results using imitation learning to learn policies for play-testing Candy Crush levels, showing a promising direction for new design tools <cit.>.
§.§.§ Creating new types of video games
DL could potentially offer a way to create completely new games. Most of today's game designs stem from a time when no advanced AI methods were available or the hardware was too limited to utilize them, meaning that games have been designed not to need AI. Designing new games around AI can help to break out of these limitations. While evolutionary algorithms and neuroevolution in particular <cit.> have allowed the creation of completely new types of games, DL based on gradient descent has not been explored in this context. Neuroevolution is a core mechanic in games such as NERO <cit.>, Galactic Arms Race <cit.>, Petalz <cit.> and EvoCommander <cit.>. One challenge with gradient-based optimization is that the structures are often limited to having mathematical smoothness (i.e. differentiability), making it challenging to create interesting and unexpected outputs.
§.§ Learning models of games
Much work on deep learning for game playing takes a model-free end-to-end learning approach, where a neural network is trained to produce actions given state observations as input. However, it is well known that a good and fast forward model makes game playing much easier, as it makes it possible to use planning methods based on tree search or evolution <cit.>.
Therefore, an important open challenge in this field is to develop methods that can learn a forward model of the game, making it possible to reason about its dynamics. The hope is that approaches that learn the rules of the game can generalize better to different game variations and show more robust learning. Promising work in this area includes the approach by Guzdial et al. <cit.>, which learns a simple game engine of Super Mario Bros. from gameplay data. Kansky et al. <cit.> introduce the idea of Schema Networks, which follow an object-oriented approach and are trained to predict future object attributes and rewards based on the current attributes and actions. A trained schema network thus provides a probabilistic model that can be used for planning and is able to perform zero-shot transfer to variations of Breakout similar to those used in training.
§.§ Computational Resources
With more advanced computational models and a larger number of agents in open worlds, computational speed becomes a concern. Methods that aim to make networks computationally more efficient, by either compressing networks <cit.> or pruning networks after training <cit.>, could be useful. Of course, improvements in processing power in general, or for neural networks specifically, will also be important. Currently, it is not feasible to train networks in real-time to adapt to changes in the game or to fit players' playing styles, something which could be useful in the design process.
§ CONCLUSION
This paper reviewed deep learning methods applied to game playing in video games of various genres, including arcade, racing, first-person shooter, open-world, real-time strategy, team sports, physics, and text adventure games. Most of the reviewed work is within end-to-end model-free deep reinforcement learning, where a convolutional neural network learns to play directly from raw pixels by interacting with the game. Recent work demonstrates that derivative-free evolution strategies and genetic algorithms are competitive alternatives. Some of the reviewed work applies supervised learning to imitate behaviors from game logs, while other works are based on methods that learn a model of the environment. For simple games, such as most arcade games, the reviewed methods can achieve above human-level performance, while there are many open challenges in more complex games.
§ ACKNOWLEDGEMENTS
We thank the numerous colleagues who took the time to comment on drafts of this article, including Chen Tessler, Diego Pérez-Liébana, Ethan Caballero, Hal Daumé III, Jonas Busk, Kai Arulkumaran, Malcolm Heywood, Marc G. Bellemare, Marc-Philippe Huget, Mike Preuss, Nando de Freitas, Nicolas A. Barriga, Olivier Delalleau, Peter Stone, Santiago Ontañón, Tambet Matiisen, Yong Fu, and Yuqing Hou.
Niels Justesen is a PhD fellow at the IT University of Copenhagen where he is part of the Center for Computer Games Research and the Robotics, Evolution and Art Lab (REAL). His research is focussed on game-playing algorithms for strategy games, including tree search, evolutionary algorithms and deep learning. Justesen holds a BSc in software development and MSc in games technology, both from the IT University of Copenhagen. He has previously worked at IT Minds.
Philip Bontrager is a PhD Candidate at New York University school of engineering. There he is a member of the Game Innovation Lab. Philip’s research consists of using deep learning and evolution strategies to further procedural generation techniques. Philip has a B.A. in Informatics and Mathematics from Goshen College and a MSc from New York University in Computer Science.
Julian Togelius is an Associate Professor in the Department of Computer Science and Engineering, New York University, USA. He works on all aspects of computational intelligence and games and on selected topics in evolutionary computation and evolutionary reinforcement learning. His current main research directions involve search-based procedural content generation in games, general video game playing, player modeling, and fair and relevant benchmarking of AI through game-based competitions. He is a past chair of the IEEE CIS Technical Committee on Games, and an associate editor of IEEE Transactions on Computational Intelligence and Games. Togelius holds a BA from Lund University, an MSc from the University of Sussex, and a PhD from the University of Essex. He has previously worked at IDSIA in Lugano and at the IT University of Copenhagen.
Sebastian Risi is an Associate Professor at the IT University of Copenhagen where he is part of the Center for Computer Games Research and the Robotics, Evolution and Art Lab (REAL). His interests include neuroevolution, evolutionary robotics and human computation. Risi completed his PhD in computer science from the University of Central Florida. He has won several best paper awards at GECCO, EvoMusArt, IJCNN, and the Continual Learning Workshop at NIPS for his work on adaptive systems, the HyperNEAT algorithm for evolving complex artificial neural networks, and music generation. He is a consultant for the recently formed Uber AI labs and was a co-founder of FinchBeak, a company that focused on casual and educational social games enabled by AI technology.
http://arxiv.org/abs/1708.07902v3
{ "authors": [ "Niels Justesen", "Philip Bontrager", "Julian Togelius", "Sebastian Risi" ], "categories": [ "cs.AI" ], "primary_category": "cs.AI", "published": "20170825220109", "title": "Deep Learning for Video Game Playing" }
Extended Phase Space Analysis of Interacting Dark Energy Models in Loop Quantum Cosmology
Hmar Zonunmawia^1[[email protected]], Wompherdeiki Khyllep^2[[email protected]], Nandan Roy^3,4[[email protected]], Jibitesh Dutta^5,6[[email protected][email protected]] and Nicola Tamanini^7[[email protected]]
December 30, 2023
=====================================================================================================================================
^1 Department of Mathematics, North Eastern Hill University, NEHU Campus, Shillong - 793022 (INDIA)
^2 Department of Mathematics, St. Anthony's College, Shillong, Meghalaya 793001, India
^3 Departamento de Física, DCI, Campus León, Universidad de Guanajuato, 37150 León, Guanajuato, México
^4 Harish-Chandra Research Institute, Chhatnag Road, Jhunsi, Allahabad-211019, India
^5 Mathematics Division, Department of Basic Sciences and Social Sciences, North Eastern Hill University, NEHU Campus, Shillong - 793022 (INDIA)
^6 Inter University Centre for Astronomy and Astrophysics, Pune 411 007, India
^7 Institut de Physique Théorique, CEA-Saclay, CNRS UMR 3681, Université Paris-Saclay, F-91191 Gif-sur-Yvette, France
The present work deals with the dynamical system investigation of interacting dark energy models (quintessence and phantom) in the framework of Loop Quantum Cosmology, taking into account a broad class of self-interacting scalar field potentials. The main reason for studying potentials beyond the exponential type is to obtain additional critical points which can yield more interesting cosmological solutions. The stability of critical points and the asymptotic behavior of the phase space are analyzed using dynamical system tools and numerical techniques. We study two classes of interacting dark energy models and consider two specific potentials as examples: the hyperbolic potential and the inverse power-law potential. We find a rich and interesting phenomenology, including the avoidance of big rip singularities due to loop quantum effects, smooth and non-linear transitions from matter domination to dark energy domination, and finite periods of phantom domination with dynamical crossing of the phantom barrier.
Keywords: Loop quantum cosmology, big rip singularity, dark energy-dark matter interaction, self-interacting potentials.
§ INTRODUCTION
The accelerated expansion of our universe is by now confirmed by several observations, e.g. Cosmic Microwave Background (CMB) anisotropies <cit.>, large scale galaxy surveys <cit.> and type Ia supernovae <cit.>, but the nature of the entity that causes it, named dark energy (DE), is still obscure. The cosmological constant represents the simplest and most popular candidate for DE, but it is troubled by different theoretical issues: in particular the cosmological constant problem and the coincidence problem <cit.>. In order to find alternative explanations for the observed acceleration of the universe, there are in general two different approaches one can follow: modified gravity models and dynamical DE models. Within the modified gravity framework, the underlying gravitational theory determining the cosmological evolution is different from general relativity, while dynamical DE models do not modify the gravitational interaction, but rather introduce a new type of exotic matter component in the universe to describe the accelerated expansion.
In both approaches the cosmic dynamics can often be effectively described by the action of a single scalar field. Several dynamical scalar field models have been proposed and studied: examples are quintessence <cit.>, phantom DE <cit.> and k-essence <cit.>. Scalar field models with self-interacting potentials can provide a useful cosmological evolution which mimics the effect of a cosmological constant at the present epoch. In a late-time cosmological context, a canonical scalar field is commonly known as quintessence and can be motivated by the low energy limit of some well-known high energy theories, e.g. string theory. The quintessence equation of state (EoS) w can take values in the range -1 ≤ w ≤ 1, implying accelerated expansion for w < -1/3. A canonical scalar field however cannot produce the so-called phantom regime (w < -1), which is slightly favoured by astronomical observations <cit.>. For this reason, another well-known scalar field model of DE has been proposed: the phantom field with negative kinetic energy. Phantom fields are plagued by instabilities at the quantum level <cit.>, but if considered from an effective phenomenological perspective they can be used as interesting cosmological solutions, which may better fit the observational data. In standard Einstein cosmology (EC), phantom DE models usually lead to a cosmic end described by a future big rip singularity <cit.>. This behaviour is however expected to be corrected by loop quantum effects. Loop Quantum Gravity (LQG) <cit.> is one of the well-known approaches to a quantum theory of gravity <cit.>. Its main aim consists in quantizing gravity with a non-perturbative and background independent method <cit.>. The application of LQG in the context of cosmology is called Loop Quantum Cosmology (LQC) <cit.>. LQC modifies the standard Friedmann equation by adding a term depending on a fixed energy density imposed by quantum corrections <cit.>, which essentially encodes the discrete quantum geometric nature of spacetime <cit.>. This modification of the standard Friedmann equation can be used to avoid any past or future singularity <cit.>. In fact, loop corrections become important when the total energy density of the universe approaches the critical high energy value predicted by the theory (cf. Eq. (<ref>)), in which case a cosmic bounce might occur and the big rip and other singularities will never be reached <cit.>. In both EC and LQC, when DE is modelled as a scalar field, the field is usually assumed to interact only with itself. However, there is no fundamental argument to ignore a possible coupling between DE and dark matter (DM), and DE models where a scalar field interacts non-gravitationally with the matter sector have been proposed as well; see e.g. <cit.>. Interacting DE models are known to produce late time accelerated scaling attractors which can be used to alleviate the cosmic coincidence problem <cit.>. However, since there is no experimental evidence for a dark sector interaction and the fundamental nature of both DE and DM is still obscure, any coupling between the two dark components is phenomenologically constructed at the level of the equations of motion, though some recent attempts to build an effective interaction at the Lagrangian level have been advanced <cit.>. In this work, using dynamical system tools, we investigate the dynamics of interacting scalar field dark energy (quintessence and phantom) in the framework of LQC for a broad class of self-interacting potentials.
Similar studies, restricted to the exponential potential case, have already been performed <cit.>, and the analysis of different scalar field potentials allows us to better understand the complete cosmological potential of these DE models, at least at the background level. This type of generalization has been widely studied in different cosmological frameworks: e.g. standard quintessence models <cit.>, braneworld theories <cit.>, k-essence <cit.>, chameleon theories <cit.>, scalar-fluid theories <cit.> and (non-interacting) LQC <cit.>. In these investigations the dimension of the resulting dynamical system increases by one compared to the exponential potential case, making the analysis slightly more complicated. The analysis of scalar field potentials beyond the exponential one helps to better relate these phenomenological models with more fundamental high-energy theories. Moreover, from a mathematical point of view, these generalizations usually yield additional non-hyperbolic points where linear stability theory fails and the stability properties can only be determined analytically using center manifold theory or Lyapunov functions <cit.>. One can alternatively use numerical methods, for example analysing the behaviour of perturbed trajectories near the non-hyperbolic critical point <cit.>, while for normally hyperbolic points, namely a non-isolated set of critical points with one vanishing eigenvalue, the stability is determined by the signs of the remaining non-vanishing eigenvalues <cit.>. For a better understanding of the cosmological dynamics of models presenting non-hyperbolic critical points, in what follows we consider two concrete potentials as examples: the hyperbolic potential V=V_0 cosh^-μ(λϕ) and the inverse power-law potential V=M^4+n/ϕ^n.

In our analysis we consider two interacting models based on two different coupling functions for both the quintessence and phantom fields. The first one arises in string theory and has already been studied in the case of standard EC (for both the quintessence and phantom cases) <cit.>. The same interaction has also been studied in the LQC framework for a phantom field with exponential potential <cit.>, showing that in such a case the big rip singularity can be avoided. The second kind of interaction was studied recently for quintessence in the EC context <cit.>, showing that the coincidence problem can be alleviated. It was also investigated in braneworld theories with both quintessence and phantom fields as DE <cit.>. In both interactions, depending on the choice of the scalar field potential, we find that there is a late time attractor with a contribution from loop quantum gravity.

The paper is organized as follows. Section <ref> deals with the basic equations of interacting dark energy in LQC and shows how they can be recast into an autonomous system of equations. In Sections <ref> and <ref> we consider two interacting dark energy models and investigate the corresponding cosmological evolution using dynamical system tools. In each of these sections we present two subsections, one for quintessence and another for phantom DE, wherein we consider two scalar field potentials as examples.
The last two sections <ref> and <ref> are devoted to discussing the cosmological implications and drawing conclusions, respectively.

§ DYNAMICS OF INTERACTING SCALAR FIELD DARK ENERGY IN LOOP QUANTUM COSMOLOGY

In a flat universe the effective modified Friedmann equation in the framework of LQC is given by <cit.>

3H^2 = 8 π G/c^4 ρ (1 - ρ/ρ_c),

where H is the Hubble parameter, ρ = ρ_m + ρ_ϕ is the total energy density, and ρ_ϕ, ρ_m are the energy densities of dark energy and dark matter respectively. The constant

ρ_c = √(3)/(16 π^2 γ^3 G^2 ħ)

is the critical loop quantum density, where γ is the dimensionless Barbero-Immirzi parameter <cit.>. To simplify the notation, in what follows we shall use units where 8 π G ≡ c ≡ 1. Although the single energy components ρ_m and ρ_ϕ may not be conserved separately, the total energy density is conserved:

ρ̇ + 3 H (ρ + p) = 0,

where p is the total pressure and an over-dot denotes differentiation with respect to the time t. One can also write the modified Raychaudhuri equation of the system as

Ḣ = - 1/2 (ρ + p) (1 - 2ρ/ρ_c).

We assume that DE is described by either a quintessence or a phantom scalar field. The Lagrangian for both these scalar fields can be written in general as

ℒ = 1/2 ϵ ∂^μϕ ∂_μϕ - V(ϕ),

where ϵ = 1 corresponds to quintessence and ϵ = -1 corresponds to the phantom field. Here V(ϕ) is the self-interacting potential of the scalar field ϕ. The energy density and pressure of the scalar field are respectively given by

ρ_ϕ = 1/2 ϵ ϕ̇^2 + V(ϕ)   and   p_ϕ = 1/2 ϵ ϕ̇^2 - V(ϕ).

In our investigation we assume that the scalar field interacts with dark matter, so that the energy conservation equations take the form

ρ̇_ϕ + 3H(1+w_ϕ)ρ_ϕ = -Q,
ρ̇_m + 3Hρ_m = +Q,

where Q is the interaction term and w_ϕ = p_ϕ/ρ_ϕ is the EoS of DE. If Q is positive, the energy transfer takes place from quintessence/phantom DE to DM, whereas for a negative Q energy flows from DM to DE. From Eqs. (<ref>) and (<ref>) the evolution equation of the scalar field can be expressed as

ϕ̈ = -3Hϕ̇ - (1/ϵ) dV/dϕ - Q/(ϵ ϕ̇),

while from Eq. (<ref>) the effective modified Raychaudhuri equation can be rewritten as

Ḣ = -1/2 (ρ_m + ϵ ϕ̇^2)(1 - 2ρ/ρ_c).

In order to write our system of equations as an autonomous system of ordinary differential equations, we introduce the following set of dimensionless phase space variables

x ≡ ϕ̇/(√(6) H),   y ≡ √(V(ϕ))/(√(3) H),   z ≡ ρ/ρ_c   and   s ≡ -(1/V) dV/dϕ.

Using the new dimensionless variables (<ref>), the effective modified Friedmann equation (<ref>) can be rewritten in dimensionless form as

1 = (ρ_m/(3H^2) + ϵ x^2 + y^2)(1-z).

This provides a constraint that can be used to eliminate ρ_m in favour of the dimensionless variables (<ref>) in all equations that follow. Using Eqs. (<ref>)–(<ref>), we obtain the following autonomous system of differential equations:

x' = -3x + (1/ϵ)√(3/2) s y^2 + x[3/2(1/(1-z) - ϵ x^2 - y^2) + 3ϵ x^2](1-2z) - Q/(√(6) ϵ H^2 ϕ̇),
y' = -√(3/2) s x y + y[3/2(1/(1-z) - ϵ x^2 - y^2) + 3ϵ x^2](1-2z),
z' = -3z - 3z(1-z)(ϵ x^2 - y^2),
s' = -√(6) x f(s),

where a prime denotes differentiation with respect to N = ln a (a being the usual scale factor) and we defined

f(s) = s^2(Γ(s) - 1),   Γ = V (d^2V/dϕ^2) (dV/dϕ)^-2.

We consider a class of potentials for which Γ is a function of s, with Γ=1 corresponding to the exponential potential. In general Γ may not be a function of s, in which case one has to take into account higher derivatives of the scalar field potential <cit.> or choose new dimensionless variables <cit.>, in both cases increasing the dimension of the dynamical system.
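Since the whole analysis hinges on the pair (s, Γ), it can be useful to verify the function f(s) symbolically for a given potential. The following sketch is not part of the original analysis: the symbol names and the positivity assumptions on the parameters are ours, and are made only to help the simplification. It checks that the exponential potential gives f = 0 (i.e. Γ = 1) and that the hyperbolic potential gives f(s) = s^2/μ - μλ^2, as used later in the paper.

```python
# Sketch (illustrative, not from the paper): symbolic check of
#   s = -V'/V,  Gamma = V V'' / (V')^2,  f(s) = s^2 * (Gamma - 1)
# for a given scalar field potential V(phi).
import sympy as sp

phi, V0, lam, mu = sp.symbols('phi V_0 lambda mu', positive=True)

def f_of_s(V):
    """Return (s, f) for a potential V(phi)."""
    s = -sp.diff(V, phi) / V
    Gamma = V * sp.diff(V, phi, 2) / sp.diff(V, phi)**2
    return sp.simplify(s), sp.simplify(s**2 * (Gamma - 1))

# Exponential potential: Gamma = 1, so f vanishes identically.
s_exp, f_exp = f_of_s(V0 * sp.exp(-lam * phi))
print(f_exp)                                        # expected: 0

# Hyperbolic potential V = V0 * cosh(lam*phi)**(-mu):
# expect f = s**2/mu - mu*lam**2, so that f(s_*) = 0 at s_* = ±mu*lam.
s_h, f_h = f_of_s(V0 * sp.cosh(lam * phi)**(-mu))
print(sp.simplify(f_h - (s_h**2 / mu - mu * lam**2)))   # expected: 0
```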
In terms of the dimensionless variables (<ref>), the scalar field relative energy density, the relative energy density of dark matter, the relative energy contribution due to loop quantum corrections, the total cosmic EoS and the deceleration parameter are given by

Ω_ϕ ≡ ρ_ϕ/(3H^2) = ϵ x^2 + y^2,
Ω_m ≡ ρ_m/(3H^2) = 1/(1-z) - ϵ x^2 - y^2,
Ω_c ≡ 1 - Ω_m - Ω_ϕ = z/(z-1),
w_tot ≡ p_ϕ/(ρ_ϕ + ρ_m) = (1-z)(ϵ x^2 - y^2),
q ≡ -1 - Ḣ/H^2 = -1 + [3/2(1/(1-z) - ϵ x^2 - y^2) + 3ϵ x^2](1-2z).

The dynamical systems analysis yields the asymptotic behavior of the system, i.e., the beginning and the ultimate fate of the universe. For the numerical calculations that follow, we choose initial conditions in such a way that the final state of the universe is in agreement with the present observational data: Ω_m = 0.31, Ω_ϕ = 0.69 and q = -0.62 <cit.>.

In order to close the autonomous system (<ref>)–(<ref>), we need to consider a particular form of the interaction Q for which the last term in Eq. (<ref>) can be expressed as a function of the variables (<ref>). In the next two sections, we choose the following two particular interaction terms:

(I) Q = α ρ_m ϕ̇,         (II) Q = β ρ̇_ϕ,

with α and β dimensionless constants. The first of these couplings arises naturally from scalar-tensor theories in the Einstein frame <cit.> and is well motivated by string theory. The second one is instead purely phenomenological <cit.> and it will be used to expose some interesting properties of a more complex interacting model. In what follows we keep the analysis as general as possible, but in order to study the stability of some interesting critical points in more detail we consider two particular forms of the scalar field potential as examples: V=V_0 cosh^-μ(λϕ) and V=M^4+n/ϕ^n. We note also that in non-interacting DE models the energy density of DM is always taken to be non-negative, while in models of interacting DE the condition Ω_m<0 can be allowed (see e.g. <cit.>). In particular this means that the resulting phase spaces are generally not compact. This implies that critical points at infinity should be analyzed by compactifying the phase space using the Poincaré compactification technique; see e.g. <cit.>. Nevertheless in what follows we only determine the dynamics near the finite critical points, which is enough from a phenomenological point of view since our aim is to find physically viable solutions, namely trajectories connecting DM to DE domination. Once we find such solutions, then given the right initial conditions the observed evolution of our universe can be described by the DE model under consideration (at least at the background level).

§ INTERACTING MODEL I: Q = α ρ_m ϕ̇

This type of interaction term arises naturally in scalar-tensor theory <cit.>, where the energy terms are separately conserved in the Jordan frame but become coupled in the Einstein frame. The dynamical system investigation of quintessence with this interaction and an exponential potential has been performed in both standard EC <cit.> and LQC <cit.>. In this section, we provide the dynamical analysis of this DE model with a general scalar field potential in the LQC framework. Here we only present the properties of the phase space and characterize the full cosmological dynamics of this model; discussions of the cosmological implications are postponed to Section <ref>.

As noted above, the dark sector interaction modifies only the x' equation in the system (<ref>)-(<ref>), while the other equations remain unaffected.
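Before specializing to interaction (I), the observable quantities defined above are easy to collect in code. The following is a minimal sketch, assuming only the expressions for Ω_ϕ, Ω_m, Ω_c, w_tot and q given above; the numerical values in the example call are illustrative choices, not values taken from the paper.

```python
# Sketch: cosmological parameters from the dimensionless variables (x, y, z),
# following the expressions above (quintessence: eps = +1, phantom: eps = -1).
def cosmo_params(x, y, z, eps=1):
    Omega_phi = eps * x**2 + y**2
    Omega_m   = 1.0 / (1.0 - z) - Omega_phi
    Omega_c   = z / (z - 1.0)                # loop quantum contribution
    w_tot     = (1.0 - z) * (eps * x**2 - y**2)
    # note: 3/2 * Omega_m + 3*eps*x**2 is the bracket in the q expression
    q         = -1.0 + (1.5 * Omega_m + 3.0 * eps * x**2) * (1.0 - 2.0 * z)
    return Omega_phi, Omega_m, Omega_c, w_tot, q

# Example: a present-day-like state with negligible loop corrections (z ~ 0).
# The (x, y) values are illustrative, chosen so that Omega_phi is about 0.69.
print(cosmo_params(x=0.05, y=0.83, z=0.0))
```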
For this particular interaction the system of equations (<ref>)-(<ref>) becomes

x' = -3x + (1/ϵ)√(3/2) s y^2 - (√(6)/(2ϵ)) α (1/(1-z) - ϵ x^2 - y^2) + x[3/2(1/(1-z) - ϵ x^2 - y^2) + 3ϵ x^2](1-2z),
y' = -√(3/2) s x y + y[3/2(1/(1-z) - ϵ x^2 - y^2) + 3ϵ x^2](1-2z),
z' = -3z - 3z(1-z)(ϵ x^2 - y^2),
s' = -√(6) x f(s).

It is easy to check that the above dynamical system (<ref>)-(<ref>) is invariant under the transformation y → -y, meaning that we need to analyze the phase space only for positive values of y. The critical points of the system (<ref>)-(<ref>) and the relevant parameters are listed in Table <ref>, and the eigenvalues of the corresponding Jacobian matrix are listed in Table <ref>. In these tables, and throughout the rest of the paper, s_* is a solution of the equation f(s)=0 and df(s_*) denotes the value of the derivative of f at s=s_*. From Tables <ref> and <ref> we note that all critical points depend on the specific form of the potential: C_1, C_2, C_3, C_4, C_5 all depend on s_* and df(s_*), whereas C_6 depends on the value of f(0) for its stability. We now discuss the stability of the critical points listed in Table <ref> for quintessence and phantom dark energy separately.

§.§ Quintessence dark energy (ϵ=1)

We briefly discuss the properties of the critical points listed in Table <ref> with ϵ=1; a numerical integration sketch of the corresponding system is given after the list. The stability regions in the (s_*,α) parameter space for each critical point are shown in Fig. <ref>.

* Point C_1: This point corresponds to a decelerated, stiff matter dominated universe (w_tot=1). It is stable whenever α<-√(6)/2, s_*>√(6), df(s_*)>0; otherwise it is saddle.

* Point C_2: This point corresponds to a decelerated, scaling solution. It is stable whenever α^2<3, α s_*<-3, α df(s_*)<0; otherwise it is saddle.

* Point C_3: This point corresponds to an accelerated solution (q=-1) with loop quantum contribution and negative DM energy density (Ω_m<0). It corresponds to a late time attractor (stable spiral) when α s_*>0 and α df(s_*)>0.

* Point C_4: Point C_4 exists when s_*^2<6. It corresponds to an accelerated solution for s_*^2<2 and describes a scalar field dominated universe. It is stable whenever s_*(α+s_*)<3 and s_* df(s_*)<0; otherwise it is saddle.

* Point C_5: Point C_5 corresponds to a late time accelerated, scaling solution for some values of the parameters α and s_*. This is confirmed numerically in Fig. <ref> by plotting the region of stability and the region of acceleration for C_5 in the (s_*,α) parameter space.

* Set of points C_6: The set of non-isolated critical points C_6 corresponds to the case s=0 (i.e. the potential is effectively constant). It is a non-hyperbolic set with one vanishing eigenvalue if f(0) ≠ 0. Such non-isolated points form a normally hyperbolic set <cit.>. The center manifold of a set of non-isolated critical points is determined by the direction of the eigenvectors corresponding to the vanishing eigenvalues, while the signs of the non-vanishing eigenvalues determine its stability. For points C_6 the real parts of the non-vanishing eigenvalues are negative only if f(0)>0, implying that this set of points corresponds to a late time stable attractor only if f(0)>0. It is a stable node if 0<f(0)<3/4(1-z), while it is a stable spiral if f(0)>3/4(1-z). In any other case it is saddle. Further investigation is required when f(0)=0, which will be carried out for the examples below once the scalar field potential is specified.
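The following sketch integrates the closed system above for the quintessence case (ϵ=1) with the hyperbolic potential, for which f(s) = s^2/μ - μλ^2 as derived in the example below. Parameter values, initial data and tolerances are illustrative choices, not the ones used to produce the figures in the paper; note also that the equations are singular as z → 1, so initial data should stay away from the bounce.

```python
# Sketch: numerical integration of the model I quintessence system (eps = +1)
# with Q = alpha * rho_m * phidot and the hyperbolic potential.
import numpy as np
from scipy.integrate import solve_ivp

alpha, mu, lam = -2.8, 1.0, -2.0          # illustrative parameters

def f(s):
    # f(s) for V = V0 * cosh(lam*phi)**(-mu)
    return s**2 / mu - mu * lam**2

def rhs(N, u):
    x, y, z, s = u
    Om = 1.0 / (1.0 - z) - x**2 - y**2    # Omega_m for eps = +1
    B  = (1.5 * Om + 3.0 * x**2) * (1.0 - 2.0 * z)
    dx = -3.0*x + np.sqrt(1.5)*s*y**2 - np.sqrt(6.0)/2.0*alpha*Om + x*B
    dy = -np.sqrt(1.5)*s*x*y + y*B
    dz = -3.0*z - 3.0*z*(1.0 - z)*(x**2 - y**2)
    ds = -np.sqrt(6.0)*x*f(s)
    return [dx, dy, dz, ds]

u0  = [0.01, 0.05, 0.9, 1.5]              # illustrative initial data, z < 1
sol = solve_ivp(rhs, (0.0, 15.0), u0, rtol=1e-8, atol=1e-10)
print("final (x, y, z, s):", sol.y[:, -1])
```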
This additional set of points is interesting from a phenomenological perspective as it shows the effect of loop quantum corrections in explaining the late time acceleration of the universe. The properties of points C_1, C_2, C_4, C_5 are the same as in standard EC <cit.>. Point C_3 shows the effect of loop quantum corrections but implies a negative DM energy density. The additional set of points C_6 is interesting as it corresponds to a scalar field dominated solution where the effects of loop quantum gravity corrections can be used to explain the late time accelerated universe. From the above analysis, we note that the critical points C_1, C_2, C_4, C_5 cannot be late time attractors simultaneously. However there is a possibility of multiple late time attractors: points C_3 and C_4 on one side, and C_3 and C_5 on the other. This kind of situation is interesting from both a phenomenological and a mathematical point of view, and it usually leads to a more complex cosmological dynamics. Depending on the choices of parameters and initial conditions, the interacting DE model considered here can successfully describe the late time evolution of the universe. Since the existence and stability properties of all critical points depend heavily on s_*, df(s_*) and f(0), the full dynamics of the phase space can be derived only once a particular scalar field potential has been chosen. For this reason in what follows we consider two particular potentials.

§.§.§ Example 1: V=V_0 cosh^-μ(λϕ)

Here we consider the potential V=V_0 cosh^-μ(λϕ) (where V_0 and λ are two constants of suitable dimensions, while μ is a dimensionless parameter). This potential was proposed to explain the exit of the universe from a scaling regime to a de Sitter-like accelerated attractor through a mechanism of spontaneous symmetry breaking <cit.>. The dynamical behaviour of standard EC with this scalar field potential has been studied in <cit.>. For this potential we find f(s) = s^2/μ - μλ^2, so that

s_* = ±μλ,   df(s_*) = 2s_*/μ = ±2λ.

Each of the critical points C_1, C_2, C_3, C_4, C_5 appears twice in the phase space, according to the two solutions s_*=±μλ. We denote by C_i^+ the critical point associated with s_*=μλ and by C_i^- the one associated with s_*=-μλ (i=1, 2, 3, 4, 5). The stability regions of points C_i^+ and C_i^- (i=1,2,3,4,5) for μ=1 in the (λ,α) parameter space are given in Figs. <ref> and <ref> respectively. Note the symmetry between the stability regions of points C_i^+ and points C_i^-. This is due to the invariance of the hyperbolic potential under the transformation λ ↦ -λ. The properties of the critical points for the hyperbolic potential are as follows:

* Points C_1^+, C_1^-: These points correspond to decelerated, stiff matter dominated universes (w_tot=1). Point C_1^+ is stable whenever α<-√(6)/2, μλ>√(6), λ>0; otherwise it is saddle. Point C_1^- is stable whenever α<-√(6)/2, μλ<-√(6), λ<0; otherwise it is saddle.

* Points C_2^+, C_2^-: These points correspond to decelerated, scaling solutions. Point C_2^+ is stable whenever α^2<3, αμλ<-3, αλ<0; otherwise it is saddle. Point C_2^- is stable whenever α^2<3, αμλ>3, αλ>0; otherwise it is saddle.

* Points C_3^+, C_3^-: These points correspond to an accelerated solution (q=-1) with negative DM energy density. Point C_3^+ corresponds to a late time attractor (stable spiral) for μ>0 and αλ>0. Point C_3^- corresponds to a late time attractor for μ>0 and αλ<0.

* Points C_4^+, C_4^-: These points exist when μ^2λ^2<6.
They correspond to accelerated, scalar field dominated universes. Point C_4^+ is stable whenever μλ(α+μλ)<3, μ<0; otherwise it is saddle. Point C_4^- is stable whenever μλ(α-μλ)>-3, μ<0; otherwise it is saddle.

* Points C_5^+, C_5^-: These points are cosmologically interesting as they correspond to late time accelerated scaling solutions for some values of the parameters α, μ and λ (cf. Fig. <ref>). For example, if we take α=-2.8, λ=-2, μ=1, we obtain λ_1=-1.25, λ_2=-1.18-3.38 i, λ_3=-1.18+3.38 i, λ_4=-2.5, with q=-0.37, Ω_m=0.28, Ω_ϕ=0.71 for point C_5^+, while point C_5^- is saddle in nature. On the other hand, for α=-2.8, λ=2, μ=1, point C_5^- is stable but point C_5^+ is saddle. This implies that, choosing the appropriate combination of parameters, critical points C_5^+ and C_5^- describe a late time accelerated scaling attractor (stable spiral) with DM and DE energy density values in agreement with present observations.

* Set of points C_6: This normally hyperbolic set of critical points corresponds to a late time attractor only if μ<0.

Fig. <ref> shows the contribution of loop quantum gravity corrections at early times together with the evolution of the relevant cosmological parameters (<ref>)–(<ref>). One particular trajectory is considered, which evolves from a matter phase dominated by interacting energy (point C_2) and settles in the DE dominated point C_4. This model can thus describe the late time DE dominated observed phase of our universe; see Sec. <ref> for further discussions.

§.§.§ Example 2: V=M^4+n/ϕ^n

In this example we consider the inverse power-law potential V(ϕ)=M^4+n/ϕ^n (where M is a mass scale, while n is a dimensionless parameter), which can lead to tracking behavior in EC <cit.>. For this potential, the function f(s) is given by

f(s) = s^2/n,

meaning that

s_* = 0,   df(s_*) = 2s_*/n = 0.

This implies that in this case there is only one copy of each of the points C_1, C_2, C_3, C_4, C_5 listed in Table <ref>. The properties of the critical points are as follows:

* Point C_1: This describes a decelerated, stiff matter dominated point (w_tot=1) and is always saddle.

* Point C_2: This point corresponds to a decelerated, scaling solution. It is always saddle, in contrast with the hyperbolic potential for which it can be stable.

* Point C_3: This point corresponds to an accelerated solution with negative DM energy density. It is a non-hyperbolic point, for which linear stability fails to determine its stability. In order to determine its stability we produced numerical stream plots of the dynamical flow around its neighbourhood, projected onto the (x,s) slice, as shown for example in Fig. <ref>. For several interesting combinations of the model parameters, we have checked numerically that this point cannot be stable, as some trajectories are always repelled from it.

* Point C_4: Point C_4 reduces to the point (0,1,0,0), where the scalar field potential energy dominates. It is a non-hyperbolic point, so linear stability fails to determine its stability. Usually the stability of this type of critical point can be determined analytically by employing center manifold theory; otherwise one can use numerical methods, see e.g. <cit.> for some applications in the recent literature. Numerically one can plot the projection of trajectories around the critical point separately on the x, y, z and s axes (see Fig.
<ref> for example). We have checked numerically that nearby trajectories asymptotically approach the coordinates corresponding to point C_4 only for n>0, while the parameter α can be arbitrary. Hence, depending on the choice of the model parameters, this point can correspond to a late time attractor.

* Point C_5: This point corresponds to an accelerated solution (q=-1) with negative DM energy density Ω_m<0. It is always a saddle since the eigenvalues λ_2 and λ_3 are always of opposite sign.

* Set of points C_6: For the inverse power-law potential, this set of non-isolated critical points is non-hyperbolic but not normally hyperbolic, since it presents two vanishing eigenvalues. We use numerical techniques to check stability by plotting the projection of nearby trajectories separately on the x, y, z and s axes. We find that these trajectories approach the set of points C_6 as N→∞ for n>0. In fact the effective EoS parameter is always -1 for any point belonging to this set.

§.§ Phantom dark energy (ϵ=-1)

We now turn our attention to phantom DE with ϵ=-1. The regions of stability in the (s_*,α) parameter space for the critical points listed in Table <ref> are given in Fig. <ref>. Their properties are as follows:

* Point C_1: Point C_1 does not exist for ϵ=-1.

* Point C_2: This point corresponds to an accelerated solution for α^2>1/2, with negative DE energy density parameter. It is stable when α^2<3/2, 2α(α+s_*)>3 and α df(s_*)<0.

* Point C_3: This point corresponds to an accelerated solution with negative DE energy density parameter. It is a stable node if 3/2<α^2<2, α s_*>0, α df(s_*)>0; it is a stable spiral if α^2>2, α s_*>0, α df(s_*)>0; otherwise it is saddle.

* Point C_4: Point C_4 exists for any value of s_*. It corresponds to an accelerated scalar field dominated universe and it is saddle. Although in LQC this point cannot be a late time attractor, it is found to be stable in EC, where it corresponds to a future big rip singularity <cit.>. This behavior was first noticed in the case of an exponential potential <cit.> and constitutes an interesting example of how cosmological singularities can be avoided by LQG effects.

* Point C_5: Point C_5 corresponds to an accelerated, scaling solution for some values of the parameters α and s_*. However this point is always saddle, as the eigenvalues λ_2 and λ_3 have opposite signs.

* Set of points C_6: As in the case of the quintessence field, this non-isolated set of critical points is a normally hyperbolic set. It corresponds to a late time attractor only if f(0)<0. It is a stable node if -3/4(1-z)<f(0)<0, a stable spiral if f(0)<-3/4(1-z), and saddle otherwise. Further investigation is required when f(0)=0; we postpone this analysis until a particular scalar field potential is chosen. Again this set of points is interesting as it shows the effects of loop quantum corrections in determining the late time accelerated phase of the universe. Interestingly, this late time attractor is not present in the case of the exponential potential <cit.>.

From the above general analysis, we note from Fig. <ref> that points C_2, C_3 cannot be late time attractors simultaneously. Depending on the choice of the scalar field potential and the initial conditions, we thus find that the universe evolves towards the accelerated, scalar field dominated set of critical points C_6 or towards the negative DE density critical points C_2, C_3.
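For the isolated (hyperbolic) critical points, the classifications quoted above can be reproduced numerically from the eigenvalues of the Jacobian. A minimal sketch follows, written for the phantom version (ϵ=-1) of the model I equations with the hyperbolic potential; the evaluation point and parameter values are placeholders standing in for the coordinates listed in Table <ref>, not actual critical point data from the paper.

```python
# Sketch: linear stability via Jacobian eigenvalues for model I (eps = -1).
# Builds the Jacobian of (x', y', z', s') symbolically, then evaluates it
# at a candidate critical point and reads off the signs of the real parts.
import numpy as np
import sympy as sp

x, y, z, s, alpha, eps = sp.symbols('x y z s alpha epsilon')
mu, lam = sp.symbols('mu lambda')

f  = s**2/mu - mu*lam**2                        # hyperbolic potential
Om = 1/(1 - z) - eps*x**2 - y**2                # Omega_m
B  = (sp.Rational(3, 2)*Om + 3*eps*x**2)*(1 - 2*z)

F = sp.Matrix([
    -3*x + sp.sqrt(sp.Rational(3, 2))*s*y**2/eps
         - sp.sqrt(6)/(2*eps)*alpha*Om + x*B,
    -sp.sqrt(sp.Rational(3, 2))*s*x*y + y*B,
    -3*z - 3*z*(1 - z)*(eps*x**2 - y**2),
    -sp.sqrt(6)*x*f,
])
J = F.jacobian([x, y, z, s])

# Placeholder point and parameters (illustrative only): in practice one
# substitutes the critical point coordinates from Table 1 here.
subs = {x: 0.1, y: 0.9, z: 0.0, s: -2.0,
        alpha: -2.8, mu: 1, lam: -2, eps: -1}
Jn = np.array(J.subs(subs).evalf(), dtype=float)
print(np.linalg.eigvals(Jn))    # stable iff all real parts are negative
```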
Again, since the existence and stability properties of the critical points depend heavily on s_*, df(s_*) and f(0), in what follows we analyze their properties by selecting two specific potentials.

§.§.§ Example 1: V=V_0 cosh^-μ(λϕ)

Here we report the properties of the critical points in Table <ref> for ϵ=-1 and the potential V=V_0 cosh^-μ(λϕ). Again there are two copies of each of the critical points C_2, C_3, C_4, C_5, one for each of the two solutions s_*=±μλ. As in the case of quintessence, each point C_i with s_*=μλ is denoted by C_i^+, while points with s_*=-μλ are denoted by C_i^- (i=2,3,4,5). The stability regions of C_2^+, C_3^+ are given in Fig. <ref> and those of C_2^-, C_3^- are given in Fig. <ref>. Again the symmetry of these two figures is due to the invariance of the hyperbolic potential under the transformation λ ↦ -λ.

* Points C_2^+, C_2^-: These points correspond to an accelerated solution for α^2>1/2, with negative DE energy density parameter. Point C_2^+ is stable when α^2<3/2, 2α(α+λμ)>3 and αλ<0. Point C_2^- is stable when α^2<3/2, 2α(α-λμ)>3 and αλ>0.

* Points C_3^+, C_3^-: Point C_3^+ is a stable node if 3/2<α^2<2, μ>0, αλ>0; it is a stable spiral if α^2>2, μ>0, αλ>0; otherwise it is saddle. Point C_3^- is a stable node if 3/2<α^2<2, μ>0, αλ<0; it is a stable spiral if α^2>2, μ>0, αλ<0; otherwise it is saddle.

* Points C_4^+, C_4^-: They correspond to an accelerated, scalar field dominated universe. However, both of them are saddle in nature for any choice of the model parameters.

* Points C_5^+, C_5^-: These accelerated scaling solutions are saddle in nature for any choice of the model parameters.

* Set of points C_6: This non-isolated, normally hyperbolic set of critical points corresponds to a late time attractor only if μ>0.

From the analysis above, we see that depending on the initial conditions the universe can evolve towards the accelerated, scalar field dominated set of critical points C_6. This can be seen in Fig. <ref>, which shows the evolution of the cosmological parameters towards an accelerating phase with q=-1 (set C_6). Another interesting observation is that a transient phantom epoch described by points C_4^+, C_4^- (q=-1.5) is possible. Although the universe undergoes a super-accelerated expansion (Ḣ>0) at these points, the future big rip singularity is avoided by loop quantum effects, which turn these points from attractors into saddles.

§.§.§ Example 2: V=M^4+n/ϕ^n

We recall that for this potential we have s_*=0 and df(s_*)=0, implying in particular that there is only one copy of each of the points C_2, C_3, C_4, C_5. The properties of the critical points are as follows:

* Point C_2: Point C_2 describes a solution with negative DE energy density. It is always saddle.

* Point C_3: Point C_3 corresponds to an accelerated solution with negative DE density. We have checked numerically that it is never stable for any choice of parameters. An example of the flow around this point projected on the (x,s)-plane is given in Fig. <ref>.

* Point C_4: Point C_4 exists for any values of α, μ and λ. It is a non-hyperbolic critical point and, as confirmed numerically, it is not stable. An example of the dynamical flow around this point, projected on the (x,s)-plane, is given in Fig. <ref> for some values of the model parameters.

* Point C_5: Point C_5 corresponds to an accelerated, scaling solution. This point is always saddle.
* Set of points C_6: Since this set contains point C_4 and trajectories in the (x,s) subspace do not approach the coordinates of point C_4, this set cannot be a late time attractor, unlike in the hyperbolic potential case. This has been checked numerically for different sets of model parameters of phenomenological interest.

From this analysis, we see that there is no finite late time attractor. This interacting phantom model thus cannot explain the late time behavior of our universe, although the future big rip singularities appearing in EC can be avoided.

§ INTERACTING MODEL II: Q = β ρ̇_ϕ

The dynamical investigation of this interaction term for quintessence dark energy in the framework of standard EC has recently been performed in <cit.>, where it was found that this model can alleviate the coincidence problem. A generalised form of this interaction has been used in <cit.> to build a coupled DE model where the sign of the dark interaction changes during the cosmological history. The specific interaction Q=βρ̇_ϕ is also motivated from the dimensional point of view, since Q has the dimension of a time rate of energy density <cit.>. Again, in this section we only present the dynamical system analysis of the interacting model under study, leaving the discussion of the cosmological implications to Section <ref>.

For this interaction the system (<ref>)-(<ref>) becomes

x' = -3x/(1+β) + (1/ϵ)√(3/2) s y^2 + x[3/2(1/(1-z) - ϵ x^2 - y^2) + 3ϵ x^2](1-2z),
y' = -√(3/2) s x y + y[3/2(1/(1-z) - ϵ x^2 - y^2) + 3ϵ x^2](1-2z),
z' = -3z - 3z(1-z)(ϵ x^2 - y^2),
s' = -√(6) x f(s).

It can be seen that the system (<ref>)-(<ref>) is symmetric under the transformation y → -y. It is also symmetric under the transformation (x,s) → (-x,-s) for potentials with f(s)=f(-s). The critical points of the system (<ref>)-(<ref>) and their properties are shown in Table <ref>, and the corresponding eigenvalues of the Jacobian matrix are given in Table <ref>. The dynamics of this model is simple compared to that of interacting model I analysed in Section <ref>. The system (<ref>)-(<ref>) contains only two critical points and two non-isolated sets of critical points. The non-isolated set D_1 is completely independent of the scalar field for its existence and stability. On the other hand, the critical points D_2, D_3 and the non-isolated set D_4 depend on the specific choice of the scalar field potential for their existence and stability. As before, in what follows we analyze the cosmological dynamics separately for quintessence and phantom DE.

§.§ Quintessence dark energy (ϵ=1)

The properties of the critical points in Table <ref> with ϵ=1 are the following:

* Set D_1: This non-isolated set of critical points corresponds to a decelerated (q=1/2), matter dominated universe (Ω_m=1). It is always saddle.

* Point D_2: This point corresponds to a decelerated, scaling solution. It is always saddle, since the eigenvalues λ_1 and λ_2 are of opposite signs within the region of existence.

* Point D_3: The stability of this scaling solution is challenging to determine analytically due to the complicated expressions of the eigenvalues of the Jacobian matrix of the system evaluated at this point. Instead, we analyzed its stability numerically by plotting the regions where λ_1<0 and λ_3<0 in the (s_*,β) parameter space, as shown in Fig. <ref>. Looking at Fig. <ref> we can conclude that this point is always saddle, since the regions of negativity of the eigenvalues λ_1 and λ_3 are disjoint.
This is different from the result obtained in standard EC, where this point is a late time accelerated attractor <cit.>.

* Set of points D_4: The set of critical points D_4 exists only for s=0. It is also a normally hyperbolic set with one vanishing eigenvalue if f(0) ≠ 0. This set corresponds to a late time attractor only if β>-1 and f(0)>0. It is a stable node if β>-1 and 0<f(0)<(1-z)/4(β+1)^2, a stable spiral if β>-1 and f(0)>(1-z)/4(β+1)^2, and saddle otherwise. Further investigation is required when f(0)=0; we postpone this analysis until a particular potential is chosen. Again this set of points is interesting as it shows the effect of loop quantum corrections in explaining the late time accelerated universe.

From this analysis, we see that the universe can evolve from a matter dominated phase (point D_1) towards an accelerated scalar field dominated attracting set (set D_4) through a scaling solution (point D_2 or D_3). This scenario might be used to explain the late time transition of our universe from a matter dominated phase towards a dark energy dominated phase. Since the existence and stability behavior of points D_2, D_3 and D_4 depends on s_*, df(s_*) and f(0), we analyze their properties for particular scalar field potentials. In what follows we investigate again the two potentials considered in the examples above.

§.§.§ Example 1: V=V_0 cosh^-μ(λϕ)

For this potential we again have two copies of the critical points D_2 and D_3, since there are two solutions s_*=±μλ. As before, we assign D_i^+ for s_*=μλ and D_i^- for s_*=-μλ (i=2,3). Point D_1 does not depend on the choice of the potential, while the properties of the other critical points are as follows:

* Points D_2^+, D_2^-: As discussed in the general case, these points are saddle in nature, as their eigenvalue λ_1 is negative while λ_2 is positive for any value of s_*.

* Points D_3^+, D_3^-: Point D_3 is always saddle for both s_*>0 and s_*<0, as shown in Fig. <ref> for general potentials. This implies that points D_3^+, D_3^- are saddle in nature.

* Set of points D_4: This set corresponds to a late time attractor only if β>-1 and μ<0; otherwise it is saddle.

Fig. <ref> shows the evolution of the relevant cosmological parameters given suitable initial conditions. From this figure, we see that the universe can evolve towards an accelerated phase with q=-1 (set D_4) after a long lasting matter phase dominated by the interacting energy. From Fig. <ref> instead we see that the universe undergoes a transition to a DE dominated phase (set D_4 with z=0) from a long lasting matter phase dominated by DM (point D_1). In both cases, we find a long lasting matter phase (w_tot = 0), as required for the formation of cosmic structure, before the transition to DE domination prevails.

§.§.§ Example 2: V=M^4+n/ϕ^n

We recall that for this potential one finds s_*=0. Again the properties of point D_1 are independent of the choice of the scalar field potential, while the other critical points behave as follows:

* Point D_2: This point is non-hyperbolic but it behaves as a saddle, since its eigenvalues λ_1 and λ_2 are of opposite sign.

* Point D_3: This point does not exist for this potential, since s_*=0.

* Set of points D_4: This set of points is no longer normally hyperbolic, since now f(0)=0 and two eigenvalues vanish. The stability can be determined numerically by plotting the projection of trajectories near the set separately on the x, y, z and s axes (see for example Fig. <ref>).
We find numerically that trajectories approach points lying on this set only for n>0 and for any choice of β. This set can thus behave as a late time attractor.

§.§ Phantom dark energy (ϵ=-1)

We now turn our attention to phantom DE. The properties of the critical points in Table <ref> with ϵ=-1 are as follows:

* Point D_1: As in the case of quintessence, this point is always saddle.

* Point D_2: This point corresponds to an accelerated universe (if β>2) with negative DE density. It is stable when s_*√((β-1)/(β+1)) > 6/(β+1); otherwise it is saddle.

* Point D_3: As in the case of quintessence, the complicated stability of this scaling solution can only be determined numerically, by plotting the regions of negativity of the eigenvalues λ_1 and λ_3 in the (s_*,β) parameter space, as shown in Fig. <ref>. This point is again always saddle, since the regions where λ_1<0 and λ_3<0 are disjoint.

* Set of points D_4: This set corresponds to a late time attractor only if β>-1 and f(0)<0; otherwise it is saddle. Further investigation is required when f(0)=0; we postpone this until a particular potential is chosen.

The dynamics of this particular model is simple compared to interacting model I investigated in Section <ref>. From the above analysis, we see that the universe can evolve from a matter dominated phase (point D_1) towards an accelerated scalar field dominated phase (set D_4). This might explain the late time transition of the universe from a matter dominated phase to a DE dominated phase. However, since the existence and stability behavior of points D_2, D_3 and D_4 depend on s_*, df(s_*) and f(0), we analyze their properties for the two potentials considered in our examples.

§.§.§ Example 1: V=V_0 cosh^-μ(λϕ)

We recall again that for this potential we obtain s_*=±μλ, implying that two copies of points D_2 and D_3 are present in the phase space. As before, point D_1 does not depend on the particular form of the scalar field potential and its properties do not change. For the other critical points we have:

* Points D_2^+, D_2^-: Point D_2^+ is stable when √((β-1)/(β+1)) > 6/(β+1) and point D_2^- is stable when √((β-1)/(β+1)) < 6/(β+1); otherwise they are saddle.

* Points D_3^+, D_3^-: As we have seen in the general case, these scaling solutions are always saddle for s_*>0 and s_*<0 (see Fig. <ref>). This implies that points D_3^+, D_3^- are saddle in nature.

* Set of points D_4: This set corresponds to a late time attractor only if β>-1 and μ>0; otherwise it is saddle.

The phantom model can yield a particularly interesting dynamics, including finite stages of phantom domination. An example is given in Fig. <ref>, where the universe undergoes a transition from a matter phase dominated by the interacting energy to a final DE era characterized by oscillations of w_tot, which bounce the universe between quintessence and phantom domination before eventually stabilising to an effective cosmological constant behavior (w_tot = -1). The same model can however lead to a dynamics very similar to standard ΛCDM, as reported in the example in Fig. <ref>. A long lasting matter dominated phase is in fact followed by a smooth transition to DE domination, mimicking the effects of a cosmological constant.

§.§.§ Example 2: V=M^4+n/ϕ^n

For this example we again have s_*=0. The properties of the critical points are then:

* Point D_2: This is now a non-hyperbolic point with two vanishing eigenvalues. Its stability can be determined numerically by stream plotting trajectories on the (x,s) plane.
In fact, no matter the values of the model parameters, point D_2 is always a saddle in the (x,s) plane, implying that it can never be stable in general, as shown for example in Fig. <ref>.

* Point D_3: This point does not exist, since for this potential s_*=0.

* Set of points D_4: Again the stability of this set of points is verified numerically. Independently of the values of the model parameters, this set of critical points never represents a stable attractor, as can be seen in the example of Fig. <ref>.

We can conclude that the phantom field with the power-law potential has no finite late time attractor, implying that the final state of the universe is characterized by some critical point at infinity. This is in contrast with the example above for the hyperbolic potential, where both point D_2 and set D_4 could be stable.

§ COSMOLOGICAL IMPLICATIONS

In this section we extract the cosmological features obtained from the interacting DE models analysed above. The cosmological dynamics of both models is full of phenomenologically interesting solutions and moreover shows the contribution of loop quantum gravity corrections to both the late and early time behavior of our universe. In what follows we discuss each physically relevant solution separately:

* Late-time DE dominated solutions: These critical points are late time attractors characterized by a cosmic phase dominated by DE (Ω_ϕ = 1) with an effective EoS mimicking a cosmological constant behavior (w_tot ≃ -1). Although this kind of solution naturally arises for quintessence in EC, it does not appear explicitly in the two models investigated here. There are however other solutions which describe this cosmological behavior well. For example, in model I the set of points C_6 represents a late time attractor with w_tot = -1, as also shown in the example with a hyperbolic potential (see Fig. <ref>). The same situation emerges in model II, where the set of points D_4 acts as a late time attractor mimicking a cosmological constant behavior for both the quintessence and phantom fields (see Figs. <ref> and <ref>). In all these cases a smooth transition from a matter dominated phase to an effective DE era is attained, in agreement with the observed dynamics of our universe.

* Accelerating scaling solutions: These solutions are identified by a constant finite ratio between Ω_ϕ and Ω_m, implying that the DM energy density evolves at the same rate as the DE energy density. A scenario where a late time attractor is characterised by an accelerating scaling solution is commonly used to alleviate the cosmic coincidence problem, especially in models of interacting DE <cit.>. In the models analysed here several scaling solutions appear: for example critical points C_2 and C_3 in model I, or point D_2 in model II. Depending on the choice of the parameters, all these points can describe a stable accelerating scaling solution for either quintessence or phantom DE. Point C_3 is of particular interest since not only does Ω_ϕ scale as Ω_m, but the contribution due to loop corrections Ω_c also scales accordingly, remaining a non-negligible fraction of the energy content of the universe even at late times.

* Phantom behavior: The phantom regime is associated with w_tot < -1, which can indeed be attained by phantom DE in EC. In such cases however the final state of the universe is a big rip singularity at some finite time in the future. Loop quantum corrections are known for being able to avoid this singularity. An example of this situation is provided in Figs.
<ref> and <ref>, where phantom domination is turned into cosmological constant-like behavior at late times. Within this scenario the big rip is avoided and the universe eventually reaches an expanding de Sitter solution. Nevertheless, this still allows for a finite period of phantom regime, which characterizes the dynamics of the universe after a standard period of matter domination.

* Loop quantum effects: Loop quantum corrections are important whenever the term z = ρ/ρ_c is not negligible. This always happens at early times, when ρ approaches ρ_c and Ω_c dominates the energy budget of the universe (see Figs. <ref>, <ref> and <ref>). Note that although Ω_c dominates, the effective EoS w_tot is either zero or one, implying that the universe at early times is always well described by either a matter dominated phase or a stiff fluid dominated phase. On the other hand, late time loop quantum effects also appear. These are described by the sets of critical points C_6 and D_4, which can be used to describe the late time accelerated expansion of the universe, mimicking a cosmological constant behavior. For these solutions the DE energy density Ω_ϕ scales according to the contribution of the quantum correction Ω_c, similarly to a scaling solution (see e.g. Fig. <ref>).

* Possible observational signatures: The models investigated in this paper also present some other particular features which might provide distinguishing observational signatures with respect to the standard ΛCDM model. Clear examples are shown in Figs. <ref> and <ref>, where the effective EoS w_tot, instead of smoothly changing from zero to -1, presents oscillations with excursions into the phantom regime. This scenario could be constrained by future data, once astronomical observations better determine the dynamics and EoS of DE at higher redshift. From Fig. <ref> we also note that early DE might appear in the dynamics of model I. These scenarios might provide distinguishing observational features, especially at the perturbation level, with respect to ΛCDM, and can thus in principle be constrained by present and future observations.

§ CONCLUSION

In this paper we studied the dynamics of interacting quintessence and phantom DE in the LQC framework. The scope was to perform a complete dynamical system investigation of interacting DE with arbitrary scalar field potentials. We have considered two specific interactions between DE and DM, of the form αρ_mϕ̇ (model I; see Section <ref>) and βρ̇_ϕ (model II; see Section <ref>). For each of these interactions, we have analysed two forms of dark energy: the quintessence and the phantom fields. To characterize the dynamical properties of these interacting models beyond the standard exponential potential, we followed the well known approach of considering the quantity Γ (cf. Eq. (<ref>)) as a general function of the parameter s (cf. Eq. (<ref>)). Within this analysis the dimension of the resulting autonomous systems increases from three to four compared with the corresponding dynamical systems obtained assuming an exponential potential, and the number of critical points multiplies, making these new systems more difficult to study in detail.
Furthermore, in order to better understand the dynamics of these models, especially concerning non-hyperbolic critical points, in each case we considered two concrete non-exponential potentials, namely the hyperbolic potential V=V_0 cosh^-μ(λϕ) and the inverse power-law potential V(ϕ)=M^4+n/ϕ^n.

We found some interesting and unique features arising from these interacting dark energy models beyond the exponential potential. In model I with a quintessence field, we obtained one additional set of critical points C_6 (see Table <ref>), which does not appear in EC. This set in fact reflects the contribution of loop quantum gravity phenomena to a late time accelerated universe. We also observed in the case of quintessence that a transient period of phantom acceleration can be attained (see Fig. <ref>), predicting in this way some specific observational signatures of this model. Furthermore, for some choices of the model parameters, we found multiple late time attractors. These situations are interesting from the mathematical point of view, since the dynamics strongly depends on the choice of initial conditions and can be handled using bifurcation theory. In the case of the phantom field, the set C_6 is a late time attracting set only for the hyperbolic potential and has no equivalent in the exponential potential case <cit.>. This seems to imply that the scalar field potential plays an important role in determining the contribution of loop quantum effects in driving the late time accelerating universe. As in the case of the exponential potential, we also observe that future big rip singularities can be avoided, with the universe passing from phantom domination to an accelerating de Sitter phase (q=-1). On the other hand, for the power-law potential interacting model I cannot explain the observed late time behavior of the universe, as there are no late time accelerating attractors.

In interacting model II with a quintessence field, apart from a standard matter dominated point D_1, we obtained one additional late time accelerated set of critical points (set D_4) with contributions from loop quantum effects. We found that the scaling solution D_3 is a saddle, in contrast with standard EC, where it is a late time, accelerated scaling solution <cit.>. This model can also explain the late time transition from matter domination to a DE dominated phase (see Fig. <ref>). In the case of the phantom field, the set D_4 is a late time attracting set only for the hyperbolic potential but not for the power-law potential. This again suggests the important role of the scalar field potential in obtaining a late time accelerated universe in the framework of LQC. As for model I, with the power-law potential interacting model II cannot explain the late time behavior of the universe, as there are no late time accelerating attractors.

In conclusion, we found that not only can the results obtained with an exponential potential be recovered for a wider class of potentials, but new interesting phenomenology also appears (see Sec. <ref>). This is the case for both the quintessence and phantom DE fields, for which several interesting solutions have been derived: for example, late time DE domination, accelerating scaling solutions, phantom domination, and the DM to DE transition. The rich background dynamics obtained from the models investigated in this work may lead to new interesting signatures to look for in present and future observations.
The next logical step to further analyse these models would be to study the dynamics of cosmological perturbations (linear or non-linear) and to compare the results against observational data in order to constrain the theoretical parameters. Such an investigation is beyond the scope of the present paper and is left for future work.

J.D. thanks IUCAA for warm hospitality and research facilities. N.T. acknowledges support from an Enhanced Eurotalents Fellowship and the Labex P2IO.

Bennett:2003bz C. L. Bennett et al. [WMAP Collaboration], Astrophys. J. Suppl. 148 (2003) 1, doi:10.1086/377253 [astro-ph/0302207].
Spergel:2003cb D. N. Spergel et al. [WMAP Collaboration], Astrophys. J. Suppl. 148 (2003) 175, doi:10.1086/377226 [astro-ph/0302209].
Masi:2002hp S. Masi et al., Prog. Part. Nucl. Phys. 48 (2002) 243, doi:10.1016/S0146-6410(02)00131-X [astro-ph/0201137].
Ade:2013zuv P. A. R. Ade et al. [Planck Collaboration], Astron. Astrophys. 571 (2014) A16, doi:10.1051/0004-6361/201321591 [arXiv:1303.5076 [astro-ph.CO]].
Ade:2015xua P. A. R. Ade et al. [Planck Collaboration], Astron. Astrophys. 594 (2016) A13, doi:10.1051/0004-6361/201525830 [arXiv:1502.01589 [astro-ph.CO]].
Scranton:2003in R. Scranton et al. [SDSS Collaboration], astro-ph/0307335.
Riess:1998cb A. G. Riess et al. [Supernova Search Team], Astron. J. 116 (1998) 1009, doi:10.1086/300499 [astro-ph/9805201].
Tonry:2003zg J. L. Tonry et al. [Supernova Search Team], Astrophys. J. 594 (2003) 1, doi:10.1086/376865 [astro-ph/0305008].
Perlmutter:1998np S. Perlmutter et al. [Supernova Cosmology Project Collaboration], Astrophys. J. 517 (1999) 565, doi:10.1086/307221 [astro-ph/9812133].
Weinberg:1988cp S. Weinberg, Rev. Mod. Phys. 61 (1989) 1, doi:10.1103/RevModPhys.61.1.
Martin:2012bt J. Martin, Comptes Rendus Physique 13 (2012) 566, doi:10.1016/j.crhy.2012.04.008 [arXiv:1205.3365 [astro-ph.CO]].
Caldwell:1997ii R. R. Caldwell, R. Dave and P. J. Steinhardt, Phys. Rev. Lett. 80 (1998) 1582, doi:10.1103/PhysRevLett.80.1582 [astro-ph/9708069].
Sahni:2002kh V. Sahni, Class. Quant. Grav. 19 (2002) 3435, doi:10.1088/0264-9381/19/13/304 [astro-ph/0202076].
Carroll:1998zi S. M. Carroll, Phys. Rev. Lett. 81 (1998) 3067, doi:10.1103/PhysRevLett.81.3067 [astro-ph/9806099].
Caldwell:1999ew R. R. Caldwell, Phys. Lett. B 545 (2002) 23, doi:10.1016/S0370-2693(02)02589-3 [astro-ph/9908168].
Singh:2003vx P. Singh, M. Sami and N. Dadhich, Phys. Rev. D 68 (2003) 023522, doi:10.1103/PhysRevD.68.023522 [hep-th/0305110].
Elizalde:2004mq E. Elizalde, S. Nojiri and S. D. Odintsov, Phys. Rev. D 70 (2004) 043539, doi:10.1103/PhysRevD.70.043539 [hep-th/0405034].
ArmendarizPicon:2000ah C. Armendariz-Picon, V. F. Mukhanov and P. J. Steinhardt, Phys. Rev. D 63 (2001) 103510, doi:10.1103/PhysRevD.63.103510 [astro-ph/0006373].
Novosyadlyj:2013nya B. Novosyadlyj, O. Sergijenko, R. Durrer and V. Pelykh, JCAP 1405 (2014) 030, doi:10.1088/1475-7516/2014/05/030 [arXiv:1312.6579 [astro-ph.CO]].
Carroll:2003st S. M. Carroll, M. Hoffman and M. Trodden, Phys. Rev. D 68 (2003) 023509, doi:10.1103/PhysRevD.68.023509 [astro-ph/0301273].
Caldwell:2003vq R. R. Caldwell, M. Kamionkowski and N. N. Weinberg, Phys. Rev. Lett. 91 (2003) 071301, doi:10.1103/PhysRevLett.91.071301 [astro-ph/0302506].
cr C. Rovelli, Quantum Gravity, Cambridge University Press, Cambridge (2004).
Han:2005km M. Han, W. Huang and Y. Ma, Int. J. Mod. Phys. D 16 (2007) 1397, doi:10.1142/S0218271807010894 [gr-qc/0509064].
Thiemann:2002nj T. Thiemann, Lect. Notes Phys. 631 (2003) 41, doi:10.1007/978-3-540-45230-0-3 [gr-qc/0210094].
Ashtekar:2003hd A. Ashtekar, M. Bojowald and J. Lewandowski, Adv. Theor. Math. Phys. 7 (2003) no.2, 233, doi:10.4310/ATMP.2003.v7.n2.a2 [gr-qc/0304074].
Ashtekar:2006rx A. Ashtekar, T. Pawlowski and P. Singh, Phys. Rev. Lett. 96 (2006) 141301, doi:10.1103/PhysRevLett.96.141301 [gr-qc/0602086].
Bojowald:2001ep M. Bojowald, Class. Quant. Grav. 18 (2001) L109, doi:10.1088/0264-9381/18/18/101 [gr-qc/0105113].
Date:2004zd G. Date and G. M. Hossain, Class. Quant. Grav. 21 (2004) 4941, doi:10.1088/0264-9381/21/21/012 [gr-qc/0407073].
Ashtekar:2006uz A. Ashtekar, T. Pawlowski and P. Singh, Phys. Rev. D 73 (2006) 124038, doi:10.1103/PhysRevD.73.124038 [gr-qc/0604013].
Banerjee:2005ga K. Banerjee and G. Date, Class. Quant. Grav. 22 (2005) 2017, doi:10.1088/0264-9381/22/11/007 [gr-qc/0501102].
Ashtekar:2006wn A. Ashtekar, T. Pawlowski and P. Singh, Phys. Rev. D 74 (2006) 084003, doi:10.1103/PhysRevD.74.084003 [gr-qc/0607039].
Singh:2006sg P. Singh, Phys. Rev. D 73 (2006) 063508, doi:10.1103/PhysRevD.73.063508 [gr-qc/0603043].
Copeland:2005xs E. J. Copeland, J. E. Lidsey and S. Mizuno, Phys. Rev. D 73 (2006) 043503, doi:10.1103/PhysRevD.73.043503 [gr-qc/0510022].
Sami:2006wj M. Sami, P. Singh and S. Tsujikawa, Phys. Rev. D 74 (2006) 043514, doi:10.1103/PhysRevD.74.043514 [gr-qc/0605113].
Naskar:2007dn T. Naskar and J. Ward, Phys. Rev. D 76 (2007) 063514, doi:10.1103/PhysRevD.76.063514 [arXiv:0704.3606 [gr-qc]].
Samart:2007xz D. Samart and B. Gumjudpai, Phys. Rev. D 76 (2007) 043514, doi:10.1103/PhysRevD.76.043514 [arXiv:0704.3414 [gr-qc]].
Amendola:1999er L. Amendola, Phys. Rev. D 62 (2000) 043511, doi:10.1103/PhysRevD.62.043511 [astro-ph/9908023].
Billyard:2000bh A. P. Billyard and A. A. Coley, Phys. Rev. D 61 (2000) 083503, doi:10.1103/PhysRevD.61.083503 [astro-ph/9908224].
Chimento:2003iea L. P. Chimento, A. S. Jakubi, D. Pavon and W. Zimdahl, Phys. Rev. D 67 (2003) 083513, doi:10.1103/PhysRevD.67.083513 [astro-ph/0303145].
Cai:2004dk R. G. Cai and A. Wang, JCAP 0503 (2005) 002, doi:10.1088/1475-7516/2005/03/002 [hep-th/0411025].
Huey:2004qv G. Huey and B. D. Wandelt, Phys. Rev. D 74 (2006) 023519, doi:10.1103/PhysRevD.74.023519 [astro-ph/0407196].
Boehmer:2008av C. G. Boehmer, G. Caldera-Cabral, R. Lazkoz and R. Maartens, Phys. Rev. D 78 (2008) 023505, doi:10.1103/PhysRevD.78.023505 [arXiv:0801.1565 [gr-qc]].
Boehmer:2009tk C. G. Boehmer, G. Caldera-Cabral, N. Chan, R. Lazkoz and R. Maartens, Phys. Rev. D 81 (2010) 083003, doi:10.1103/PhysRevD.81.083003 [arXiv:0911.3089 [gr-qc]].
Pourtsidou:2013nha A. Pourtsidou, C. Skordis and E. J. Copeland, Phys. Rev. D 88 (2013) no.8, 083505, doi:10.1103/PhysRevD.88.083505 [arXiv:1307.0458 [astro-ph.CO]].
Boehmer:2015kta C. G. Boehmer, N. Tamanini and M. Wright, Phys. Rev. D 91 (2015) no.12, 123002, doi:10.1103/PhysRevD.91.123002 [arXiv:1501.06540 [gr-qc]].
Boehmer:2015sha C. G. Boehmer, N. Tamanini and M. Wright, Phys. Rev. D 91 (2015) no.12, 123003, doi:10.1103/PhysRevD.91.123003 [arXiv:1502.04030 [gr-qc]].
Koivisto:2015qua T. S. Koivisto, E. N. Saridakis and N. Tamanini, JCAP 1509 (2015) 047, doi:10.1088/1475-7516/2015/09/047 [arXiv:1505.07556 [astro-ph.CO]].
Fu:2008gh X. Fu, H. W. Yu and P. Wu, Phys. Rev. D 78 (2008) 063001, doi:10.1103/PhysRevD.78.063001 [arXiv:0808.1382 [gr-qc]].
Gumjudpai:2007fc B. Gumjudpai, arXiv:0706.3467 [gr-qc].
Roy:2014yta N. Roy and N. Banerjee, Eur. Phys. J. Plus 129 (2014) 162, doi:10.1140/epjp/i2014-14162-7 [arXiv:1402.6821 [gr-qc]].
Paliathanasis:2015gga A. Paliathanasis, M. Tsamparlis, S. Basilakos and J. D. Barrow, Phys. Rev. D 91 (2015) no.12, 123535, doi:10.1103/PhysRevD.91.123535 [arXiv:1503.05750 [gr-qc]].
Dutta:2015jaq J. Dutta and H. Zonunmawia, Eur. Phys. J. Plus 130 (2015) no.11, doi:10.1140/epjp/i2015-15221-3 [arXiv:1601.00283 [gr-qc]].
Dutta:2016dnt J. Dutta, W. Khyllep and E. Syiemlieh, Eur. Phys. J. Plus 131 (2016) no.2, 33, doi:10.1140/epjp/i2016-16033-7 [arXiv:1602.03329 [gr-qc]].
Escobar:2013js D. Escobar, C. R. Fadragas, G. Leon and Y. Leyva, Astrophys. Space Sci. 349 (2014) 575, doi:10.1007/s10509-013-1650-8 [arXiv:1301.2570 [gr-qc]].
Dutta:2016bbs J. Dutta, W. Khyllep and N. Tamanini, Phys. Rev. D 93 (2016) no.6, 063004, doi:10.1103/PhysRevD.93.063004 [arXiv:1602.06113 [gr-qc]].
Roy:2014hsa N. Roy and N. Banerjee, Annals Phys. 356 (2015) 452, doi:10.1016/j.aop.2015.03.013 [arXiv:1411.1164 [gr-qc]].
Dutta:2017kch J. Dutta, W. Khyllep and N. Tamanini, Phys. Rev. D 95 (2017) no.2, 023515 [arXiv:1701.00744 [gr-qc]].
Xiao:2011nh K. Xiao and J. Y. Zhu, Phys. Rev. D 83 (2011) 083501, doi:10.1103/PhysRevD.83.083501 [arXiv:1102.2695 [gr-qc]].
Coley A. A. Coley, Dynamical Systems and Cosmology (Kluwer Academic Publishers, Dordrecht Boston London, 2003).
swig S. Wiggins, Introduction to Applied Nonlinear Dynamical Systems and Chaos (Springer, New York Heidelberg Berlin, 1990).
lper L. Perko, Differential Equations and Dynamical Systems (Springer Verlag, 1991).
Shahalam:2015sja M. Shahalam, S. D. Pathak, M. M. Verma, M. Y. Khlopov and R. Myrzakulov, Eur. Phys. J. C 75 (2015) no.8, 395, doi:10.1140/epjc/s10052-015-3608-1 [arXiv:1503.08712 [gr-qc]].
Lazkoz:2007mx R. Lazkoz, G. Leon and I. Quiros, Phys. Lett. B 649 (2007) 103, doi:10.1016/j.physletb.2007.03.060 [astro-ph/0701353].
Nunes:2000yc A. Nunes and J. P. Mimoso, Phys. Lett. B 488 (2000) 423, doi:10.1016/S0370-2693(00)00919-9 [gr-qc/0008003].
Wetterich:1994bg C. Wetterich, Astron. Astrophys. 301 (1995) 321 [hep-th/9408025].
Holden:1999hm D. J. Holden and D. Wands, Phys. Rev. D 61 (2000) 043506, doi:10.1103/PhysRevD.61.043506 [gr-qc/9908026].
Quartin:2008px M. Quartin, M. O. Calvao, S. E. Joras, R. R. R. Reis and I. Waga, JCAP 0805 (2008) 007, doi:10.1088/1475-7516/2008/05/007 [arXiv:0802.0546 [astro-ph]].
Tamanini:2014nvd N. Tamanini, Dynamical Systems in Dark Energy Models, PhD thesis, University College London (2014).
Amendola:1999qq L. Amendola, Phys. Rev. D 60 (1999) 043501, doi:10.1103/PhysRevD.60.043501 [astro-ph/9904120].
Zhou:2007xp S. Y. Zhou, Phys. Lett. B 660 (2008) 7, doi:10.1016/j.physletb.2007.12.020 [arXiv:0705.1577 [astro-ph]].
Fang:2008fw W. Fang, Y. Li, K. Zhang and H. Q. Lu, Class. Quant. Grav. 26 (2009) 155005, doi:10.1088/0264-9381/26/15/155005 [arXiv:0810.4193 [hep-th]].
Steinhardt:1999nw P. J. Steinhardt, L. M. Wang and I. Zlatev, Phys. Rev. D 59 (1999) 123504, doi:10.1103/PhysRevD.59.123504 [astro-ph/9812313].
Guo:2004xx Z. K. Guo, R. G. Cai and Y. Z. Zhang, JCAP 0505 (2005) 002, doi:10.1088/1475-7516/2005/05/002 [astro-ph/0412624].
Wei:2010fz H. Wei, Nucl. Phys. B 845 (2011) 381, doi:10.1016/j.nuclphysb.2010.12.010 [arXiv:1008.4968 [gr-qc]].
Wei:2010cs H. Wei, Commun. Theor. Phys. 56 (2011) 972, doi:10.1088/0253-6102/56/5/29 [arXiv:1010.1074 [gr-qc]].
http://arxiv.org/abs/1708.07716v1
{ "authors": [ "Hmar Zonunmawia", "Wompherdeiki Khyllep", "Nandan Roy", "Jibitesh Dutta", "Nicola Tamanini" ], "categories": [ "gr-qc" ], "primary_category": "gr-qc", "published": "20170825125706", "title": "Extended Phase Space Analysis of Interacting Dark Energy Models in Loop Quantum Cosmology" }
Asymptotic Stability of the Landau-Lifshitz Equation

Amenda Chow
Department of Mathematics and Statistics, York University, [email protected]

Keywords: Asymptotic stability, Equilibrium points, Hysteresis, Lyapunov function, Nonlinear control systems, Partial differential equations

The Landau–Lifshitz equation describes the behaviour of magnetic domains in ferromagnetic structures. Recently such structures have been found to be favourable for storing digital data, and the stability of magnetic domains is important for this. Consequently, asymptotic stability of the equilibrium points of the Landau–Lifshitz equation is established, and a suitable Lyapunov function is presented.

§ INTRODUCTION

The Landau–Lifshitz equation is a coupled set of nonlinear partial differential equations. One of its first appearances is in a 1935 paper <cit.>, in which it describes the behaviour of magnetic domains within a ferromagnetic structure. In recent applications, structures such as ferromagnetic nanowires have appeared in electronic devices used for storing digital information <cit.>. In particular, data is encoded as a specific pattern of magnetic domains within a ferromagnetic nanowire. Consequently the Landau–Lifshitz equation continues to be widely explored, and its stability is of growing interest <cit.>. Stability of equilibrium points is also related to hysteresis <cit.>, and investigating stability lends insight into the hysteretic behaviour that appears in the Landau–Lifshitz equation <cit.>. Stability results are often based on linearization <cit.>; however, in the discussion that follows, asymptotic stability of the Landau–Lifshitz equation is established using Lyapunov theory. This approach is preferred because linearization leads only to an approximation.

The difficulty with Lyapunov theory often lies in the construction of an appropriate Lyapunov function, and working in infinite dimensions makes this more difficult; however, Lyapunov functions have been found for the Landau–Lifshitz equation <cit.>. In both these works, a Lyapunov function establishes that the equilibrium points of the Landau–Lifshitz equation are stable. The work in <cit.> is extended here and asymptotic stability is shown. In particular, a nonlinear control is shown to steer the system to an asymptotically stable equilibrium point. Control of the Landau–Lifshitz equation is crucial because it means the behaviour of the domain walls, which contain the encoded data, can be fully determined <cit.>. The control objective is to steer the system dynamics to an arbitrary equilibrium point, which requires asymptotic stability. This is presented in Theorem 4, the main result, in Section 3. A summary and future avenues are given in the last section. To begin, a brief mathematical review of the Landau–Lifshitz equation is presented next.

§ LANDAU-LIFSHITZ EQUATION

Consider a one-dimensional ferromagnetic nanowire of length L > 0. Let 𝐦(x,t) = (m_1(x,t), m_2(x,t), m_3(x,t)) be the magnetization of the nanowire at position x ∈ [0,L] and time t ≥ 0, with initial magnetization 𝐦(x,0) = 𝐦_0(x). These dynamics are determined by

  ∂𝐦/∂t = 𝐦 × (𝐦_xx + 𝐮) − ν 𝐦 × (𝐦 × (𝐦_xx + 𝐮)),   (1)
  𝐦_x(0,t) = 𝐦_x(L,t) = 0,   (2)
  ||𝐦(x,t)||_2 = 1,   (3)

where × denotes the cross product and ||·||_2 is the Euclidean norm. Equation (1) is the one-dimensional controlled Landau–Lifshitz equation <cit.>. It satisfies the constraint (3), which means the magnitude of the magnetization is uniform at every point of the ferromagnet. The term 𝐦_xx is the exchange energy; mathematically, it denotes the magnetization differentiated twice with respect to x.
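The constraint (3) is consistent with the dynamics (1): both terms on the right-hand side of (1) are cross products with 𝐦 and hence pointwise orthogonal to 𝐦, so d/dt ||𝐦||_2^2 = 2𝐦^T𝐦_t = 0 along solutions. The following symbolic check of this orthogonality is a sketch added for this presentation, not code from the paper; the symbol a stands in for 𝐦_xx + 𝐮.

import sympy as sp

m = sp.Matrix(sp.symbols("m1 m2 m3"))    # magnetization at a fixed (x, t)
a = sp.Matrix(sp.symbols("a1 a2 a3"))    # placeholder for m_xx + u
nu = sp.symbols("nu", nonnegative=True)  # damping parameter

rhs = m.cross(a) - nu * m.cross(m.cross(a))  # right-hand side of (1)
print(sp.simplify(m.dot(rhs)))               # prints 0, so ||m||_2 is conserved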
The parameter ν ≥ 0 is the damping parameter, which depends on the type of ferromagnet. The applied magnetic field, denoted 𝐮(t), acts as the control; hence, when 𝐮(t) = 0, equation (1) can be thought of as the uncontrolled Landau–Lifshitz equation. Neumann boundary conditions (2) are used here. Existence and uniqueness results can be found in <cit.> and references therein. Solutions to (1) are defined on ℒ_2^3 = ℒ_2([0,L]; ℝ^3) with the usual inner product and norm, on the domain

  D = { 𝐦 ∈ ℒ_2^3 : 𝐦_x ∈ ℒ_2^3, 𝐦_xx ∈ ℒ_2^3, 𝐦_x(0) = 𝐦_x(L) = 0 }.

The notation ||·||_ℒ_2^3 is used for the norm.

§ ASYMPTOTIC STABILITY

For 𝐮(t) = 0, the set of equilibrium points is

  E = { 𝐚 = (a_1, a_2, a_3) : a_1, a_2, a_3 constants with ||𝐚||_2 = 1 },

which satisfies the boundary conditions (2) <cit.>. Clearly E contains an infinite number of equilibria. A particular 𝐚 ∈ E is stable but not asymptotically stable <cit.>; however, the set E is asymptotically stable in the ℒ_2^3-norm <cit.>.

Let 𝐫 be an arbitrary equilibrium point in E with r_1 ≠ 0 (this condition is used in the proof of Theorem 4); since 𝐫 ∈ E, it satisfies ||𝐫||_2 = 1, that is, the constraint (3). Define the control in (1) to be

  𝐮 = k(𝐫 − 𝐦),   (4)

where k is a scalar constant. This is the same control used in <cit.>. For this control, 𝐫 is an equilibrium point of the controlled Landau–Lifshitz equation (1). It is shown in Theorem 4 that 𝐫 is locally asymptotically stable, which is the main result. Simulations demonstrating asymptotic stability of the Landau–Lifshitz equation under the control (4) are shown in <cit.>; a rough sketch of such a simulation is given below.
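The sketch assumes an explicit Euler discretization in time, centred differences with reflecting ghost points to impose (2), and a renormalization step to maintain (3) numerically. The discretization, parameter values, and initial data are illustrative choices made here, not taken from <cit.>.

import numpy as np

L, N = 1.0, 101
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]
dt = 2e-5                          # explicit Euler needs a small time step
nu, k = 0.5, 0.25                  # damping nu >= 0; gain k in (0, 1/2]
r = np.array([1.0, 0.0, 0.0])      # target equilibrium in E with r_1 != 0

theta = 0.3 * np.cos(np.pi * x / L)   # theta'(0) = theta'(L) = 0, so (2) holds
m = np.column_stack([np.cos(theta), np.sin(theta), np.zeros(N)])

def rhs(m):
    m_pad = np.vstack([m[1], m, m[-2]])   # ghost rows: reflection gives m_x = 0
    m_xx = (m_pad[2:] - 2.0 * m + m_pad[:-2]) / dx**2
    a = m_xx + k * (r - m)                # exchange term plus the control (4)
    return np.cross(m, a) - nu * np.cross(m, np.cross(m, a))

for step in range(250000):
    m = m + dt * rhs(m)
    m /= np.linalg.norm(m, axis=1, keepdims=True)   # re-impose (3) numerically

print(np.max(np.linalg.norm(m - r, axis=1)))   # shrinks as the run is lengthened

The renormalization only counters discretization error; as checked above, the continuous dynamics already preserve ||𝐦||_2 = 1.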
The following lemmas are needed in Theorem 4. Lemmas 1 and 2 appear in <cit.>; they can be obtained from the product rule and integration by parts, respectively.

Lemma 1. For 𝐦 ∈ ℒ_2^3, the derivative of 𝐠 = 𝐦 × 𝐦_x is 𝐠_x = 𝐦 × 𝐦_xx.

Lemma 2. For 𝐦 ∈ ℒ_2^3 satisfying (2), ∫_0^L (𝐦 − 𝐫)^T(𝐦 × 𝐦_xx) dx = 0.

Lemma 3. If 𝐫 ∈ E and 𝐦 satisfies (3), then ||𝐦 × 𝐫||_2 ≤ 1.

Proof. Recall ||𝐦 × 𝐫||_2 = ||𝐦||_2 ||𝐫||_2 sin(θ), where θ is the angle between 𝐦 and 𝐫. Since ||𝐦||_2 = ||𝐫||_2 = 1, it follows that ||𝐦 × 𝐫||_2 = sin(θ) ≤ 1.

Theorem 4. For any 𝐫 ∈ E, there exists a range of positive values of k such that 𝐫 is a locally asymptotically stable equilibrium point of (1) in the ℒ_2^3-norm.

Proof. Let B(𝐫, p) = { 𝐦 ∈ ℒ_2^3 : ||𝐦 − 𝐫||_ℒ_2^3 < p } ⊂ D for some constant 0 < p < 2. Note that since p < 2, −𝐫 ∉ B(𝐫, p). For any 𝐦 ∈ B(𝐫, p), the Lyapunov function is

  V(𝐦) = f(k)/2 ||𝐦 − 𝐫||_ℒ_2^3^2 + 1/2 ||𝐦_x||_ℒ_2^3^2,   (5)

where f(k) > 0 is a scalar function of k with |f(k) + k| ≤ 1 for all k > 0. Such functions exist; for example, f(k) = k for k ∈ (0, 1/2]. Taking the derivative of V and integrating the second term by parts, using the boundary conditions (2),

  dV/dt = ∫_0^L f(k)(𝐦 − 𝐫)^T 𝐦̇ dx + ∫_0^L 𝐦_x^T 𝐦̇_x dx
        = ∫_0^L ( f(k)(𝐦 − 𝐫)^T 𝐦̇ − 𝐦_xx^T 𝐦̇ ) dx,   (6)

where the dot notation means differentiation with respect to t. Letting 𝐡 = 𝐦 − 𝐫, the integrand in (6) becomes f(k) 𝐡^T 𝐦̇ − 𝐦_xx^T 𝐦̇, and with the control (4), equation (1) becomes

  𝐦̇ = 𝐦 × (𝐦_xx − k𝐡) − ν 𝐦 × (𝐦 × (𝐦_xx − k𝐡)).   (7)

Repeated use of the scalar triple product identity 𝐚^T(𝐛 × 𝐜) = 𝐜^T(𝐚 × 𝐛) gives

  𝐡^T 𝐦̇ = 𝐡^T(𝐦 × 𝐦_xx) − ν (𝐦 × 𝐦_xx)^T(𝐡 × 𝐦) − νk ||𝐦 × 𝐡||_2^2   (8)

and

  𝐦_xx^T 𝐦̇ = −k 𝐦_xx^T(𝐦 × 𝐡) + ν ||𝐦 × 𝐦_xx||_2^2 + νk (𝐦 × 𝐡)^T(𝐦_xx × 𝐦).   (9)

Substituting (8) and (9) into the integrand of (6) leads to

  f(k) 𝐡^T 𝐦̇ − 𝐦_xx^T 𝐦̇ = (f(k) − k) 𝐡^T(𝐦 × 𝐦_xx) − ν (f(k) + k)(𝐦 × 𝐦_xx)^T(𝐡 × 𝐦) − νk f(k) ||𝐦 × 𝐡||_2^2 − ν ||𝐦 × 𝐦_xx||_2^2,   (10)

and hence

  dV/dt = (f(k) − k) ∫_0^L 𝐡^T(𝐦 × 𝐦_xx) dx − ν (f(k) + k) ∫_0^L (𝐦 × 𝐦_xx)^T(𝐡 × 𝐦) dx − νk f(k) ||𝐦 × 𝐡||_ℒ_2^3^2 − ν ||𝐦 × 𝐦_xx||_ℒ_2^3^2.

By Lemma 2, the first integral equals zero since 𝐡 = 𝐦 − 𝐫. Applying the Cauchy–Schwarz inequality to the remaining integrand gives

  dV/dt ≤ ν |f(k) + k| ∫_0^L ||𝐦 × 𝐦_xx||_2 ||𝐦 × 𝐡||_2 dx − νk f(k) ||𝐦 × 𝐡||_ℒ_2^3^2 − ν ||𝐦 × 𝐦_xx||_ℒ_2^3^2.

Since 𝐦 × 𝐦 = 0, 𝐦 × 𝐡 = −𝐦 × 𝐫, so ||𝐦 × 𝐡||_2 = ||𝐦 × 𝐫||_2 ≤ 1 by Lemma 3, and therefore

  dV/dt ≤ ν |f(k) + k| ∫_0^L ||𝐦 × 𝐦_xx||_2 dx − νk f(k) ||𝐦 × 𝐡||_ℒ_2^3^2 − ν ||𝐦 × 𝐦_xx||_ℒ_2^3^2
        = ν |f(k) + k| ||𝐦 × 𝐦_xx||_ℒ_2^3^2 − νk f(k) ||𝐦 × 𝐡||_ℒ_2^3^2 − ν ||𝐦 × 𝐦_xx||_ℒ_2^3^2
        = ν (|f(k) + k| − 1) ||𝐦 × 𝐦_xx||_ℒ_2^3^2 − νk f(k) ||𝐦 × 𝐡||_ℒ_2^3^2.

Since |f(k) + k| ≤ 1,

  dV/dt ≤ −νk f(k) ||𝐦 × 𝐡||_ℒ_2^3^2.   (11)

Since k > 0 and f(k) > 0, dV/dt ≤ 0, with dV/dt = 0 if and only if 𝐦 × 𝐡 = 0. For example, suppose f(k) = k on k ∈ (0, 1/2], which satisfies |f(k) + k| ≤ 1. Then (11) becomes dV/dt ≤ −νk² ||𝐦 × 𝐡||_ℒ_2^3^2, which is clearly nonpositive, and k can be any number in the interval (0, 1/2]. For instance, picking k = 1/4, the Lyapunov function is V(𝐦) = 1/8 ||𝐦 − 𝐫||_ℒ_2^3^2 + 1/2 ||𝐦_x||_ℒ_2^3^2.

Revisiting (11), if 𝐦 = 𝐫, then 𝐡 = 0 and hence dV/dt = 0. On the other hand, dV/dt = 0 implies 𝐦 × 𝐡 = 0, and hence 𝐫 × 𝐦 = 0. This is the system of algebraic equations

  r_2 m_3 − r_3 m_2 = 0,
  r_3 m_1 − r_1 m_3 = 0,
  r_1 m_2 − r_2 m_1 = 0,

whose solution is m_2 = (r_2/r_1) m_1 and m_3 = (r_3/r_1) m_1 for any m_1, using r_1 ≠ 0. Given ||𝐦||_2 = 1 and ||𝐫||_2 = 1, this leads to 1 = m_1²(r_1² + r_2² + r_3²)/r_1² = m_1²/r_1², so m_1² = r_1² and hence m_1 = ±r_1, which gives m_2 = ±r_2 and m_3 = ±r_3; that is, 𝐦 = ±𝐫. Since −𝐫 ∉ B(𝐫, p), for V(𝐦) on B(𝐫, p) this implies 𝐦 = 𝐫 if dV/dt = 0. Local asymptotic stability follows from Lyapunov's Theorem <cit.>.
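The chain of vector-product manipulations in (8)-(10) is easy to mistype, so a symbolic verification of the pointwise identity (10) is worthwhile. The sketch below was written for this presentation and is not from the paper; b stands in for 𝐦_xx.

import sympy as sp

m = sp.Matrix(sp.symbols("m1 m2 m3"))
r = sp.Matrix(sp.symbols("r1 r2 r3"))
b = sp.Matrix(sp.symbols("b1 b2 b3"))   # plays the role of m_xx
nu, k, f = sp.symbols("nu k f")         # damping, control gain, f(k)

h = m - r
mdot = m.cross(b - k * h) - nu * m.cross(m.cross(b - k * h))   # equation (7)

lhs = f * h.dot(mdot) - b.dot(mdot)
rhs = ((f - k) * h.dot(m.cross(b))
       - nu * (f + k) * (m.cross(b)).dot(h.cross(m))
       - nu * k * f * (m.cross(h)).dot(m.cross(h))
       - nu * (m.cross(b)).dot(m.cross(b)))

print(sp.expand(lhs - rhs))   # prints 0: (10) holds as pure vector algebra

Notably, the check succeeds without imposing ||𝐦||_2 = 1, so the constraint (3) enters the proof only through Lemma 3.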
§ DISCUSSIONS

Asymptotic stability of an arbitrary equilibrium point of the Landau–Lifshitz equation with Neumann boundary conditions is shown in Theorem 4. This is established using Lyapunov theory, and the result extends the work presented in <cit.>.

The control given in (4) can be used to control the hysteresis that often appears in magnetization dynamics, including those described by the Landau–Lifshitz equation <cit.>. Figure 1 depicts the input-output map of the Landau–Lifshitz equation (1). The output is the magnetization 𝐦(x,t) = (m_1(x,t), m_2(x,t), m_3(x,t)), and the input is a periodic function, denoted û(t), with frequency ω; for each m_i with i = 1, 2, 3, the input is the periodic function 0.01 cos(ωt). For this periodic input, equation (1) becomes

  ∂𝐦/∂t = 𝐦 × (𝐦_xx + 𝐮) − ν 𝐦 × (𝐦 × (𝐦_xx + 𝐮)) + û(t).   (12)

As the frequency of the periodic input approaches zero, loops appear in the input-output map for m_1(x,t), which indicates the presence of hysteresis <cit.>. Because hysteresis is characterized by multiple stable equilibrium points <cit.>, the ability to control the stability of equilibrium points implies the ability to control hysteresis. Such a control for the Landau–Lifshitz equation is given in (4). This is a possible avenue for future exploration; a rough numerical sketch of the input-output experiment follows.
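The sketch reuses the discretization above and traces (û(t), m_1(x_0, t)) at the midpoint x_0 = L/2 over a few forcing periods; as ω decreases, the trace should close into a loop. All parameter values here are illustrative choices, not values from the paper, and 𝐮 is set to zero so that only the periodic forcing acts.

import numpy as np

def input_output_trace(omega, nu=0.5, L=1.0, N=101, dt=2e-5, periods=2):
    x = np.linspace(0.0, L, N)
    dx = x[1] - x[0]
    theta = 0.3 * np.cos(np.pi * x / L)
    m = np.column_stack([np.cos(theta), np.sin(theta), np.zeros(N)])
    us, ys = [], []
    for step in range(int(periods * 2.0 * np.pi / (omega * dt))):
        u_hat = 0.01 * np.cos(omega * step * dt)   # same scalar input on each m_i
        m_pad = np.vstack([m[1], m, m[-2]])        # Neumann conditions (2)
        m_xx = (m_pad[2:] - 2.0 * m + m_pad[:-2]) / dx**2
        m = m + dt * (np.cross(m, m_xx)
                      - nu * np.cross(m, np.cross(m, m_xx))
                      + u_hat)                     # forced model (12) with u = 0
        m /= np.linalg.norm(m, axis=1, keepdims=True)  # numerical regularization
        us.append(u_hat)
        ys.append(m[N // 2, 0])                    # output: m_1 at the midpoint
    return np.array(us), np.array(ys)

# Plotting ys against us for decreasing omega should show the curve closing
# into a loop, the signature of hysteresis described above.
for omega in (50.0, 5.0):
    u_trace, y_trace = input_output_trace(omega)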